Old Lessons for a New AI Moment in Community Colleges
Blog Post
Feb. 11, 2026
Late last year, I had a conversation with a community college president who told me something intriguing. Rather than asking staff to analyze data for him, he was using a large language model to ask questions of the dataset himself. It was a revelation. College leaders are often slow to use data effectively, and when they do, it usually involves staff building spreadsheets or dashboards that are clunky, static, and never quite answer the questions leaders actually have.
That conversation opened my eyes to the possibility that AI could create a new kind of integration across antiquated community college technology systems—one I hadn’t fully considered before. It also reminded me of an earlier body of my own work on predictive analytics, which, in many ways, laid the foundation for the AI tools we have today.
Between 2016 and 2021, I wrote or oversaw five publications designed to support colleges in using predictive analytics or AI. These publications hint at what community colleges should consider as they evaluate the use of AI to improve their technical systems. Here are some lessons from that work that might help colleges in the current AI moment:
Institutional Capacity, Technology Infrastructure, and Legacy Systems
Community college legacy technology systems, such as student information systems and learning management systems, are often defined by what they can't do. They can't match non-credit to credit enrollment. They can't allow easy analysis of student performance data. They can't reach out to prospective students and help them enroll. And, unless you buy a certain product, they can't tell when students might be in danger of dropping out.
That may change with the advent of more capable AI. Instead of trying to fix enterprise systems, we now have the potential to layer tools on top of them, enabling ad hoc data analysis, tracking student pathways, identifying struggling learners, and reaching out to students who need support. To be clear, many vendors have promised this utopia for years, including with predictive analytics; however, today's AI models have become powerful enough that we might actually achieve it.
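To make that layering concrete, here is a minimal sketch of the pattern the president described: pull an export from a legacy system and let a large language model answer a plain-language question about it. It assumes the OpenAI Python SDK and a hypothetical CSV export; the file name, the column semantics, and the ask_about_enrollment helper are all illustrative, not a description of any specific product.

```python
# A minimal sketch of "layering" an LLM over a legacy system's data export.
# Assumptions (not from this post): the student information system can export
# enrollment data to CSV, and the OpenAI Python SDK is installed.

import pandas as pd
from openai import OpenAI

def ask_about_enrollment(csv_path: str, question: str) -> str:
    """Summarize a legacy-system export and ask an LLM a question about it."""
    df = pd.read_csv(csv_path)

    # Send only an aggregate summary, not row-level student records,
    # to reduce the privacy risks discussed later in this post.
    summary = df.describe(include="all").to_string()

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a data analyst for a community college."},
            {"role": "user",
             "content": f"Dataset summary:\n{summary}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_about_enrollment(
        "sis_enrollment_export.csv",  # hypothetical export file
        "Which programs show declining enrollment over the last three terms?",
    ))
```

Note that the sketch passes the model an aggregate summary rather than individual student records, a design choice that anticipates the privacy concerns discussed below.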
But we have to be careful. Ten years ago, in The Promise and Peril of Predictive Analytics in Higher Education, my colleague and I outlined how colleges were using predictive data, including in student supports, adaptive learning, and enrollment management. We wrote about how these tools could support students, but also how they might harm students if used poorly.
Now, the possibilities and risks have been supercharged by the advent of more advanced AI, but many of the concepts stay the same. Colleges and AI users must be clear about what these systems are for and must minimize the possible harm to students. And all of this must happen in the face of the tantalizing possibility of using AI to integrate systems in a way they never have been before.
Ethical Use of Data & AI
The ethics of using AI tools are already complex, given their environmental impact. When these concerns are layered on top of the use of sensitive student data in these systems—and the risk of incorrect outputs—the need for scrutiny becomes even more urgent. If these tools get something wrong, it can result in poor outcomes, such as misdirecting institutional resources, corrupting student records, or identifying students for inappropriate interventions that could actually harm them. These systems must be used to protect students, not just to optimize institutional outcomes.
A recent survey found that 93 percent of higher education faculty and staff plan to use AI in their work over the next two years. As AI systems ingest ever larger amounts of student data for matching, modeling, and decision-making, institutions without clear policies and governance risk privacy violations, diminished trust, and harmful outcomes. Modern AI models can amplify structural inequities, embed unfair predictions, and generate opaque outcomes that are hard to question. Mitigating algorithmic bias and ensuring equitable model behavior should be core requirements for any AI deployment in college systems.
Colleges must develop governance structures, ethics review processes, policies, and mechanisms to evaluate performance (can we trust they are accurate and not hallucinating?) and harms before adopting and scaling AI systems. Without this infrastructure, institutions risk deploying sophisticated AI tools without sufficient oversight, leading to consequences like student misclassification or privacy breaches.
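As one small example of what such a mechanism might look like in practice, here is a sketch of a pre-deployment spot check: run the tool against a set of human-verified question-and-answer pairs and measure how often it gets them right. The reference file, its fields, and the answers_match helper are hypothetical; a real governance process would use human review and far stricter matching.

```python
# A minimal sketch of one governance mechanism: spot-checking an AI tool's
# answers against a human-verified reference set before trusting it at scale.
# The file format and helper names here are hypothetical.

import json
from typing import Callable

def answers_match(model_answer: str, expected: str) -> bool:
    """Naive containment check; real review would use human raters or stricter rules."""
    return expected.strip().lower() in model_answer.strip().lower()

def evaluate(reference_path: str, ask_model: Callable[[str], str]) -> float:
    """Return the share of human-verified reference questions the tool answers correctly."""
    with open(reference_path) as f:
        cases = json.load(f)  # e.g., [{"question": "...", "expected": "..."}, ...]

    correct = sum(
        answers_match(ask_model(case["question"]), case["expected"])
        for case in cases
    )
    return correct / len(cases)

# Usage: if evaluate("verified_qa.json", my_tool) falls below a threshold the
# governance team has agreed on, pause the rollout and investigate.
```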
Our guide to implementing predictive analytics tools at colleges, Predictive Analytics in Higher Education: Five Guiding Practices for Ethical Use, outlined steps and considerations for schools using this technology effectively. These steps included having a vision for what success looks like and a team to make key decisions; building a supportive infrastructure; ensuring proper data use; addressing bias; and intervening with care. Those practices apply just as well as we consider effective uses of AI in colleges.
And our Choosing a Predictive Analytics Vendor: A Guide for Colleges lays out in even more detail how colleges can ensure an AI or predictive analytics tool fits their staff's needs, is transparent in its data use, protects privacy and security, includes performance metrics in the contract, and supports evaluation and professional development. The gap in technical knowledge between the staff purchasing AI-enhanced systems and the vendors selling them has only grown, so the questions and considerations outlined in that report remain incredibly relevant.
Connecting It to Students
As AI becomes integrated into student supports, advising, and student information systems, institutions need to be transparent about what data AI systems use. They must also explain how and why those systems make particular recommendations or decisions and communicate this information to students and users in accessible ways to build trust.
Given the importance of student involvement in policies on these topics, we wrote about how students feel about colleges collecting and using their data. Keeping Student Trust: Student Perceptions of Data Use Within Higher Education found that students want colleges to be transparent about how they collect and use student data, ensure students understand and consent to those practices, and prioritize privacy protections in institutional decision-making.
We also wrote about how to message interventions to students to both minimize the risk of harm and maximize the likelihood of behavior change. How You Say It Matters: Communicating Predictive Analytics Findings to Students lays out the science of effectively communicating predictive analytics and then provides real-world examples of how a college could craft messages for students. As AI systems intersect more with student life, communicating with care is more important than ever.
Bottom Line
With careful implementation, we can see a bright future where community college technical infrastructure functions much better than it currently does, allowing staff to focus on what matters most: the students. But there is much work to be done. We must outline the different use cases and their risks and rewards. We must think through the appropriate governance for these systems, ensuring they provide accurate information and preserve privacy. And we must think about how to support staff in using these systems effectively. The good news is that many of these concepts have already been explored; they just need to be updated for the current AI reality.