AI and The Ownership of Teacher Work

Feb. 5, 2026

As readily available large language models and other tools become ubiquitous, real tensions have spiked over the work of teachers and the extent to which technology can be helpful. The acceleration of generative AI has both amplified and muddied the conversation. Some educators argue that they're not worried about teachers being replaced by large language models; others worry that such technology could undermine the intellectual and responsive work that must be done in service of children. This compounds the complicated ways everyone at the school level has had to wrestle with uses of AI.

For now, we can suspend the idea that educators will be replaced by AI (early evidence from the business world suggests we're far from that happening). The better question to ask right now is, "To what extent will AI make the work of teachers easier?" It's important to interrogate the different dimensions of teacher work, since various ventures have promised to make such work easier for practitioners and students. While skepticism has an important place in a field that centers children, I also recognize that the technology is already ubiquitous in our schools. In recent studies, for example, educators report caution about student use of AI while also using it more often in their own practice for lesson planning and assessment. For our purposes, then, I use three general categories to help us better understand how school-based educators in multiple systems have navigated new technologies through innovative policies and practices: assessment, curriculum, and pedagogy.

Assessment

Since the passage of the No Child Left Behind Act of 2001, the education research field has seen a deluge of data at the local, state, and federal levels. Schools have used these troves of student-specific data to make educational decisions, particularly for students in categories such as free and reduced-price lunch, students with disabilities, and students considered low achievers. From a policy perspective, these assessments created a norm-referenced system from which schools could better serve their students across the board. Standardized tests, however, have also been used inappropriately in decisions about teacher evaluations and school closures, with mixed results. For educators, though, assessment is not the same as testing; testing is only a narrow part of how we understand assessment. Fast forward to now, and better systems for data collection and analysis across multiple interim and summative assessments have been helpful for educators who use them in this way.

The conversation about assessment has been mostly subsumed by standardized testing rather than a multi-faceted approach to understanding whole-child learning. For many, LLMs hold the promise of helping to interpret large data sets. Some educators have built LLM-based tools that help them analyze large sets of data related to discipline. For example, in addition to writing a school-based AI implementation policy for her school, Dr. Annie Phan, Director of Diversity, Equity, & Inclusion and Dean of Student Belonging at the Charles Armstrong School in Belmont, CA, uses AI tools to help her analyze data sets on discipline, special education, and other factors related to student success. She says, "As someone who struggles with executive dysfunction, I think that AI tools will help me develop a sense of 'distributed cognition' (or even distributed metacognition) and interact with both internal and external processes that support my thinking and learning." Underneath this statement is the recognition that, when educators already have funds of expert knowledge, they can use these tools to enhance what they already know.

Curriculum

At this juncture, curriculum is the baseline set of materials teachers use to implement learning standards for, and hopefully with, students. Currently, we're seeing big shifts in how school districts think about curriculum, and the "science of" everything has had a growing impact on these conversations. Many districts have moved to "high-quality, scripted" curricula, as some research suggests that a strong curriculum is vital for student success. However, teacher autonomy is one of the benchmarks of teacher professionalism. Even in places where teachers have a vetted curricular option, many still opt to adapt it for their students and schools. Yet policymakers over the last few years have taken a stronger stance on curricular options and the materials within classrooms. While fidelity to curriculum matters for understanding the materials' effectiveness, educator buy-in also affects whether this approach is sustainable.

We’ve seen calls at the federal level to prepare students to advance artificial intelligence, whereas the previous presidential administration advocated for thoughtful AI implementation. (A clear difference is that President Biden’s executive order provided a clear framework with appropriate safeguards for using AI, while President Trump prefers less regulation and more streamlining of the rollout of AI tools.) To their credit, institutions like MIT have developed resources to help us better navigate the ethics of AI in K-12 classrooms. This all speaks to the caution, if not skepticism, with which educators have approached generative AI. In use cases, teachers have seen how AI can help brainstorm scaffolds for students with disabilities or multilingual learners. It may also help students develop enhancements for their projects, such as slides or images they otherwise couldn’t find. But, as Dr. Manuel Rustin, a social studies teacher in Pasadena, CA, says, “In terms of actual student learning, I don’t think it has been helpful at all. It can boost engagement temporarily if the assignment calls for creativity, but students generally seem less likely/willing than ever to read and to struggle through a difficult task when AI can easily do it for them. This is a shortcut generation and AI is the best shortcut out right now.”

The rise of generative AI has added complexity to our instructional designs, particularly lesson planning. While some start-ups (notably Playlab and Oak National Academy) have sought to use LLMs to engage teachers in thoughtful lesson planning with curriculum in mind, too many people envision LLMs as lesson-plan generators. Such a practice may prove detrimental to student and teacher engagement. “And in the rush to create and assess the product, we’ll miss the learning,” writes Chris Lehmann, principal and co-founder of Science Leadership Academy in Philadelphia, PA. “I don’t ever want to be in a classroom where a student asks a teacher, ‘What did you think of my work?’ and the teacher has to check to see what AI told the student before answering.”

Pedagogy

I typically think of pedagogy as the synergistic confluence of theory and action. Educators gather as much evidence about students as possible before, during, and after getting to know them, develop a theory about how students learn best, and implement the sum in classrooms. As such, the general consensus is that, yes, teachers as human beings develop the connective tissue to make this all work. Having content knowledge, good assessment tools, and a viable curriculum matter, and having a teacher who can deftly navigate these parts towards great student learning is the optimal goal.

Alexis Armstead, a special education teacher at Compass Charter School, said, “Unpacking units, creating intentional and responsive IEP accounts, and planning for instruction require deep thought, masterful facilitation, and teacher reflection which AI simply cannot do.” While AI policies have yet to reach the vast majority of schools, teachers have had to wrestle with implementation and uses on their own. She continues, “I have found that there are some teachers that use AI to lessen the intellectual work burden but the plans do not reflect the kids that are in front of them.”

Yet other narratives have emerged. For instance, the rise of Alpha schools has allowed tech enthusiasts to implement the “guide on the side” vision for teachers that we’ve heard for decades in technology spaces. This vision for teaching asserts that centering the tech in schools will release students from having to go at the same pace as their peers, providing for a limitless education. On the other hand, many teachers have tried to find ways to keep analog versions of their classrooms. This rift in AI implementation speaks not only to the growing ubiquity of these tools but also to the multiple schools of thought about what teaching is and does.

Conclusion

Anil Dash, a technology executive and advocate for ethics in technological advancement, recently wrote, “Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.” Ultimately, the widespread use of AI tools and their multiple use cases have complicated the narrative about their possibilities in and out of classrooms. Cognitive offloading (i.e., assigning mental tasks to a technology, including an LLM) applies to educators as well. The deep thinking necessary to pull together a cohesive learning experience with compassion and identity in mind comes from humans, individually and collectively. Unfortunately, the ratio of hype to value has been highly disproportionate. Still, for educators looking to use it (and for policymakers looking to sponsor it), gen-AI can assist with the overwhelming nature of teaching.

From data analysis and differentiation to lesson planning, gen-AI in schools can be implemented with policy-based and ethics-based safeguards. Without those safeguards, however, as with labor in any field, we risk losing the human-centered intention of education.