First, some context: I’ve been an English teacher for over two decades. For about ten years, I’ve taught English online in an asynchronous setting. I’ve only taught in a traditional setting for one semester in my career and quickly gravitated towards student populations who struggle. I find myself always seeking out the most challenging areas in our field and trying to be part of the solution.
I never really liked school. There are a lot of hoops to jump through, and so often it seemed like learning wasn’t as important as how well one could navigate the system. Credits and grades took precedence over what I knew. It was frustrating. Much of my drive to become a teacher was related to those early experiences. I wanted things to change and hoped to be a part of that change.
To meet the needs of my learners in alternative settings, I discovered the world of EdTech. Building in efficiencies and using Learning Management Systems to provide more individualized options to help students meet requirements changed what was possible when teaching classes that included students in seventh through twelfth grades. I evolved. I learned about engaging design, accessibility, and most importantly, Open Educational Resources.
Things went pretty well for a long time, and then a few years ago, a bomb dropped. I started seeing really strange student submissions: perfectly formatted essays of a kind very few of my twelfth-grade students had ever bothered to submit. Now I was seeing a few a day. Speaking with other educators experiencing the same thing, we quickly discovered ChatGPT. I was hesitant to sign up for an account because they wanted my phone number, but then I remembered that phone books used to have all our phone numbers! I signed up and started playing around. My heart sank. If a student had access to this, then what were we going to do?
Luckily, students using this tool weren’t very adept at it. But I was intrigued. I started playing around with my prompts, writing from different perspectives with different tones and different details. The results were surprising considering the previous state of AI. I could get these tools to generate responses that sounded like a student, complete with personal connections, examples, and insights. I started searching for red flags and hallmarks I could use to determine whether text was generated by a Large Language Model (LLM). While those red flags exist and I use some of them now in my rubrics to score student work, it will not be long before students become better at prompting, making it harder and harder to determine authenticity. Would we really be able to AI-proof our courses? Especially an online asynchronous one like mine? We also now know that detection tools are not reliable, nor are most people able to tell the difference.
The overwhelming pressure on social media in education circles was to try these new tools! Use them to do things for you! “Here are the 120 best AI tools you have to start using or you’re falling behind!” I was skeptical. When has any new thing been all it was cracked up to be, much less support marginalized populations? I was already seeing new inequities being established just with the simple existence of these tools.
If we trained these tools on what humans produced, and we know that humans are biased, then what were these new tools hitting the market giving us? Sexism, racism, homophobia, and more. They are a mirror reflecting our worst stereotypes back upon us in ways that sound like us. I wasn’t seeing that message shared with the same vigor as the messages pressuring us to accept the inevitability and just use them!
Frank Herbert once wrote, “We pay for the mistakes of our ancestors.” Here we are, doing it again: our past coming back to haunt us in the form of AI-produced content.
I decided to do what I always do: take the path of most resistance. I got vocal. I pushed back. I spoke to any groups who would listen. There are more of us now, but with the adoption of AI vendor products, I fear that it is another losing battle. The pressure to use them and teach students to use them is ubiquitous.
I read studies and develop strategies to mitigate the bias in these responses, but how many people are really out there doing that? I can tell you that it is possible in many cases, but not in all; getting image generators to produce inclusive results is particularly difficult. I can show you default LLM lessons alongside those created using my strategies, and you can see the stark contrast. It is possible to create something that aligns more closely with our objectives and values, but it takes a little more effort. People gravitating to AI aren’t really looking for more effort; they want to reduce their already herculean efforts!
You can layer your conversations with an LLM to get responses more reflective of our values by including guidelines: that the language be more inclusive of marginalized communities, incorporate Social Emotional Learning frameworks, integrate Universal Design for Learning principles, and demonstrate empathy and care.
You will get something much closer to what you are looking for, but it also depends on the prompter’s ability to know which layers to add and the expertise to evaluate if those requests are truly being integrated into the responses.
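For readers who want to experiment, that layering can be sketched in code. This is a minimal illustration, not any particular tool’s API: the layer names, guideline wording, and the `build_layered_prompt` function are hypothetical constructions of mine, and the composed prompt would still need to be sent to whatever LLM you use and its output evaluated by a person with the expertise described above.

```python
# A minimal sketch of "layering" a prompt with guidelines. The layers below
# mirror the ones named in the text: inclusive language, Social Emotional
# Learning (SEL), Universal Design for Learning (UDL), and empathetic tone.
# All names and wording here are illustrative, not from any specific tool.

GUIDELINE_LAYERS = {
    "inclusive": "Use language that is inclusive of marginalized communities.",
    "sel": "Incorporate Social Emotional Learning (SEL) frameworks.",
    "udl": "Integrate Universal Design for Learning (UDL) principles.",
    "empathy": "Demonstrate empathy and care in the language.",
}

def build_layered_prompt(task: str, layers: list[str]) -> str:
    """Compose a prompt from a base task plus the selected guideline layers."""
    selected = [GUIDELINE_LAYERS[name] for name in layers]
    guidelines = "\n".join(f"- {g}" for g in selected)
    return f"{task}\n\nFollow these guidelines in your response:\n{guidelines}"

# Example: a lesson-drafting task with all four layers applied.
prompt = build_layered_prompt(
    "Draft a ninth-grade lesson introduction on persuasive writing.",
    ["inclusive", "sel", "udl", "empathy"],
)
print(prompt)
```

The composition step is the easy part; the hard part remains checking the model’s response to confirm the guidelines were actually honored rather than merely echoed.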
Back to my learners… I’ve been paying close attention. I have now graded thousands of AI submissions. I’ve been creating lessons that teach about generative AI. Is there anything that might convince learners to shy away from AI tools for offloading their thinking? I inform my students about the AI companies’ copyright violations, the bottle of water consumed with each interaction, and the reduced critical thinking skills linked to young people’s use of AI. I’ve been surveying them about concerns and effects like these, and you should know something: most of them do not care. Most of the students who use LLM to produce their work see school as an obstacle to overcome, not something that is valuable to their own growth, development, and learning. We need to do something about that.
Despite what you may have heard, this isn’t like anything that has happened before. It’s not like calculators, spellcheck, the Internet, cell phones, etc. This directly impacts existing challenges with critical thinking, engagement and learning. It’s going to take a collaborative effort to make the systemic changes necessary to adapt to this new world and reengage learners. Let’s all make a commitment to be a part of the solution by sharing our struggles and success stories.
You should also know something else about me. I use AI tools regularly. I am also an educational consultant who trains professionals to integrate LLM into their workflows. I believe they can serve an important role in making us more efficient, so long as they serve us as assistants and not as a source of knowledge. Their capacity to improve curriculum for students who need extra support, and to meet accommodations and modifications, is remarkable. LLM are useful for having conversations with the information we upload. I can talk to an openly licensed study about reading modalities and what that means for students who say they don’t like to read. I can use frameworks and documents to make lessons more inclusive and to help me align materials to academic benchmarks. They have a place, just not ours.
Learn more about the author on our 2025 Contributors page.