The Serenity Prayer

Like many other schools across the state of Minnesota, we at BlueSky School have struggled with the dual opportunities and threats of Large Language Models (LLMs) and Artificial Intelligence (AI) for the last three years. Last May, we presented our struggles, our guiding philosophy, and our paths forward for teaching in this new environment.

In turbulent times, I take solace in the Serenity Prayer, written by Reinhold Niebuhr. It reads:

[…] [G]rant me the serenity to accept the things I cannot change,
the courage to change the things I can,
and the wisdom to know the difference.

The wisdom I take from this and his other writings is that there are going to be things in the world—evils, obstacles, barriers—that we cannot hope to neutralize. We need to focus on the tools we have, and the things we can do, to make the world better. LLMs and AI inhabit a strange space where they are both a barrier and a tool.

It is easy and natural to take a strong, censorious stance against using AI, and that stance is often correct. We want to give students the skills to comprehend the world around them, think critically and creatively, and make positive changes in their lives. If they instead rely on AI to do these things for them, they do so at the expense of their autonomy and personal growth.

But we have to be honest about the world as it is: using LLMs and AI tools can have benefits beyond serving as mere shortcuts. If we teach students responsible, ethical, and thoughtful uses of LLMs and AI, they will be able to harness the technology to accomplish the things we hope for them.

Who We Are

As a 100% online charter school in Minnesota, we are in a unique position when it comes to student use of Artificial Intelligence. Our staff can teach students for entire semesters without ever meeting them face-to-face or sharing a room. Our 11th- and 12th-grade classes are almost entirely asynchronous, so we are very familiar with when and how students try to plagiarize their schoolwork. But we were a little unprepared, two years ago, when ChatGPT (GPT-3.5) suddenly appeared on the scene.

Where We Were

Suddenly, our students were very verbose, even in 7th grade. When confronted, students would often confess, but some became indignant, and some doubled down in downright confusing ways. We were frequently at an impasse: students would submit work written at a college level while their messages defending it lacked basic grammar and punctuation, and they would rather fail than admit to using AI. If we couldn’t have an honest conversation about AI use, it was going to be impossible to move forward. We realized we were being overwhelmed and had to do something more proactive.

Philosophy 

We met as a team to figure out where to go from here. My co-presenter Emily Torvik, who had done some work with AI at Concordia University, along with English teacher Amee Wittbrodt and science teacher David Bjorklund, came up with some guiding principles as we explored how to solve this problem.

  • AI is inevitable, (un)fortunately. AI has been part of the back end of the Internet for years: in spellcheck and writing suggestions, turn-by-turn directions, and the algorithmic sorting that curates your “feed.” Moreover, LLMs are already heavily used in web search and customer support, and social media companies and users are actively sharing AI-generated content, sometimes with bizarre results. Finally, AI already outperforms the average human on many basic tasks. So we need to teach about AI so students can recognize when it is being used and know when they can use it appropriately.
  • AI is moving (too) quickly. The AI industry is still growing rapidly, and many companies offer competing products. As I write this, DeepSeek, a relatively unknown Chinese company, has just debuted a model that surpasses OpenAI’s best in efficiency and effectiveness. We need flexible, open policies that do not back us into a corner with assumptions about AI’s capabilities or sources.
  • AI checkers are counterproductive. Multiple studies have shown that AI-generated text cannot be reliably detected; AI detectors are “neither accurate nor reliable.” Instead of arguing over word choice, we want to focus on the skills and mastery a student needs to demonstrate to pass the course.

What We Did

First, we updated our plagiarism and academic dishonesty policies to include AI. Next, we created an introductory lesson in our English courses that details the policy alongside examples of positive uses for AI. Feel free to take it! Then, the English department adopted a policy of monitoring student compositions and emphasizing the writing process: students are told explicitly how we monitor their drafting, and we use Google Docs’ “Version History” and similar tools to watch compositions take shape. We also began a series of recurring training sessions for staff on confronting students who use AI and on ways to use AI positively in the classroom. This series continued throughout the year, blossoming into interdisciplinary staff book clubs about the ethics of AI.

Where We Are Now

We’ve implemented several quizzes, assignments, and projects about AI in our courses.

We are continuing to explore ways for students to use AI ethically, including setting clear expectations, providing instructions for citing AI, and creating AI-“proof” questions. As we continue to grapple with the implications of AI in education, we invite the MCTE community to share what’s working, what’s not, and how to keep moving forward.

Learn more about the author on our 2025 Contributors page.