I was making my way through the year's final batches of papers for my high-school history class, and it happened again. Reading a student's work, I went from thinking, "this isn't bad" to "this is a little more polished than I usually get" to "ugh, this bears the lifeless, robotic mark of the AI beast."

This sparked a cascade of feelings that I've had all too often this year. I don't get angry — I've been an educator long enough to understand that some teenagers are going to cheat, especially if the incentives encourage them to do so. There's some disappointment with the student who cheated, of course, especially if the perpetrator was someone I thought wouldn't do such a thing. But mostly, I feel irritated — not at the student but at the hassle I'm about to endure.

Now I have to go back through the student's work with a fine-toothed comb to decide whether the prose that activated my radar is really evidence of AI usage. I might go back to the Google Doc in which they wrote the paper and click through the revision history to see if I can find any suspicious events (like a whole page of text getting pasted in all at once). If I find enough evidence, I'll have to speak to the student, bring the case to the school's dean, and perhaps participate in a disciplinary meeting with the student's parents. The ten minutes I might have spent with this paper just spiraled out into hours of my time.

Other times, I can't find any bulletproof evidence of cheating. I might have a conversation with the student in question, but if that's inconclusive I'm left with a choice: trust my instincts and insist on a punishment for this student even though I might be wrong (accusing a student of cheating is a pretty serious thing and carries major consequences for them) or just give them a grade and move on with my life, even though I strongly suspect that the kid just pulled a fast one on me. Either way, there's a lot of gray area and a lot of room for me to doubt both myself and my student.

What I especially dislike is what happens when I read the next paper, and the one after that. My mind, already addled from hours of slogging through student work, starts to cast its wild-eyed, paranoid glance onto every paper. I wonder: how many of them are cheating? Am I only catching the sloppy ones? Every time I come across a smoothly written paper, I have to ask myself — did this student get better at writing, or are they cheating?

I don't like feeling this way. I don't like having to second-guess my students. I don't like feeling like I'm being made a fool. I don't like suspecting my students of cheating rather than being glad that they've improved their writing. I don't like having to be the interrogator or the cop with my students. I don't like the way AI is making education feel.

Some of you are no doubt wondering why I'm such a stick in the mud about AI. Why not embrace the future? Shouldn't we be encouraging students to use AI for their work?

Maybe. But the act of writing forces us to do a good kind of work. To successfully write an essay, or even a paragraph, we have to tame the disorder in our brains, make our thoughts linear and rational, and support our ideas with evidence. These things do not come naturally, and they take practice. Writing is a very useful form of practice; every time I write something, the act itself makes my ideas better.

I'm not eager to allow my students to cede even more mental territory to technology. They've already surrendered their attention to smartphones and social media apps. Now we're going to let them outsource reading and writing entirely? This doesn't sound like a good plan for a society that wants its citizens to be independent, critical thinkers in the future.

You might also argue that students have always cheated, and that this is no different. They used to plagiarize from books at the library, then they copied and pasted text from the internet, and now they have chatbots write their papers. They used to use CliffsNotes, and now they have ChatGPT summarize their homework reading. Nothing is new under the sun.

There's some truth to this, but it feels like we've crossed a threshold: the ease of cheating and the incentives to do it now outweigh the deterrents against academic dishonesty. There's no friction anymore, and the temptation to cheat must be overwhelming.

Imagine being 17 years old, grinding your way through a research paper in one browser tab. You know that, at any moment, you could open another tab and have a chatbot spit out a paper you could turn in for a reasonable grade. It would save you hours, and you could go play video games. How many of us would have resisted temptation at that age? How many students can we expect to resist now, in an age when their attention spans are shorter and their anxiety is higher?

On top of all this, if enough students are cheating and getting away with it (even chatbot slop will earn decent grades because grade inflation ensures that everybody who turns in passable work gets at least a B), who ends up feeling like the loser? We can tell teenagers that their hard work will pay off, that when they're grown up, they'll be happy they can think for themselves. But in the short run, the ones who stay up late and work hard for their grades will feel like idiots for doing school the right way.

I want to trust my students, not be their adversary. I don't want to spend hours investigating potential malfeasance. I don't want to set up a system in which they're rewarded for being dishonest. So… what do I do?

I suppose one solution is just to let them use AI. Surrender is tempting. It's the reason we didn't fight the incursion of smartphones into schools. It's the reason we loosened up our grades. Holding the line is hard; giving in is easy.

Some educators frame this not as surrender, but as an exciting embrace of the future. These students are learning the skills of the future, becoming more efficient workers, and no longer wasting time on antiquated skills. We need to teach them to use AI skillfully, not bury our heads in the sand.

Maybe. But I worry that what happens in practice will look a lot like this generation's experience with smartphones. Adults will give in and let kids use this technology, and the kids will not use these tools in wise or healthy ways. The harm will only show up when it's too late to do much about it. A generation won't learn to think for themselves, won't be able to read long texts, and won't be able to understand complex ideas. And if they can't do any of these things, what was the point of going to school?

Another possibility is to go medieval (or at least 1980s). Students could take multiple-choice tests in class, under my supervision, with no technology. They could take notes in five-subject notebooks and write essays in blue books with pencils. But this isn't satisfactory either.

I don't want to organize my entire course, including the way that I assess student learning, around principles that are only tangentially related to the values I think are most important. I want students to be able to do research, and I want my older students, especially, to be able to write thoughtful, more complex essays. If I orient my course around avoiding AI cheating, I run the risk of also avoiding some of my most important educational goals.

I could also lay traps and obstacles for students who cheat. There are a lot of ways to do this. Some teachers put tiny white text in their instructions that asks the chatbots to mention Batman in the third paragraph. When the student pastes the prompt into the chat box, the model will dutifully produce a Batman reference, and the student will be sunk. Alternatively, I could make students complete a laborious drafting process that would make it difficult for them to (entirely) rely on AI. Or require them to submit all of their notes, including annotated sources.

These methods might work, but some of them (like the Batman trick) will only weed out the dumbest cheaters while the more savvy scoundrels get away. Others (like a heavily scaffolded drafting process) would mean slowing down my courses (which already cover so much less ground than they did a decade ago) and putting a lot more work on me.

On top of that, they might not even work. A new AI product might soon be able to produce exactly the "proof" of honest work that I require, or students who are better at this stuff than me might easily figure out a way around my obstacle course. Maybe I'll end up doing a whole lot of extra work, with very little educational value to show for it.

Everybody's focusing on the flashy new technology, but I think the core problem here isn't AI. It's this: how do we get teenagers to do a lot of stuff they don't really want to do because it's good for them in the long run?

Our old system of ensuring this — grades and college admissions — is profoundly broken. The idea was that students would work hard because we dangled A's, honors courses, and spots in college in front of them. This worked for a while, but now we give out so many A's that they've become meaningless, and, anyway, it's far too easy for students to cheat their way to those A's. Meanwhile, the college admissions system has become increasingly sadistic as college itself grows less affordable. As a result, kids have grown anxious and cynical. We need a new approach.

One option would be to try to make grades matter less. The game of school has long been transactional: a student does some work, the school gives her some points. If she does great work, she gets more points, and the kid with the most points wins. Now the students are figuring out a way to get the points without doing the work, and they've lost sight of any reason to do the work other than to get the points.

So what if we started to de-emphasize grades? There are lots of hard things we do because they bring us satisfaction even if we receive very little tangible reward for doing them. I go running several times a week because I like the way the effort makes me feel, not because I'm ever going to win a race. Can we make grades and other results matter less and process matter more, if what we really want is for kids to spend time on the process even though it's hard?

Another option might be to find ways to make school more authentically interesting for kids. What if school were not a pressurized grind full of unpleasant tasks but instead an experience where students could explore their interests and enjoy the pleasures of learning? Rather than finding ways to incentivize kids for doing things they don't want to do, we might consider making school something kids actually enjoy doing.

Do I have a magical plan for achieving these changes in our educational system? No, not really. But AI presents us with a crisis that may force us to start thinking about how to re-orient school around intrinsic rather than extrinsic motivation.

It's a confusing time, but I'm becoming increasingly sure that AI has made the status quo unsustainable.
