Today is the deadline for logging students as NA, or non-attending, and a third of my students aren't even real.

I am an adjunct faculty instructor at a community college. I teach in-person, online synchronous, and online asynchronous Art History and Art Appreciation courses. My current summer course load includes an online asynchronous Art Appreciation course. This is a course where all of the content is online and there are no specific live meeting times (such as a weekly Zoom seminar). Students access all readings and videos for the course, and submit all discussions and assignments, via our Canvas site. Since it's online asynchronous, I measure attendance by the amount of time a student spends logged into the Canvas site and the assignments they have submitted.

This time, there was something funny in the assignments.

Case 1: A Failed Scavenger Hunt

The first assignment students do is meant to help them learn about specific formal qualities in art, such as symmetry and contrast. Instead of having them write out definitions, I ask them to take photographs around their homes, neighborhoods, or campus that fulfill certain criteria (such as a photograph showing linear perspective) and submit them as a PowerPoint. Of the eighteen students currently enrolled in my course, five submitted a single photograph taken from the web instead. Another submitted text that was only barely relevant to the assignment. Four students completed the assignment as instructed, and eight did not turn in anything at all.

Now, the eight students who didn't submit at all could be anything: they missed the deadline, they're waiting until the last minute to submit all their coursework at once (lateness penalty be damned), they thought the assignment was dumb and not worth the points, whatever. Or they could be spambots; I haven't ruled that out, since it's not possible to tell at this point. Instead, I want to take a look at the students who turned in just the still images.

A photograph by Jane Allan that was submitted in place of an assignment by one of the AI spambots.

The images they turned in were disparate and random: a Claude Monet painting, a textured wall, abstract black lines on a white background, two photographs of models staring at the camera. You might ask, "But if multiple students misunderstood the assignment, maybe that's the fault of the assignment sheet?" True, but I've taught this class before and never had this issue with students misunderstanding what I was asking for. It's not just that they submitted something incorrect, but that multiple 'students' ignored what the assignment asked for in the same peculiar way. While not conclusive, it made me curious about the attendance of these 'students' in the course.

Canvas analytics allows you to see the amount of time students spend in a course site. The average real student will have at least an hour in the course site during the first week. That only counts the time students are accessing the web pages, so if a student downloads an assignment sheet and then closes out Canvas to work on their assignment, that would not be counted. But what about these students with the weird assignment submissions?

Seven minutes. Nine minutes. Eleven minutes. All fewer than twenty minutes in the course site, despite submitting multiple assignments. Now, that doesn't seem right. So, what's going on here?

Explaining the Scam

I was first tipped off to the presence of AI-powered spambots by my department chair, but I had to look into it to understand exactly what was happening. This has been a noted problem in California Community Colleges for a few months now. The scam is for some mastermind to register a fake student to collect financial aid, enroll that spambot student in courses, and let AI handle submitting some of the assignments. This allows the person behind the spambot to collect financial aid money for as long as the spambot is enrolled in the course.

This is key because it is a legal state and federal compliance requirement for faculty to report Non-Attending students to the college. So, if the spambot is enrolled but doesn't turn anything in, I can report the student as Non-Attending and the student will be removed from the course and lose the financial aid. If they submit something, anything, even if it's garbled nonsense that I fail with a 0, the student is still considered "attending" and can stay enrolled longer (just like a real student who is failing a course) to collect more money.

According to journalist Adam Echelman:

"They're called 'Pell runners' — after enrolling at a community college they apply for a federal Pell grant, collect as much as $7,400, then vanish."

AI Spambots vs. AI Cheaters

So what's the difference between AI spambot students and ordinary students who cheat using AI? It all comes down to intention. An ordinary student who uses AI to cheat wants to go unnoticed and get a good grade in the course. They want to get out of doing the work required to complete the course (or at least some of it), but they fundamentally want the grade and the course credit at the end. They may use AI only some of the time, and may put effort into disguising their use of AI. AI cheaters will still act like normal students in other ways, such as corresponding with the professor (especially if they are accused of cheating) and accessing web pages.

On the other hand, the AI spambot student does not care about the grade at the end of the course. For the person behind the spambot, the goal is to stay enrolled in the course for as long as possible so that they can continue to collect the financial aid money. So, the AI spambot student will complete some assignments (maybe not all) but probably won't put effort into making the submission coherent, relevant, or undetectable.

AI Hallucinations

One of the students who didn't turn in my formal qualities scavenger hunt above did turn in another assignment, a visual analysis paper. A visual analysis is a short paper that unpacks how an artist creates a certain effect in an image using formal qualities (so: that painter put paint on a canvas in a way that makes you feel sad. How did they do that? What choices did they make?). I give students a list of potential artworks to use, and they turn in a 2–3 page analysis. This 'student' turned in 2–3 pages of AI-generated nonsense. It was an analysis of "Serenity" by Nydia Marquee. You'll note that no such artwork and no such artist exist, but the student also submitted an image of Randolph Rogers' Nydia, the Blind Flower Girl of Pompeii, 1853–54. So where is "Marquee" from? As far as I can tell, it was scraped from this Met web page, which lists the following at the bottom:

"Marquee: Randolph Rogers (American, 1825–92). Nydia, the Blind Flower Girl of Pompeii, 1853–54; carved 1859. Marble, 54 x 25 1/4 x 37 in. (137.2 x 64.1 x 94 cm). The Metropolitan Museum of Art, New York, Gift of James Douglas, 1899 (99.7.2)."

I'm not sure where the "serenity" came from, though it is a word occasionally used in descriptions of Randolph Rogers' work.

Nydia, the Blind Flower Girl of Pompeii by Randolph Rogers, 1853–54, via the Met Museum.

The actual visual analysis that followed is total nonsense. While the 'student' submitted an image of the white marble Nydia sculpture, the essay expounds at length about the artwork's "palette of soft blues and greens, interspersed with gentle touches of white and pale yellow." It goes on to describe how this color choice allows "the viewer's eyes to glide effortlessly across the canvas." In describing the texture, it says "Marquee employs a variety of techniques to create both visual and physical texture, from delicate brushstrokes to more pronounced, textured layers of paint."

This is what is called an AI hallucination, and it's tied to how generative AI works. AI text bots like ChatGPT work by predicting a likely output based on a requested input, sort of like a hyper-advanced version of the predictive text on your phone. They're just very good at mimicking human writing and speech because they are trained on a large set of data written by humans. But the AI doesn't actually care whether that information is true, only whether it appears plausible based on the data on which it has been trained.
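If you're curious what "predicting the next word from training data" means mechanically, here's a toy sketch of my own (nothing like ChatGPT's actual implementation, which uses neural networks over vast corpora): a tiny predictor that counts which word follows which in some sample text and always suggests the most frequent follower. Notice that it never checks whether its suggestion is true, only what was common in its training data.

```python
from collections import Counter, defaultdict

# Toy "training data" (invented for this illustration).
training_text = (
    "the painting uses soft blues and greens "
    "the painting evokes serenity "
    "the sculpture is carved from white marble"
).split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev_word, next_word in zip(training_text, training_text[1:]):
    bigrams[prev_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

# "the" was followed by "painting" twice and "sculpture" once,
# so the model predicts "painting" -- plausible, not necessarily true.
print(predict_next("the"))
# A word it never saw a follower for gets no prediction at all.
print(predict_next("marble"))
```

Real models predict over fragments of words with far more context than one previous word, but the core move is the same: output whatever best matches the patterns in the training data.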

When it comes to artworks, something interesting happens. A very famous artwork, like the Mona Lisa, has been written about thousands and thousands of times, its formal qualities described at length. So, if you ask the AI to generate a visual analysis of the Mona Lisa, it has a lot to go on and will do a pretty good job. But when you ask it to do a less well-known artwork with little to no writing about it, the AI falters. It falters because it cannot see the artwork you're talking about, so it hallucinates and makes up what sounds reasonable based on the input (which is why the visual analysis makes so many references to things that seem serene, since "Serenity" is the made-up title).

But is this essay just a student using AI to get out of doing the assignment, or is it an AI spambot? A few things clued me in. Firstly, when the student submitted the assignment they actually only uploaded the image. The body of the essay was submitted as a comment to the submission, which is not something a student familiar with Canvas would do. Not conclusive, but weird enough to make me think twice, especially considering the images uploaded by the other spambot students.

Secondly, the fact that the artwork is made up. Remember that students who cheat with AI want to take the path of least resistance: they want to do minimum work, go undetected, and get a good grade. A student using AI to cheat would just take a title from one of the supplied artworks in the assignment sheet and feed it to the AI chatbot. Even if they didn't proofread before submission, which many cheating students don't, it doesn't make sense that they would invent an entire artwork.

The final nail in the coffin came when I checked the amount of time the 'student' had spent in the course site for the entire first week. Eight minutes.


So, what now?

None of what I have reported above is absolutely conclusive, but when it compounds into a consistent pattern, that's where I've felt the need to act. In the immediate sense, I've already done what I needed to do: I reported the non-attending students to the college, and I reported the suspected AI bots to our college's SPAM reporting form. It's a waiting game from here, and I won't get the final say in whether or not the students are AI. Still, I'm as certain as I can be based on the evidence of my own course, and the rest of the evidence will have to come from the college's internal procedures and, presumably, correspondence with the 'student.'

But more broadly speaking, I'm nervous about what this rise of AI spambots will do to the community college in which I teach, and to other community colleges facing the same issues. These colleges have tight budgets and limited enrollments for specific classes, so these spambots take resources away from real students who actually need them, both in terms of potential financial aid and in terms of classroom capacity. It also makes me nervous that community colleges will be unfairly stigmatized because of these AI students, and that real students will have trouble getting their community college degrees taken seriously.

This is especially concerning because the responsibility of catching these AI spambot students often falls on faculty, most of whom are overwhelmed and many of whom are likely not as savvy in catching AI. Recently, at a teaching and learning conference where I was presenting a poster about teaching visual literacy in the age of AI image disinformation, I heard people discussing how they still use AI-detecting tools to screen student work, despite the fact that those tools are known to be ineffective. I admit I spend more time than the average person thinking about AI in higher education (see my article It's Not About the Hands: Identifying AI Art, for example), but it's discouraging to know how far we have to go in spreading the word about the dangers of AI in higher education, beyond just academic dishonesty. This can have real effects if students are unjustly accused of cheating and lose their financial aid.

So, here's my plea to others in higher education: get educated about how AI works and try to get good at identifying AI on your own, not just with these so-called "detectors." Make note of things that seem weird, and bring them to the attention of your department chair or dean. Don't treat all students with suspicion (students who don't understand assignments might just be that) but look for patterns of behavior. Consider designing assignments that AI is bad at (like my visual analysis essay, or the scavenger hunt) to help you weed them out. Stay vigilant. If you think higher education matters like I do, it's up to us to protect it.

Hi, I'm Mary! I'm an art researcher who loves teaching about art. If you enjoyed this piece and want to hear more about writing, art history, education, and museums, consider giving me a follow. Thank you for your support!