Last month, I wasn't sure whether to be livid or despondent when I saw an academic on a local message board offering 60–70€ in exchange for someone to copyedit her research manuscript: a book chapter. She admitted that 60–70€ wasn't much in exchange for such work, but this was her limit. To sweeten the deal, she also offered to bake a cake.

The job in question was something that would normally take most professional academic copyeditors anywhere from 4–12 hours of meticulous attention and skilled labor — more if the document exceeded a standard length.

Well, who could resist a cake and a single hour's pay?!

Someone popped onto the thread then to suggest:

Use ChatGPT!

At that, I had enough internet for the day.

It's been weeks since I saw this post, and my mind still churns with worry: here was my job, which someone seemed to think was worth little more than a baked good. And which someone (likely lots of people) thought ChatGPT could do.

We'll set aside the question of whether cake is fair; I know there are wildly differing levels of love for cake, and I won't judge yours. But the question of "Can ChatGPT do this?"

No — and unfortunately, the people who believe ChatGPT can do such highly nuanced work don't know what they don't know; they're oblivious to the ways in which AI is bound to fail them. Which is pretty much the problem with all the work that People Who Don't Know are outsourcing to AI these days.

I'm anxious because this threatens my livelihood, yes. But it also vexes me because I'm a person who understands that the research world impacts everyone. Researchers drive all kinds of improvements in quality of life, from social and economic policy, to solutions for highly specialized problems in fields like sustainability, tech, and medicine. Whether or not most laypeople recognize it, we owe nearly all the highest comforts of modern life to the invisible heroes of the research world and the brilliance they share with society's movers and shakers. We can't just leave them without the benefit of perceptive, qualified, HUMAN help in their efforts to communicate their findings (clearly and accurately!) to the world.

I value how the Digital Age has made my entire lifestyle (as a location-independent, immigrant freelancer) possible. On any given day, I can receive an email from a client across town or across the world, download my next project, and get to work. And I can work from anywhere, at any hour, wearing anything I want. I can also fact-check or double-check whatever I need to, instantly. Best of all, I can also pause everything at any second and take a spontaneous phone call from my family — thousands of miles away.

I thank technology for ALL of this. Such flexibility has done wonders for my health, both mental and physical. In fact, it would feel self-destructive to give this freedom up by taking a more conventional job. I would resent having to.

But I'm feeling the AI crunch.

Work's been dropping off over the past year or so, as fewer new-client inquiries trickle in. It's impossible to unsee how this dip coincides with an era when people are trying to outsource all manner of language needs to AI. In the past, new clients who emailed me with the hopeful claim that they needed "just some simple proofreading" were usually both shocked and thrilled at how much a professional's services could elevate their work. (For what it's worth, in over a decade of this, I can count on one hand the projects that truly needed only proofreading.) This is because writers who can't trust themselves enough to catch even minor errors in their work tend to struggle with far deeper writing challenges than they even notice. When they see a professional's services first-hand, however, they recognize that a human copyeditor is a totally worthwhile investment.

Nowadays though, would-be clients often turn, instead, to sources like the world's most famous chatbot for help. They never get to see what a human editor can do — and especially what AI can't. A human editor notices if an argument is flimsy even when your grammar is flawless. A human editor dialogues with you to clarify that unintelligibly clunky sentence. A human editor is also a sensitivity reader, catching details that might embarrass you professionally — like politically incorrect language, or claims that ring controversial and thus need more citations (lest your audience raise their pitchforks).

Basically, a language professional's job entails far more than just "cleaning it up" and making it "sound good." It requires social and emotional intelligence. Writers who are blind to this (and to their own pitfalls) think AI suffices just fine. The result? Quality suffers. And since these writers are none the wiser, the prices they're willing to pay for someone to edit their work… plummet.

If you've ever stumbled upon a piece of soulless, rambling, vague, inaccurate trash on a content mill, then you know just how woefully inadequate AI is at catching the subtler (yet arguably more serious) problems in a text. It's annoying enough to see this garbage on content mills. But if AI takes over the world of research publication — where clarity and facts sometimes matter to a life-or-death level?

God save us, because AI won't.

I burnt out last year, to be frank. I've been at this gig since 2011. Back then, on the brink of my exit from academia, it didn't make sense for my professors to give me research duties. They — advanced-career professionals with stellar reputations — recognized my skill at writing, though. So they had me edit for them. They loved the results. My job was born.

I've spruced up CVs for people who went on to be offered corporate-level or Ivy League jobs. I've edited books (and large chunks of books) that were picked up by leading academic publishers, and even a medical research piece that won an award. I've engaged with my clients in dialogues that inspired them so greatly that they decided to incorporate my reflections into their scholarship. And I've saved people from saying inadvertently sexual, accidentally inaccurate, or politically problematic things more times than I can count. I am this damn good not just because I have language savvy, but also (and especially) because I had a very multidisciplinary education, with research training at one of the USA's top institutions. Indeed, one of the world's top universities, with one of the top programs in its discipline. Spending three years being mentored by all-stars in the ivory tower puts me in a whole different league from a standard proofreader.

Or — this should be obvious — a machine.

But unlike a machine… I doubt I can keep doing this. Because while my rates are over 50% higher than when I started, they're still pretty much in line with what was considered average for this niche almost a decade ago — and it's infeasible to charge much more. At the same time, taking more projects to boost my income is impractical because the work is cognitively intense.

Feeling desperate, I reached out last year to peers in the field to find out how they manage. I opened a conversation on a local message board. I emailed a copyeditor whose work I admired. I even talked shop with a copyeditor acquaintance I ran into at a friend's party. I was ready and willing to face the music: maybe I just needed to 'be more disciplined' and find a way to add more hours to my schedule. So I asked lots of people lots of questions, all boiling down to:

How do you make this job sustainable?

And you know what most of their answers amounted to?

I don't. I had to quit.

They cited reasons I already sensed, such as how cognitively grueling it is to take an abstruse, scientific-sounding text from an unpublishable state to a publishable one. How the going pay doesn't really match the burnout-inducing level of focus required. How it's hard to find clients who can afford the (already-underpriced) services to begin with.

In fact, the only academic copyeditors I know who haven't quit to find new jobs are making ends meet not by copyediting, but through other means. For instance, one editor admitted they hadn't realized they were underpricing the work, and that most of their income actually came from a higher-paid niche that subsidized their editing. Moreover, both the academic editors I know who've been able to continue this job long-term have a spouse who brings in further earnings.

In all the 'sustainable' cases… the job didn't sustain them after all.

That was the truth I kept encountering: that no one made ends meet on this highly specialized work alone, largely because, despite all the value it brings, it's impossible to charge a rate truly commensurate with the time, energy, and brain power involved.

And with ever-more people now believing they can just fire up ChatGPT and call it a day, I don't see the situation getting better. Instead, I see an invaluable industry — an industry that supports the research world, for God's sake, with all the humanistic, scientific, and general quality-of-life improvements that researchers bring to society — being turned over to AI.

Good luck to us all.

I don't blame the woman who wanted to purchase editing services in cake. I do laud her for recognizing that this job is best performed by a human. I can even relate (painfully) to knowing you need a highly-skilled professional's help, yet recognizing that you're too economically strapped to afford the prices that those services merit. I've been there; I've had to offer moneyless trades of my services (sometimes editing, sometimes reiki or tarot reading) too.

And, hey, for all I know, her cakes might be epic.

But even if they aren't, everyone deserves access to the supports that can secure and advance their livelihood. Unfortunately, the number of us who can access these things shrinks every day. Stability (forget about success) is fast becoming a luxury in our increasingly unaffordable world.

I don't think ChatGPT and the like will solve these kinds of problems. I believe AI will create some while neglecting others — such as the jobs it will kill, and the user errors and quality-control issues that will fly under a radar devoid of authentic, human social and emotional intelligence. We need a social world. We need an emotional world. We need a world where people can afford to cover the basics and then some — where people can afford to invest in the kinds of things that can support and improve their lives.

But that's not the world we have or the direction we seem to be going, economically. And while I love my brilliant, appreciative clients (and they compensate me fairly and without complaint), I finally recognize what most other peers in my field seem to have already seen:

This work is not sustainable. And since I don't have a higher-paying gig (or a spouse) to subsidize it… I can't keep trudging down this path. New clients are scarcer. Rates, even rising ones, are vastly outpaced by the climbing cost of living. What scares me is that, for all the years I spent refining and leaning into this set of specialized skills… I'm not qualified for anything else. I have no experience in lucrative, conventional professions: IT, finance, marketing, medicine, the trades. Even clerical jobs are beyond my current qualifications, given all the years I never needed to use most office software. And with all the debt I incurred to obtain an advanced education in the first place, I can't afford to go back to school, at age 39, for something new.

What kind of educational investment would truly be economically sustainable amid this revolution, anyway?

So I will keep doing this work as long as I can. Not just because I have nothing else (…yet), but because I genuinely care about supporting the life-enhancing insights that researchers generate. At the same time, I recognize that I need to seek alternatives. I'm just another highly-skilled human, being replaced by an arguably (far) less-skilled machine — thanks to the people who don't realize the difference and an economy that punishes us for investing in each other.

It gives me no joy to say this, but maybe someday you will be too.