Like many others, I have helped friends who were addicted to damaging drugs. The media rarely reflects the true nature of addiction accurately. Addicts are, more often than not, high-functioning and fit into society perfectly. You could walk past them on the street and not know anything was wrong. But their affliction is eating them from the inside out, destroying their capability, cognition, and well-being. Still, they feel they need these substances to survive and will do Olympic-level mental gymnastics to justify their consumption. "I need it to work harder," "It keeps me relaxed," and "I only use it when I need to" — these aren't excuses, but desperate attempts to justify a detrimental band-aid rather than address the underlying problem itself. Often, that is because the underlying problem is not within that person's control.

After going through this process a few times, you see the same pattern of justification and denial of the real problem play out again and again. What I did not expect was to see that exact same pattern in an AI study. Dr Rebecca Hinds and Dr Bob Sutton have collected first-hand experience from more than 100 executives, technologists, and researchers to create a "blueprint that can help drive AI success", but it reads more like an addict at an intervention, desperately trying to acknowledge the damage while defending their use. Reading this report made me realise that AI needs to be treated as a hard drug. Let me explain.

The 'blueprint' was published in a new report from the Work AI Institute, a research organisation run by Glean AI, a generative AI platform for businesses. Straight away, there is a conflict of interest here. The Work AI Institute, at the very least, looks like yet another industry "think tank".

But despite this potential bias, research leader Dr Hinds still admitted some devastating findings about AI in the workplace, both in the study and in her summary of it. She claimed that when workers use AI, "There's often this illusion that you have more expertise, more skills than you actually do," leaving office workers feeling smarter and more productive even as their core skills are actively eroded. The report also found that if AI is used to replace human judgement, it can create hollow, alienating work.

The report concluded that AI can create either a cognitive dividend or a cognitive debt. Essentially, it found that when AI is used as a partner alongside an expert to supplement their expertise, it can free up time and sharpen judgement, creating a cognitive dividend. However, when it is used as a shortcut, such as to automate a task, increase worker scope, and reduce workforce size, it erodes workers' abilities and fosters false confidence, creating a seriously damaging cognitive debt.

While I do agree with the cognitive debt analysis, the cognitive dividend analysis gives me flashbacks to the chest-thumping, humming scene in "The Wolf of Wall Street".

Many independent studies, like the one from METR, have found that AI tools actually slow expert workers down significantly, as they spend more time correcting the AI than the AI saves them. What's more, other studies, such as those from JYX and Carnegie Mellon, found that experts, too, lose core skills and critical-thinking skills (in other words, judgement skills) when using AI. In fact, many studies, such as this one, show that workplace AI is detrimental to workers' well-being, which in turn reduces capability. So no, experts using AI as a "partner" in their field do not get a "dividend" of extra time and sharper judgement; in fact, they suffer skill atrophy just as bad as, if not worse than, workers using AI as a shortcut. Meanwhile, both groups' well-being will likely be worse as a result of using AI.

In a single breath, this study acknowledges the enormous cognitive damage AI causes whilst also trying to gaslight people into thinking that damage won't happen to them so long as they use AI the 'right' way. Given where this study has come from, it's like a drug dealer handing their best customer a script to get them through a much-needed intervention. But that script is akin to saying, "Oh no, it's okay, I only do meth to help me clean the house a little quicker." Firstly, you are still doing meth, and it will mess you up; secondly, the need to clean the house almost certainly isn't the reason you are turning to this stuff. It is that same pattern of justification and denial.

But just as someone afflicted with addiction will make Freudian slips or point to the actual cause of their problem by talking around it (which is why you need to listen to them empathetically rather than berate them), this study and its author did exactly that.

Dr Hinds explained that leaders often unintentionally exacerbate the illusion-of-expertise problem by ranking employees based on how often they use AI, treating it as a marker of AI adoption and productivity gain. This creates cognitive debt, not just through skill atrophy but by depriving workers of the learning opportunities they need to acquire new and essential skills. Instead, she argues that leaders should measure real outcomes to judge the success of their AI deployments. She also suggests that, to have the best chance of creating a cognitive dividend, they should ensure AI is used within the user's expertise and that tasks requiring human judgement or creativity aren't automated.

This is talking around the real issue here.

The reason leaders don't actually measure AI productivity gains is that AI almost never creates a net positive. McKinsey, which is massively biased towards the AI industry, found that 80% of AI pilots had no positive impact on the bottom line, while MIT found that 95% of AI pilots failed to generate returns, and many of them actually reduced productivity. In other words, if you actually measure AI's impact, the project will fail.

But there is also a growing body of evidence that when people use AI within their area of expertise, it actually slows them down, as we covered earlier. What's more, very few jobs don't require human judgement or creativity. Even 'simple' back-office jobs like bookkeeping require critical human judgement. And as the JYX study found, and as Dr Hinds said herself, if such workers use AI, they risk serious skill erosion that can have enormously damaging effects down the line.

Really, what this study is saying is that the scenarios in which workplace generative AI can have a net positive impact are vanishingly rare. Yet corporations seemingly know this and are pressing ahead anyway.

You probably already know why. It is the enshittification of everything we see every day. It is the company-crippling, devastating mass layoffs used to temporarily boost earnings. It is stock price speculation put before business health. It is the commodification of everything. It is unchecked corporate greed. It is wealth extraction put before economic production.

We are all far, far too aware of this. We feel it every single day, and it is why no one is surprised at how AI is being wielded against the working class.

As a side note, join a union.

But this is also the same reason many addicts turned to drugs in the first place. Self-medicating to cope with the degrading pressures of such a stratified economy, or taking mind-breaking stimulants to meet unrealistically sky-high expectations, are some of the most prevalent causes of addiction out there. And yes, using hard drugs may seem like it helps, but it comes at an enormous cost, eroding your health, cognitive ability, and well-being. The same structural pressures that drive these substance addictions are now driving corporate AI addiction. And this addiction, too, is damaging the minds, abilities, and well-being of the people under it.

Again, I feel I need to bring up the context of where this study has come from. It comes from a research institute owned and funded by a workplace generative AI platform. I am sure Dr Hinds is a brilliant scientist, and she deserves absolutely no flak for her work here. But this is important context. In our analogy, this study comes from the drug dealer, telling the drug user how to justify their damaging addiction to their worried family.

But I don't think this is just a helpful analogy. I do genuinely think we need to treat generative AI as a hard drug. Not because it is so powerful, but because of how damaging it can be to our mental health and cognitive abilities.

After all, generative AI is not, and can never be, built to give you accurate answers. All it can do is deliver an answer that is statistically likely to please you. It does not think, it does not know; it is an inept mirror of our own hubris, creating a tiny, reality-destroying echo chamber of one.

So, when a company announces it is going to roll out AI to its workers, we really should liken that to the Allied and Nazi militaries dosing their soldiers to the eyeballs with methamphetamine or amphetamine. Sure, it might help them get through a crunch point, and it might make for good PR at the time, but it's going to fuck up that organisation's human resources beyond compare. And, like those soldiers, many of these workers do not consent, and are instead threatened or coerced into it. The real addicts are the ones in power, using these 'tools' to extract more out of people to those people's detriment.

Thanks for reading! Don't forget to follow me on YouTube, Bluesky, and Instagram, or support me over at Substack.

(Originally published on PlanetEarthAndBeyond.co)

Sources: Glean, BI, Will Lockett, camh, Primrose Lodge, The Guardian, METR, CIPD, FWW, Defence in Depth