Nobel laureate Geoffrey Hinton, often called the "Godfather of AI" for his enormous contributions to the artificial neural network technology that powers AI, has been on a bit of a tirade against Big Tech recently. From calling out their corporate greed to highlighting the dangers of AI, he has, like Pandora, been desperately trying to stuff the evils back into the box. But in a recent interview with Bloomberg, he turned this up to eleven by calling out AI's very economic viability. When asked whether the eye-watering investments in AI will ever pay off, Hinton replied, "I believe that it can't," and elaborated, "I believe that to make money you're going to have to replace human labour." Now, of course, Hinton, who also believes he has invented a computer god, is focused on the enormous negative impact of AI replacing human labour at scale. It basically turns this multi-trillion-dollar AI bet into a lose-lose situation. After all, if the investment 'pays off', the economy will be destroyed, making any kind of investment useless. But what Hinton failed to do was ask, "Can AI replace labour?" Hinton seems unwilling to deface the propaganda propping up his digital Frankenstein's monster, but fortunately, I have no such qualms. This is why AI can't replace you, and why that means it is doomed to fail.

If you listen to the hype, AI is definitely going to replace labour soon. For example, research from AI Resume Builder found that 30% of companies plan to replace HR roles with AI in 2026, and the boss of the UK's Buy It Direct has claimed AI will replace two-thirds of its employees. That sounds pretty scary, right? But AI Resume Builder has a huge vested interest in AI paying off, and this research is not robust at all. Likewise, the company's scumbag of a boss has been openly using AI as a threat against the UK's new "living wage", presumably because he knows that paying people enough money to live on will cut into his superyacht budget, and so has decided to swap his employees for AI if he can't keep his workers in poverty. What a lovely chap…

In the real world of critical thinking, the data paints a totally different picture.

Take the now-infamous MIT report that my readers are probably sick of me referencing, which found 95% of AI pilots didn't increase a company's profit or productivity at all. In fact, many companies saw a negative impact. Bear in mind, these pilots aren't designed to automate workers; they are designed to augment them. If AI can't even help us do our jobs better, how can we expect it to do those jobs by itself?

And what about the other report my readers are getting tired of? The METR report found that AI coding tools actually slow developers down significantly. It turns out that AI isn't all that accurate and gets things wrong constantly. These failures have been brilliantly PR-spun into being called "hallucinations" to anthropomorphise the cold plagiarism machine. But when accuracy matters, as it does in any task of real importance, and especially when the AI is being asked to write code, this is a huge problem. It means the AI constantly writes nonsensical bugs, and because the coder didn't write the code themselves, it takes them ages to find and correct the errors. As such, any developer with even a little experience will waste more time debugging AI code than they saved by having the AI write it in the first place. Now, coding is supposed to be one of the main industries where AI will completely replace labour. But again, it can't even augment workers, let alone automate them.

This problem isn't just isolated to coding, though. A recent Harvard Business Review survey found that 40% of workers have had to deal with "workslop" in the past month, defined as "AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task." They found that this "workslop" problem is severely impacting overall productivity across many industries. It is generated by "hallucinations" and by AI's inherent inability to integrate into a work environment: it exaggerates an individual's ignorance, disrupts communication between experts and decision makers, executes tasks inaccurately, and creates task bloat, as someone has to manage the AI. So, again, if AI reduces productivity through its inaccuracies and inherent structure when used to augment workers, how on Earth can it be used to automate jobs?

Okay, but these workslop-generating AIs and AI coding tools are just LLMs; the real labour threat is "agentic AI", which can perform tasks independently. Well, truth be told, no true agentic AI exists, as one can't really be built, for a plethora of reasons. Such systems need to be trained on more than just words and images; they need to be trained on human actions, which are far more varied and complex, and AI models significantly struggle with this task. This leads to fatal issues such as exponentially large and expensive models, a lack of usable training data, scope bloat, and constant edge-case problems. Instead, the "agentic AIs" being developed and flogged today are just LLMs repackaged in a crappy wrapper (see the sketch below), and unsurprisingly, they suck. Carnegie Mellon University undertook a study to quantify the performance of the best agentic AIs out there and found that they totally failed the tasks given to them 70% of the time! These tasks were broad and quite simple, including routine office functions such as analysing datasets, writing performance reviews, and basic problem-solving. In other words, AI is so ineffective that it can only automate basic 'low-skill' tasks with a woeful 30% success rate. So, no, agentic AI can't automate even 'simple' jobs.
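
To see why these wrappers are so brittle, here is a minimal sketch of what a typical "agent" scaffold boils down to: a plain LLM called in a loop, with some string parsing around it. Everything here is hypothetical; llm_complete() and run_tool() are stand-ins for a real chat API and a real tool dispatcher, not anyone's actual product.

```python
# A toy "agentic AI": prompt the LLM, parse its text reply, maybe run a tool,
# and feed the result back in. The model itself is the only intelligence here.

def llm_complete(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a canned reply here."""
    return "ACTION: finish RESULT: done"

def run_tool(name: str, request: str) -> str:
    """Placeholder tool dispatcher (search, file I/O, spreadsheets, etc.)."""
    return f"[output of {name} for: {request}]"

def agent(task: str, max_steps: int = 5) -> str:
    """The whole 'agent' is just this loop around the LLM."""
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = llm_complete(history)
        history += reply + "\n"
        if reply.startswith("ACTION: finish"):
            return reply  # the LLM claims it is done; nothing verifies that
        # Anything else is treated as a tool request; a hallucinated reply
        # here sends the whole chain off the rails.
        history += run_tool("generic_tool", reply) + "\n"
    return "gave up"  # the edge cases land here

print(agent("write a performance review"))
```

One bad parse or one "hallucinated" step anywhere in that loop compounds through every step that follows, which is a plausible reading of why multi-step office tasks fail so much more often than single prompts do.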

Then there are the psychological issues that come from trying to replace workers with AI — not for the traumatised workers, though that would be a severe problem, but for those left to manage the "AI workers". Let me explain.

Research from Microsoft and Carnegie Mellon University has found an astronomically strong negative correlation between the use of AI tools and critical thinking. Basically, the more you interact with these AI tools, the more cognitive offloading you do, and the less you engage in critical thinking. But critical thinking is like a muscle; it needs to be worked, otherwise it wastes away. So, the longer and the more frequently you use AI tools, the more critical thinking skills you lose.

Then there is the MIT Sloan study, which found that, while less-experienced workers can benefit from generative AI, experienced workers using these tools saw their expertise blunted. With the Microsoft and Carnegie Mellon University study as context, the finding that less-experienced workers benefit from AI is a worrying one: you need to develop critical thinking skills to become an experienced professional, which suggests that generative AI is preventing workers from gaining the experience needed to advance their careers. But the second finding, that experienced workers lose expertise, isn't surprising either. Like critical thinking, expertise requires constant use to stay fresh, relevant and useful. These AI tools force workers to offload cognitively, because that is precisely what AI augmentation is: reducing cognitive load. That stops these workers from exercising their expertise, which means they will lose it over time.

With these two studies in mind, imagine a manager overseeing a small army of AI bots that have automated their entire department. There is already a massive disconnect in the corporate world, with management ignorant of the work, demands and reality of the workers beneath them, meaning they can't push them, hold them to account or verify the quality of their work, which eventually leads to significant strain for everyone. But with an AI workforce, this issue will be exacerbated: management becomes even more ignorant by interacting exclusively with the AI, whilst also losing their own expertise and critical thinking skills. With this gradual drop in ability, how can these managers verify the AI workers' output? How do they control this digital mob if the simple act of interacting with it robs them of the cognitive skills and knowledge needed to understand its work?

It's simple: they can't. So, even if AI were effective enough to automate jobs, which it absolutely isn't, it wouldn't make sense to replace huge chunks of the labour force with it. We are psychologically ill-equipped to manage such a workforce; we will lose control of it as we lose the skills needed to keep it in check, and that can only lead to disaster.

Speaking of the massive disconnect currently plaguing managers and their workers, if AI is this obviously bad, why do so many people think it will replace workers? Surely, that many people can't be wrong. Well, they can, thanks to our good old friend, the Dunning-Kruger Effect.

Take the recent Upwork survey, which found that 96% of top executives expect AI tools to increase their company's overall productivity, yet 77% of the employees surveyed say AI tools have actually decreased their productivity and added to their workload, for reasons identical to those in Harvard's "workslop" study. I will write an article expanding on this topic soon, but the only reason this disconnect exists is that executives and upper management are so far removed from the actual work being done beneath them that they are firmly cemented in the Dunning-Kruger Effect: they believe they are experts in the work, when what they actually are is experts in management. As such, they can't spot the AI's "hallucinations" and errors, and so perceive the incorrect word salad the AI produces as comparable to, or better than, the actual expert advice of their workers. To them, it makes perfect sense that AIs are more than capable of replacing human workers.

This issue is a symptom both of the broken hierarchy in modern corporations, where managers are treated as more authoritative than expert workers rather than the two working collaboratively as equals, and of the desperate propaganda pushed by Big Tech. This is why the data, the studies and reality sit so far apart from the rhetoric and the investment.

That being said, there are studies which do show that AI can boost productivity, such as this one from Harvard. However, these studies all suffer from at least one fatal weakness: they are too constrained to reflect real-world AI use, too small in scope, rely on flawed measures like self-reported gains, or never check whether the AI actually did the work well enough to count towards productivity. What's more, the overall consensus is currently heavily skewed towards AI not boosting productivity, so these studies sit in the minority. Taken together, this means they carry far less weight.

Regardless, this is the state of AI right now. The trillions of dollars being poured into AI as we speak will make it much better in the near future, rendering all this insight obsolete. Right?

Well, no.

Firstly, there is the efficient compute frontier, which I have covered before. This principle describes how AI training experiences diminishing returns, requiring exponentially more compute power, and in turn exponentially more investment, to keep improving at merely a linear rate. While the investment in AI this year is enormous compared to last year's, it is nowhere near enough to sustain even that linear improvement. In fact, we are so deep into these diminishing returns that the latest models, which are orders of magnitude larger than their predecessors, are so marginally improved that most people can't tell the difference. As a result, even at this enormous expense, AI is likely about as good as it is going to get.
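
To make the shape of this problem concrete, here is a toy calculation assuming a power-law relationship between training compute and model loss, the general form reported in the scaling-law literature. The constant and exponent below are invented purely for illustration and are not fitted to any real model.

```python
# Toy illustration of diminishing returns: if loss follows a power law
# L(C) = a * C**(-alpha), each fixed (linear) improvement in loss demands
# a multiplicatively larger compute budget. Constants are made up.

A, ALPHA = 10.0, 0.05  # illustrative only, not fitted to anything real

def loss(compute: float) -> float:
    """Hypothetical model loss as a function of training compute."""
    return A * compute ** -ALPHA

def compute_needed(target_loss: float) -> float:
    """Invert the power law: compute required to reach a given loss."""
    return (A / target_loss) ** (1 / ALPHA)

base = 1e21                      # arbitrary starting compute budget
start = loss(base)
for step in (1, 2, 3):
    target = start - 0.1 * step  # ask for equal, linear steps of improvement
    ratio = compute_needed(target) / base
    print(f"improvement step {step}: needs {ratio:,.0f}x the original compute")
```

Under these made-up numbers, the first small improvement costs roughly 11x the original compute, the second roughly 160x, and the third thousands of times more. That is the frontier the article describes: linear gains, exponential bills.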

Then there is the Floridi Conjecture, which uses mathematical analysis of the systems behind AI to postulate that AI can have either a small scope and reliable results or a wide scope and unreliable results, regardless of the model's size. In other words, LLMs and all generative AI have too broad a scope to ever be reliable, no matter how much investment, data and computing power you throw at them.

In fact, OpenAI's own latest research paper backs this up. They found that increasing the computing power behind these models, or shoving more data into them, can't reduce AI "hallucinations" from their current level. Worse still, they found there is no viable way to reduce AI hallucinations at all, meaning these models are doomed to stay as unreliable as they currently are.

So, Geoffrey has nothing to worry about. AI technology has hit its inherent limits and isn't going to get even slightly better, let alone take a huge leap forward. Sure, villainous corporations will try to replace workers with AI, but it's not going to displace labour en masse.

However, Geoffrey was right that for AI to even break even on its current investments, it needs to rapidly replace labour. As I covered in a previous article, a recent report found that the AI industry will need to generate $2 trillion in annual revenue just to pay for the data centres it plans to build by 2030. That report used very optimistic revenue projections, assuming AI adoption and AI revenues keep increasing year-on-year despite both falling in 2025 compared to 2024, and still estimated that the AI industry will be $800 billion short of breaking even by 2030! As I covered in another article, the AI industry can't fill this income gap with AI browsers, AI apps, and AI porn, which have been its main attempts so far; those markets are simply too small. Geoffrey was correct when he said that, to fill this gargantuan hole in their books, AI companies have to replace labour at scale, and soon. But as we have covered today, they simply can't.
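
For anyone who wants that arithmetic spelled out, here is the gap using only the report's headline figures as quoted above; nothing else is assumed.

```python
# Back-of-envelope check on the report's headline numbers (as quoted above).

required = 2.0e12   # annual revenue needed by 2030 to pay for the data centres
shortfall = 0.8e12  # the report's projected shortfall, under optimistic growth

projected = required - shortfall  # what the optimistic projections deliver

print(f"projected 2030 revenue: ${projected / 1e12:.1f} trillion")
print(f"shortfall: ${shortfall / 1e12:.1f} trillion "
      f"({shortfall / required:.0%} of what's needed)")
```

In other words, even the rosy scenario only gets the industry to around $1.2 trillion a year, 40% short of the bill coming due.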

So, will AI replace human labour and plunge the entire world into an economic hole? Hell no! That line of thinking is more of a fantasy than the D&D I'm going to play this evening. But AI will damage our economy in a different way. Investors and banks have poured so much capital and debt into the AI bubble that it is now propping up the entire Western economy, tying every financial institution's health to this gargantuan bet eventually paying off (read more here). In other words, they have bet the entire Western economy on a demonstrable falsehood that is guaranteed to backfire. So, while my fantasy will induce tears of joy and laughter as our moronic party falls on its face yet again, the AI fantasy will bring the world to its knees, not by replacing labour or creating AI overlords, but through sheer ignorant recklessness.

Thanks for reading! Don't forget to follow me on YouTube, Bluesky, and Instagram, or support me over at Substack.

(Originally published on PlanetEarthAndBeyond.co)

Sources: Bloomberg, Futurism, Tech.co, BBC, Fortune, METR, The Register, HBR, Microsoft, MIT Sloan, OpenAI, MLQ, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Will Lockett