I used to think the AI craze would naturally pass. Maybe cause a little economic trouble when the bubble inevitably burst. But I don't think this anymore. Instead, I now see this AI frenzy as a symptom of something far more insidious, detrimental, and disastrous for society. Sound a little hyperbolic? Well, let me explain.

A few weeks ago, The Information revealed that OpenAI was facing potential bankruptcy, as they were set to post a $5 billion loss by the end of the year. Not only that, but their AI development costs are set to soar from around $3 billion per year to well over $7 billion per year as they try to build the larger, more capable models critical to their growth and survival. In short, OpenAI was bleeding out and waiting to die. But soon after this revelation, OpenAI announced they were looking to raise $6.5 billion in investment, valuing the company at $150 billion, nearly double its valuation at the beginning of the year, and that they were attempting to secure $5 billion in credit from banks. Even if secured, this would only keep OpenAI afloat for another year, at most. To make matters worse, there is plenty of evidence that, even with these funds, OpenAI can't build these improved models and can't reach profitability (which we will discuss further in a second).

Surely, you'd think no one would give OpenAI this huge sum of money, right? Wrong! OpenAI just announced they have raised $6.6 billion of investment from Nvidia, Microsoft, Softbank, and Thrive Capital at a valuation of $157 billion and secured $4 billion in unsecured rolling credit from the likes of JP Morgan, Goldman Sachs, Morgan Stanley, Santander, Wells Fargo, SMBC, UBS, and HSBC.

So, why did some of the largest companies, investment firms, and investment banks pump such a colossal amount of cash into OpenAI? Is it the business opportunity of the century? Or is there something else at play?

Well, let's take a look at OpenAI's fundamentals and see if this is a good investment opportunity (hint: it's absolutely not).

Firstly, as we briefly covered, OpenAI isn't profitable. Not only were they set to post a $5 billion operational loss by the end of the year, but their AI development expenditure alone was around $3 billion. This means that even with hundreds of millions of users and revenue that significantly exceeds every prediction, they are still miles away from turning a profit. But even if they spent zero dollars developing better AI (which the valuation of their company is entirely based on), their current AIs are so damn expensive to run that they would still post a loss of several billion dollars per year.

All in all, OpenAI's business fundamentals are absolutely horrific. But if OpenAI has a route to making significantly more money and lowering costs down the line, then there is an argument that despite this shitty balance sheet, they are still worth investing billions of dollars into.

Unfortunately, this simply isn't the case.

To begin with, as I covered in a previous article, AI development is hitting seriously diminishing returns. In short, for AI to keep improving at the same rate it has been, the amounts of training data, infrastructure, and power all have to increase exponentially. This means OpenAI and other AI companies are butting up against some serious limitations, and by far the most pressing is expenditure: if AI is to keep developing at its current rate, the cost to develop, build, and maintain it will grow exponentially too!

Now, companies like OpenAI have deep pockets, but not that deep. As a result, AI development is beginning to stagnate. You can see this in their ChatGPT models. From version 1 to 3.5, each version was a monumental leap forward. But the jumps from 3.5 to 4, 4 to 4o, and 4o to o1 were minuscule. In fact, many of the changes didn't actually improve the performance of the AI, only its usability.

This is also why OpenAI is predicted to spend $7 billion per year on AI training; it has to dramatically expand its development just to keep taking tiny incremental steps forward.

And even if OpenAI were able to raise the mind-boggling amount of money required to develop their next-generation AIs, that still doesn't solve the problem. Firstly, it would take them even further from profitability, as costs would soar, not shrink. And secondly, they also risk "model collapse."

When you train an AI on AI-generated data, it can rapidly destabilise the model to the point at which it produces nonsensical gibberish. This is because AI-generated content contains tiny, almost indistinguishable trends that human-generated content doesn't have. As you train an AI on this content, it progressively gives more weight to these nonsensical trends, and eventually, the entire statistical model the AI is based on collapses.
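This feedback loop can be sketched with a toy simulation. This is purely illustrative and nothing like a real language model: a simple Gaussian stands in for the "model," and all the numbers are arbitrary. Each generation is "trained" only on the previous generation's output, and the estimated statistics drift further from the original human-written distribution each time; run long enough, the data's diversity collapses.

```python
import random
import statistics

def next_generation(corpus, n_samples):
    """Fit a toy 'model' (just a Gaussian) to the corpus, then
    'generate' a new corpus by sampling from that fitted model."""
    mu = statistics.fmean(corpus)
    sigma = statistics.stdev(corpus)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

random.seed(42)
# Generation 0: diverse "human-written" content
corpus = [random.gauss(0.0, 1.0) for _ in range(500)]

for gen in range(1, 11):
    # Each generation trains only on the previous generation's output
    corpus = next_generation(corpus, n_samples=100)
    print(f"generation {gen:2d}: diversity (std dev) = {statistics.stdev(corpus):.3f}")
```

Because each generation re-estimates its parameters from a finite sample of the last generation's output, estimation errors compound rather than cancel, which is the essence of the collapse described above.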

OpenAI has scraped literally billions of lines of text from the internet to train its ChatGPT models, and until recently the web was a fantastic source of human-created writing. However, as ChatGPT use has become more widespread, so has AI-generated content online. Presently, over 13% of Google search results are estimated to be AI-generated, and that figure is only set to grow. Yet the vast majority of this AI-generated content isn't labelled or obvious. As such, if OpenAI continues to scrape data from the web, it risks catastrophic model collapse.

To prevent this from occurring, OpenAI and other AI companies have turned to high-quality sources, like books or video transcripts. However, these bodies of work are backed by large corporate publishers and copyright holders with the means to fight back.

These AI companies operate in a grey area when it comes to sourcing their training data. They claim they can take this data and train their models on it without permission or payment under "fair use." This element of copyright law lets a person or entity use copyrighted material for commentary or if they sufficiently transform it. However, training an AI on someone's work essentially enables the AI to replicate that work, sometimes exactly, at near-zero cost, and as such arguably violates the spirit of copyright law. There is also an argument that using data in this way constitutes "unjust enrichment," a legal doctrine that stops a person or corporation from profiting off your labour without compensation. Consequently, some big hitters like WB and Sony are suing AI companies to either stop them using their copyrighted data or make them pay millions of dollars for their material. In fact, the entire AI industry is facing an ever-growing mountain of lawsuits.

Therefore, there is a decent chance that in a year or two, OpenAI will have to drop the vast majority of the data that powers their models as copyright law is finally wielded against them.

But, even if this didn't happen, OpenAI's products will never be reliable enough to deliver the unsupervised automation they claim. New data shows that as you train an AI on more data, it becomes better at specific tasks but worse at general tasks. In short, you can't solve AI's errors or hallucinations with more data. So, even if they did develop these next-gen AIs, they would still require huge amounts of human supervision to do even basic work, completely undermining the core promise that this technology will revolutionise every industry.

Take computer programming. This is meant to be one of the easiest industries for AI to disrupt. Yet it isn't really. Sure, these AIs can code ten times quicker than even the fastest programmer. However, the code they produce is so buggy and full of errors that it takes hundreds of times longer to debug than a human programmer's work. Overall, the human programmer is actually more efficient and cheaper.

Unless OpenAI can completely eradicate errors, which they and the entire AI industry currently have no way of doing, this will always be a fatal problem with their products and the AI industry as a whole.

So, if that is the case, what is the actual use of AI?

There are numerous cases of AI being advertised as completely automating a job, such as Cruise's robotaxis or Amazon's checkout-less stores. But these companies have to employ as many human supervisors as the jobs they replaced just to keep an eye on the AI, and even then the quality of service is worse and the overall cost greater than simply hiring humans in the first place. When you get down to nitty-gritty business planning, there are basically zero use cases of AI in which the consumer gets a good service and the business and AI provider both remain profitable.

The companies and banks that have just poured billions of dollars into OpenAI know all of this. They hire some of the best market and technology analysts on the planet specifically to understand these sorts of details. So, why have they invested so heavily in such an obviously sinking ship?

It's simple. All of my analysis of OpenAI relies on us living in a meritocracy. But we don't. Our capitalist society is leaning more and more toward a monopolistic, power-hungry, plutocratic volumocracy.

They know robotaxis, AI journalists, AI HR bots, AI programmers, and the like do a far worse job than humans and aren't profitable, but they don't care. These systems can produce and deliver far more than any human, drowning out the competition by flooding the market with subpar products. This effectively lets them seize market share by brute force rather than on merit, emboldening the big tech and media near-monopolies these banks and businesses love and are heavily invested in. In fact, some of these investors, like Microsoft, are near-monopolies themselves.

This is the reality of the AI push. It is caused by the dehumanising, power-hungry, fascistic tendency of our financial sector and near-monopolistic big corporations. They want more power; damn the consequences.

So, they have invested billions of dollars into a technology that removes the little meritocracy that our society and economy have thrived on over the past few centuries. It devalues and dehumanises work by flooding the market with cheap, subpar output. This means the near-monopolies these investors hold gigantic stakes in no longer have to work their arses off offering better services to stay on top. Instead, they can spend their hoards of cash simply drowning out the competition and resting on their laurels. To these investors, it doesn't matter that OpenAI isn't profitable; it will give them even more power and market control.

In short, AI used in this way effectively mitigates the free-market forces that capitalists claim to love so much and instead creates an unjust consolidation of power.

Meanwhile, it's us, the citizens of the world, who suffer. Our jobs are taken away or devalued, investment that is needed for more crucial sectors is poured into AI, the products and services we rely on degrade in quality to the point of being nearly useless, and our quality of life goes down.

This isn't hyperbole; it's already happening. Remember what I said about programmers? Well, thanks to AI, the number of job openings has fallen off a cliff over the past year. Meanwhile, there is a growing chorus of complaints as the quality of coding has dramatically declined during the same period.

This is why I am terrified of AI. It isn't a revolutionary technology. It is a symptom of the rot at the core of our modern society. Companies like OpenAI can only exist when the giant corporations that already rule our lives are willing to dehumanise and fatally damage society in order to gain even a morsel more power and control. They want an all-powerful throne, no matter how broken the world they rule over. Sure, this house of cards will eventually fall when these models collapse or their training data gets pulled. And sure, they will do immeasurable damage to many industries by pushing out talent and erasing skills and knowledge before they get stopped. But they don't care, as long as they wield that power.

Thanks for reading. Content like this doesn't happen without your support. If you want to support it, or read articles early, follow me and my project Planet Earth & Beyond, or follow me on Bluesky or X.

(Originally published on PlanetEarthAndBeyond.co)

Sources: Originality, BBC, The Guardian, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Will Lockett, Planet Earth & Beyond, CNBC, Deeplearning.ai, Tech Startups, The Economic Times, The Wrap, AI Snake Oil, The Independent