Every month, it seems, we are greeted by news of the launch of yet another artificial intelligence tool that simulates human creativity. These tools are usually lumped together under the heading of Generative AI. The latest is ChatGPT, a text-producing engine that generates remarkably human-sounding English. There are many reasons to be very concerned about Generative AI, but some of the major questions surrounding it need to be asked more publicly. When we look at the justifications developers give for pursuing these projects, we find them embracing a profound nihilism on the one hand, and offering no coherent reason for the technology's existence on the other.

As they have with other Generative AI programs like DALL-E and Stable Diffusion, creators have raised alarms about the technology. Some of their worries have to do with it replacing their jobs, jobs which, it should be noted, are generally among the kinds of work people enjoy and, presuming they are properly compensated, from which they derive significant satisfaction. Some have to do with consumers of the simulated product confusing it with work by human beings. Some have to do with students and others substituting the AI product where human work is required, especially in higher education.

Other objections have been raised by researchers looking into racial and other forms of bias. Many prior text-based Generative AI projects have almost immediately started spewing racist and misogynistic messages, a bias that emerges from the vast amounts of bigoted web data on which their models have been trained. Researchers quickly uncovered that ChatGPT uses some fairly blunt filters to prevent biased output, filters that can be bypassed with "simple tricks," leaving the underlying bias only "superficially masked," as computer scientist Steven T. Piantadosi tweeted; he expanded on these concerns in an interview with technology journalist Sam Biddle. A small Twitter account soon prompted the chatbot into producing a "racism rap" full of biased statements.

While some bias concerns might be addressed by improving the software, creativity concerns are only likely to be exacerbated as software like ChatGPT gets better. Creators and educators might say that ChatGPT should not exist at all, even if it could be freed from bias altogether.

Their concerns about confusion and the replacement of creative labor are very real, but they can obscure what is arguably an even more fundamental problem. Generative AI is built on very dark and destructive ideas about what human beings, creativity, and meaning are. These are ideas that very few of us who practice or study the creative arts, and hopefully very few others as well, would agree with. These destructive impulses become clear as soon as we scratch the surface of the projects.

It is hard, on the other hand, to see what Generative AI is supposed to be good for. Most of the apologies for the technology refer to it as a kind of toy, something fun to play around with. That's true. ChatGPT is fun. So are many of the other projects. But fun is not a compelling reason to produce something intended to harm us, something with a proven record of being able to harm us. Lots of destructive things are also fun. Reassurances about the limits of current Generative AI projects, or about the possibility of humans using these tools to create new forms of art, while accurate enough at least to some extent, miss the bigger point. The point of these projects is to replace what we do. They are very likely to succeed, in far too many ways. Indeed they already have succeeded, in ways too ubiquitous and subtle for us to see.

Venture capitalist Sam Altman is the CEO of OpenAI, the nonprofit that steers the work of the company that built ChatGPT, DALL-E, and several other Generative AI simulators, as well as the machine learning software that underpins them. Soon after the release of ChatGPT, Altman tweeted that he is "a stochastic parrot, and so are u." This is an incredible rebuke to everyone who thinks that human creativity and even human life are valuable and important things.

"Stochastic parrot" is a term developed by linguist Emily Bender, former Google AI Ethics lead Timnit Gebru, Angelina McMillan-Major and "Shmargaret Shmitchell" in a 2021 paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" In the paper they use the term to characterize AI language models (LM) that produce text and distinguish them from human uses of language: they write that "Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader's state of mind." A stochastic parrot generates apparently meaningful text through probabilistic means, but like an actual parrot, it does not understand itself to mean anything by that text (put aside the fact that at least some real parrots do seem to understand something about what they say). "If one side of the communication does not have meaning," they write, "then the comprehension of the implicit meaning is an illusion arising from our singular human understanding of language (independent of the model)."

So when Altman writes that we are all stochastic parrots, he is saying very clearly that in his opinion none of us can create meaning; that just as the apparent meaningfulness of ChatGPT text is an illusion, so is the meaningfulness of our own communications. Meaning, according to Altman, just doesn't exist at all.

The name for the view that human life is meaningless is nihilism. It is one of the darkest and most despairing philosophies human beings have ever developed. Nihilism is reasonably understood by many political theorists to be the real heart of what we call fascism. Indeed, to some theorists, notably Erich Fromm, nihilism precedes and leads directly to fascist politics.

When digital artist Jason Allen recently won the digital art competition at the Colorado State Fair with a Generative AI-produced image called "Théâtre D'opéra Spatial," he said in an interview: "Art is dead, dude. It's over. A.I. won. Humans lost." Whether or not he was joking, the comment is dead serious as an embodiment of the conceptual underpinnings of Generative AI: the point of these projects is to produce nihilism and despair about what humans do and can do.

Sam Altman would certainly deny that he is a nihilist. No doubt, at a personal level, he isn't. Yet his practice says otherwise. His work is based on contempt for anyone who suggests that there is any difference between living beings and machines. It is very unlikely that he believes this himself: surely Altman thinks his own life has meaning, despite his dishonestly calling himself a "stochastic parrot." The problem is that building and distributing software that puts nihilism into practice can only have destructive effects.

Altman isn't alone. In October, Emad Mostaque, founder and chief executive of Stability AI, the company that builds Stable Diffusion, gave an interview to the New York Times. Buried in the interview is a remarkable statement: "So much of the world is creatively constipated, and we're going to make it so that they can poop rainbows," Mostaque said. Anyone who can look at the remarkable outpouring of human creativity in our age, from books to television shows to movies to music, and declare it "constipated" has an incredibly blinkered and negative view of human creativity. It is frightening to think of such a person at the helm of any creative project.

Even a cursory dip into the subreddits, social media posts, whitepapers, and other documentation and community discussions surrounding Generative AI makes it hard to miss the spite and negativity directed at human creativity; they seem almost to be preconditions for membership in the development community. Ironically, of course, the developers celebrate their own creativity in building these software tools, even as those tools implicitly rebuke the work and lives of everyone who creates, and of everyone who thinks meaning is confined to living things. As Abeba Birhane and Deb Raji documented recently, the responses of members of the Generative AI communities to criticism are, to put it mildly, less than generous. If the people building Generative AI are truly thinking about the good of humankind, as OpenAI's own mission statements say, their communities have a very odd way of showing it.

One of the first iterations of what we now call Generative AI was ELIZA, the program written by computer scientist Joseph Weizenbaum in the 1960s to emulate psychotherapy interactions. Despite being incredibly crude in operation, ELIZA seemed to many users actually to care about them and to provide therapeutic help. Seeing this turned Weizenbaum into one of our leading theorists of digital technology. He rightly saw that people would willingly believe that machines care about us even while knowing they don't and can't. This realization radicalized him: he stopped working on such projects altogether and became a great critic of digital technologies, particularly of their political commitments.
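To see just how crude "crude" is here, consider a minimal sketch in the spirit of ELIZA's DOCTOR script. This is not Weizenbaum's actual code (ELIZA was written in MAD-SLIP); the rules and the `respond` function below are invented for illustration. A handful of keyword patterns and pronoun swaps is enough to produce replies that feel attentive:

```python
# An illustrative ELIZA-style responder: keyword patterns plus pronoun
# reflection, with a canned fallback. No understanding anywhere.
import re

# Swap pronouns so the reply mirrors the speaker ("me" -> "you", etc.).
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# (pattern, response template) pairs, loosely in the DOCTOR style.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when nothing matches

print(respond("I feel nobody listens to me"))
# -> "Why do you feel nobody listens to you?"
```

That such a small bag of tricks could convince users they were being cared for is exactly what alarmed Weizenbaum.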

The despair that Generative AI projects produce in nearly all people involved in creative work is the point. It's the same despair Weizenbaum felt. It's the same despair that nihilists feel and want to impose on the rest of us. That despair is, to many of the thinkers who examined the rise of fascism during World War II, part of what the historian Fritz Stern called "the politics of cultural despair." Despair is closely tied to loneliness, which, as both artist Annie Dorsen and technology writer L. M. Sacasas have noted, seems to result from dealing with Generative AI projects, and which is the emotion the great philosopher Hannah Arendt tied to the rise of totalitarianism.

One mark of the political radicalism of Generative AI is OpenAI's declaration that its "mission is to ensure that artificial general intelligence benefits all of humanity." Artificial General Intelligence (AGI) refers roughly to the idea of producing consciousness in machines; it's what we most often see portrayed as AI in science fiction. Yet almost all scientists, philosophers, and even serious technologists who work in the field or study it consider AGI to be science fiction, resting on a profound confusion between the idea of "intelligence" and the idea of "consciousness." The idea that "intelligence" (however it is measured) is the only or the fundamental feature of consciousness is one with a profoundly conservative pedigree, associated with eugenics and other forms of racism. The people who insist that the two are identical, despite the huge amount of evidence that they aren't, seem committed to a nihilist philosophy: humans are just machines. We aren't.

In a viral tweet in late November, Anil Dash, one of the few tech executives who take seriously the ideas and interests of non-technologists, wrote that "It's impossible to overstate how much the big tech CEOs and VCs are being radicalized by living within their own bubble." The nihilism that informs Generative AI is one of the most serious embodiments of this radicalism.

AI projects, like much of digital technology, need to be regulated far more heavily than they currently are. The ideology of "permissionless innovation" so cherished by tech leaders is antidemocratic to its core. But in this case something even plainer is at stake. ChatGPT and other Generative AI programs should not exist. They are not the kinds of things that someone who cares about human life would build. Nobody who understood the stakes of asserting that our lives are meaningless would participate in such an endeavor. That OpenAI and other projects like it insist on pursuing this line of research over the strong objections of so many of us who do care about human meaning only shows how deeply they have embraced nihilism.