The Medium is the Message
The real and imagined dangers of AI

By Douglas Rushkoff. Published on Medium, 22 December 2022


I got interviewed earlier today by CNN about artificial intelligence. They asked, “What are the dangers of ChatGPT?” I answered:

ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model, which is a machine learning model developed by OpenAI for generating human-like text. It is designed to be able to generate coherent paragraphs of text based on a given prompt or context.

While ChatGPT and other language models like it have the potential to be useful tools for a variety of applications, there are also potential dangers to consider. One is the possibility that language models like ChatGPT could be used to generate fake news or other types of misinformation. This could be done by feeding the model biased or misleading information, or by using it to automatically generate large amounts of content that appears to be written by humans.

Another potential danger is the possibility that language models like ChatGPT could be used to automate certain types of online interactions, such as customer service or technical support. While this could potentially save time and resources, it could also lead to a loss of personal connections and a decline in the quality of these types of interactions.

Overall, it is important to be aware of the potential dangers and to use language models like ChatGPT responsibly, taking steps to ensure that they are not used for nefarious purposes.

They seemed satisfied. Then I told them that everything I had just said had been generated by ChatGPT just moments before. I had quietly typed their question into the empty field on OpenAI’s site, and then read the results out loud on the air.

The bigger question is: did that matter? The answer, of course, depends on what they were after, and which questions they really meant to ask.

We’re supposed to be worrying that AI is so good it will soon replace human workers in all aspects of life. But I’m not so sure we should accept the claims in AI company press releases at face value. I can still remember when people thought Adobe Photoshop would put designers out of business. Yes, it changed print publishing and required paste-up people to learn new skills. But designers? After a month or two of people believing Photoshop and Illustrator made their work look indistinguishable from that of professional designers, we all began to recognize the telltale visual signs of amateurs being led by the software. Real designers used Adobe’s products as tools, rather than the other way around. Likewise, neither calculators nor QuickBooks replaced accountants, and telephone answering systems still annoy everyone who has to use them.

Questions about human replacement seem intended to stress the inevitability of AI’s commercial future, and only distract us from the reality we should be looking at: AI is way more costly in labor, money, and materials than is visible to the public. Today’s OpenAI platform is energy intensive; it costs one hundred thousand dollars a day to run. And it’s still just in a beta version, being used for simple demos. The powerful computers and servers running AI require massive amounts of rare earth metals, and Silicon Valley firms are already competing with the EV industry for the rights to extract lithium deposits. That takes human labor and impacts the environment. AI doesn’t eliminate those costs; it simply shifts them.

As for kids cheating in school, well, I’ve been reading stolen, downloaded, and purchased papers since I started teaching twenty years ago. Instead of entering the AI arms race and paying for a platform to analyze the writing for signs of digital synthesis, I just have a five-minute discussion with each student about their paper. For the students who wrote their papers for real, it’s a chance to interrogate their underlying assumptions. Those who didn’t come up with their own ideas reveal themselves pretty quickly.

Meanwhile, the threats posed by AI to art, literature, and entertainment ultimately come down to what people are really after. Do you look at a Van Gogh painting purely for the retinal engagement? Are you interacting solely with the painting? Or are you connecting with the brilliant, sad, human soul whose brushstrokes created it? When you read Ulysses, are you really just taking in word combinations, or are you sharing a thought space with James Joyce? I can understand purchasing an AI-generated pattern for my bathroom wallpaper or an area rug, but even then there’s a certain aura it will lose if there is no human on the other side (other than the enslaved workers at the factory and the lithium mines).

AI’s ability to generate deepfake videos and news stories doesn’t frighten me either. It just puts us back where we were when we used to get our news from printed text. Anyone can put anything into the same font, print it out, and pass it around. The veracity of a news story has nothing to do with its production value and everything to do with the integrity of the institution producing it. That’s why news organizations are supposed to work within a system of “journalistic standards and practices.” They teach this stuff in journalism school, and places like the BBC (and sometimes CNN) employ it. Your willingness to consider the ideas in this piece depends on your trust in me, a human being with a reputation at stake and, hopefully, a track record of at least attempting to write truthfully.

Even CNN was ultimately unsatisfied with my AI-derived report. They wanted to know what I, Douglas Rushkoff, really think, because only a fellow human can believably and effectively provide comfort or provoke appropriate concern. It’s never about the information itself, but about the comportment and sheer presence of the other person.

Unlike in the opening paragraphs of this piece, here you are not just connecting with these words; you are connecting with me. Because what any writing-reading relationship really says is that we are in this together.

(I will write about the threat AI poses to labor in a future piece, as well as how it reveals less about our relationship to technology than about our faulty assumptions about jobs.)