In 2016, the United Nations declared access to the internet to be a basic human right, right up there with shelter, food, and water. And while many of us may have access to an internet, none of us has access to the internet. That’s because it isn’t one uniform entity.
Thanks to surveillance and customization technologies, each of us gets our own internet. Your Google search results are different from mine. Your feeds show different posts and ads than mine — even if we subscribe to the same sources. Your news apps deliver different news than mine, prioritized differently and framed from a different political perspective.
This, more than anything else, is our country’s greatest barrier to anything resembling civic discourse. The issue isn’t the content (though it can certainly be problematic). It’s the platforms. How can we forge any semblance of consensus with people who are not even looking at the same realities? To fix the net’s influence on political discourse, we need to end the automated customization of what we see.
Customization is the net’s original sin, a practice originally named “one-to-one marketing” by customer management consultants Don Peppers and Martha Rogers in 1993 — before most people had even heard of the web. They believed that a good database, paired with email, could turn direct marketing into a behavioral science. Instead of managing products, marketers could “manage customers,” bringing them ever more accurate depictions of what they really want.
Back in the 1990s, this meant better, more targeted email spam. The web was still neutral territory, where all visitors to a site saw the same ads and content. But in 1994, with the invention of the cookie, advertisers gained the ability to track us individually and serve each of us the ads we were most likely to click on. From that moment on, customization became the ethos of the net. Every company or organization could establish a different “relationship” with each one of us.
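To see how that tracking works in practice, here is a minimal sketch, in Python and using only the standard library, of the basic cookie mechanism: the server hands a browser a unique ID on its first visit, reads that ID back on every later request, and keys a profile of page views to it. The profiles dictionary and the ad-picking rule below are hypothetical stand-ins, not any real ad network’s code.

```python
import uuid
from http.cookies import SimpleCookie

# Hypothetical in-memory store: visitor ID -> topics of the pages they've viewed.
profiles: dict[str, list[str]] = {}

def handle_page_view(cookie_header: str, page_topic: str) -> tuple[str, str]:
    """Return a (Set-Cookie header, ad choice) pair for one page view."""
    cookie = SimpleCookie(cookie_header)
    visitor_id = cookie["uid"].value if "uid" in cookie else str(uuid.uuid4())

    # Record what this particular visitor looked at.
    profiles.setdefault(visitor_id, []).append(page_topic)

    # "Targeting": pitch whatever this visitor views most often.
    history = profiles[visitor_id]
    favorite = max(set(history), key=history.count)

    set_cookie = f"uid={visitor_id}; Max-Age=31536000"  # the ID persists for a year
    return set_cookie, f"ad-for-{favorite}"

# First visit: no cookie yet, so an ID is assigned; later visits send it back.
header, ad = handle_page_view("", "dogs")
header, ad = handle_page_view(header.split(";")[0], "dogs")
print(ad)  # ad-for-dogs
```

Everything else in the targeting economy is elaboration on this loop: once each visitor carries a persistent ID, every page view can be matched to a history and answered with a tailored pitch.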
Now, that’s one thing when it’s applied to marketing. But it’s another thing entirely when it’s applied to the news and information we use to understand the world. We each end up in a feedback loop between ourselves and the algorithms that have been assigned to us. Slowly but surely, we progress toward a more extreme version of ourselves — exacerbated by the fact that the stories and images we receive are irreconcilable with everyone else’s. Worse, they change based on how we react to them. Reality, as depicted by the algorithms, is itself a moving target. This makes us even more unsettled and more vulnerable.
We are like rats in a psychology experiment; each of our responses to a stimulus is measured and recorded. Then the results are fed into the next stimulus in an increasingly refined Skinner box of operant conditioning. As we “train” the algorithms that serve us our content, we are being trained ourselves. We receive confirmation of our every assumption, taste, and belief.
Yes, this means the content we see is better customized to our individual impulses every day. You get cats, I get dogs, and my daughter gets alpacas. This is valuable for marketers who want to make sure we each get ads that depict their products in accordance with each of our sensibilities. And they can assemble their pitches in real time for each of us as we navigate their virtual worlds.
Where we get into trouble is when the rest of what we see online is subjected to these same customization routines. They end up reinforcing every one of our projections, from vaccination safety to the extinction of the white race. According to the logic of one-to-one marketing, the successful “customer relationship” means reflecting back to each person the reality that will get them to respond to the call to action.
Direct marketing techniques like these might even be considered appropriate for entertainment or advertising — but not for a public service like news. It’s the equivalent of dispensing medical advice based on a person’s individual superstitions instead of clinical data. As Neil Postman warned us in the 1980s, if we allow news to be ruled by the economics of entertainment, we risk “amusing ourselves to death.” It’s just such a regression that allows Trump to equate his television ratings with his fitness for office, or California Representative Devin Nunes to criticize the legitimacy of impeachment hearings on the grounds that they were “boring” and had bad ratings.
When we get used to the idea that our news and information should always be entertaining, we start to make our civic and political choices based on sensationalism alone. Who would be more entertaining: Trump or Biden? But it gets worse. At least ratings are a reflection of consumer choice — a form of polling. They hearken back to an era when our limited perspectives on the world were a result of whether we chose to consume the Times or the Post, FOX or NBC, Rush Limbaugh or NPR.
On platforms from Google to Facebook to Apple News, algorithms select our stories for us based on our previous behavior. If we stop clicking on stories about war, then wars will eventually be excluded from our picture of the world. If a MAGA supporter wants to believe Trump’s claim that America’s cities are “infested” or that Central American immigrants are “invaders” and “murderers,” the algorithms will figure that out and deliver this dark picture of the world to them. In the digital media environment, such realities coalesce automatically, without our conscious control and beyond our ability to intervene.
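The curation these platforms perform is, at bottom, a feedback loop simple enough to sketch in a few lines. The example below is a hypothetical illustration in Python, not any platform’s actual ranking code: a per-user weight for each topic drifts upward when a story is clicked and downward when it is skipped, and the next batch of stories is ordered by those weights, so a subject we stop clicking on, like war, gradually vanishes from the feed.

```python
from collections import defaultdict

# Hypothetical per-user topic weights, nudged by every click or skip.
weights: defaultdict[str, float] = defaultdict(lambda: 1.0)

def record_engagement(topic: str, clicked: bool, rate: float = 0.1) -> None:
    """Move the topic's weight toward 1 on a click, toward 0 on a skip."""
    target = 1.0 if clicked else 0.0
    weights[topic] += rate * (target - weights[topic])

def rank_feed(stories: list[tuple[str, str]]) -> list[str]:
    """Order (headline, topic) pairs by how much this user engages with each topic."""
    ordered = sorted(stories, key=lambda story: weights[story[1]], reverse=True)
    return [headline for headline, _topic in ordered]

# Skip war coverage a few dozen times and it sinks below everything else.
for _ in range(20):
    record_engagement("war", clicked=False)
    record_engagement("celebrity", clicked=True)

print(rank_feed([("Ceasefire talks stall", "war"),
                 ("Red carpet roundup", "celebrity")]))
# ['Red carpet roundup', 'Ceasefire talks stall']
```

Run long enough, the loop never has to be told what to exclude; it simply learns which version of the world keeps us scrolling.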
Trump supporters aren’t the only ones being radicalized by the digital media environment. Clustered together — again, by the most crudely defined versions of common interests — progressives end up incapable of adopting a nuanced approach to progress. They must respond immediately and correctly to each new outrage, from the MAGA-hat kid to MIT’s coddling of Jeffrey Epstein, or else risk the wrath of social media’s enforcement wing — itself less a squad of people than an emergent phenomenon. Worse, they must live in fear that their past transgressions against future restrictions may one day be discovered. And that’s tough in a media environment built on surveillance and memory, where using a word like “niggardly” in 1986 becomes evidence of racial insensitivity in 2019.
All of us are trapped in customized filter bubbles, without the means to connect over anything real — particularly with people in bubbles that have no intersection with our own. Instead, we must conjure these “figures” that represent some version of our shared terror over changing ground. Whether we pick a real threat like climate change or a manufactured one like George Soros, the abstracted, panicked, and hallucinatory manner in which we are relating to them is the same.
This all makes the one-to-one communications landscape a propagandist’s dream come true. As French media philosopher Jacques Ellul explained in his seminal book, Propaganda: The Formation of Men’s Attitudes (1973), “When propaganda is addressed to the crowd, it must touch each individual in that crowd.” Customized digital media finally gives marketers and propagandists alike the ability to reach those individuals, not just through discrete messages, but through the creation and confirmation of the mental realities in which they live. And clearly, such activity favors those who rely on fear and outrage for their influence.
Many brilliant net theorists and policymakers are already offering ways of mitigating the internet’s amplification of disinformation, extremism, and hate. Twitter’s refusal to run political ads, though difficult to operationalize precisely, is a nice start. So, too, are Facebook’s efforts to manually and algorithmically screen posts for the most egregious content, such as livestreamed massacres and beheadings. Fewer beheadings on social media is a good thing. But it doesn’t solve the underlying problem.
The more structural and effective solution is to make customized news illegal. Platforms would not be allowed to deliver information based on who they think we are; they would have to deliver the same news about the world to all of us. If we really want certain kinds of stories filtered or emphasized, we should do that ourselves — the same way we pick which cable channel to watch or which newspaper articles to read. Imagine a dashboard, like the system preferences on a computer, except that instead of letting us choose which apps can send us notifications and banners, the control panel lets us choose which news services can send us headlines or which subjects we want emphasized in our feeds.
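One way to picture that dashboard is as a plain, user-editable set of preferences that our own apps apply, rather than a profile the platform infers about us. The sketch below is hypothetical, not a description of any existing product: the reader explicitly lists the sources allowed to send headlines and the subjects to emphasize, and the filter does nothing the reader has not asked for.

```python
from dataclasses import dataclass, field

@dataclass
class NewsPreferences:
    """Choices the reader makes explicitly; nothing here is inferred from behavior."""
    sources: set[str] = field(default_factory=set)            # outlets allowed to send headlines
    emphasized_topics: set[str] = field(default_factory=set)  # subjects to surface first

def apply_preferences(stories: list[dict], prefs: NewsPreferences) -> list[dict]:
    """Keep only chosen sources; float emphasized subjects to the top, otherwise keep given order."""
    allowed = [s for s in stories if s["source"] in prefs.sources]
    return sorted(allowed, key=lambda s: s["topic"] not in prefs.emphasized_topics)

prefs = NewsPreferences(sources={"AP", "Reuters"}, emphasized_topics={"climate"})
feed = apply_preferences(
    [
        {"source": "AP", "topic": "sports", "headline": "Playoff results"},
        {"source": "AP", "topic": "climate", "headline": "Emissions report released"},
        {"source": "SomeBlog", "topic": "celebrity", "headline": "Gossip item"},
    ],
    prefs,
)
print([s["headline"] for s in feed])
# ['Emissions report released', 'Playoff results']
```

The point of the design is that the choices live with the reader, where they can be inspected and changed at will — which is exactly what an engagement-maximizing algorithm never offers.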
Such choices shouldn’t be made for us — least of all by profit-maximizing algorithms — no matter how much more “engagement” that provokes.
And under no circumstances should a platform make choices about what stories and images it delivers to us without disclosing the criteria it is using to do so.
Navigating the digital environment is difficult and lonely enough. We mustn’t let its individually customized realities convince us that we truly live in different worlds.