Thinking Outside the Black Box
What the algorithms can’t see may be the most human thing about us

By Douglas Rushkoff. Published on Medium on 9 January 2019


It feels like we’ve finally reached “peak Facebook.”

Thanks in part to recent revelations about how the company gave Netflix, Spotify, and others access to user data and private messages, as well as its dirty-tricks campaign to smear Facebook critic George Soros, people are becoming aware that the platform doesn’t merely hurt society as a side effect. Facebook is an intentionally bad actor. That said, maybe there’s something to be gained or learned from the social network before it’s gone.

We may not have a lot of time. Facebook’s products have become too obviously destructive for almost anyone to justify. A decade of articles, documentaries, and school curriculums dedicated to explaining to users that they are not Facebook’s customers but its product, combined with evidence of its weaponized memetics and that icky feeling of being actively targeted by algorithms, has finally taken its toll: People use Facebook when they absolutely have to, but rarely because they want to. And now, with all the desperation of a cigarette company denying that its product is addictive, Facebook has revealed just how low it will go by blaming Russian spies for using the platform as designed or smearing Jewish philanthropist George Soros because, well, he’s an easy target and a surefire distraction.

Sure, Facebook may yet throw an audacious Hail Mary, like when a declining AOL went and bought Time Warner, but even the purchase of a Netflix or Disney would only temporarily slow the bleeding. The corporate brand is shot because Facebook has become the face of algorithmic malfeasance. It is the poster child for how technology can be turned against human agency. The company employs behavioral finance, privacy invasion, and machine learning to manipulate users in the fashion of Las Vegas slot machines, and then claims either innocence or ignorance when the social impact of these machinations is revealed.

But now that Facebook seems destined for irrelevance, I find myself wondering if there may be a positive use for the platform after all. No, not to make friends or communicate with people — neither of which were ever Facebook’s strong points anyway. The real value we can derive from Facebook comes from interacting directly and purposefully with its dark innards: the algorithms themselves.

Facebook’s most aggravating aspect is also its most intriguing: the way it attempts to predict our needs and desires. On the surface, it’s simple: The platform uses what it knows about us from our clicks and likes and shares and choices of friends to allow marketers to deliver ads to which we are likely to respond. Sometimes the data is analyzed by Facebook itself, and sometimes it is analyzed by Facebook’s clients — who can then combine the information they get about us on Facebook with the data they get from the web’s many other surveillance tools. That’s how the subject of an email thread or web search can end up following us around as ads. Thanks to data sharing between companies, cookies that track our web activity, and algorithms that read our public posts, most of the internet works this way. Retweeting a critical message about a progressive political candidate may get you targeted ads or articles decrying gun control, for instance.
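To make that mechanism concrete, here is a minimal sketch, with invented users, topics, and weights, of how behavioral signals get folded into a targeting score. It is not Facebook’s actual system; it only illustrates the logic described above: every click, like, and share feeds a profile, and ads are ranked by how well they match it.

```python
# A toy illustration (not Facebook's real pipeline) of behavioral targeting:
# each click, like, and share bumps interest weights, and every ad is ranked
# by how well its topics overlap the profile the platform has built.

from collections import defaultdict

# Hypothetical event log: (user, action, topic)
events = [
    ("alice", "click", "politics"),
    ("alice", "like",  "guns"),
    ("alice", "share", "politics"),
    ("bob",   "click", "travel"),
]

# Assumed weights: stronger signals count more toward the profile.
ACTION_WEIGHT = {"click": 1.0, "like": 2.0, "share": 3.0}

def build_profiles(events):
    """Aggregate behavioral signals into per-user interest weights."""
    profiles = defaultdict(lambda: defaultdict(float))
    for user, action, topic in events:
        profiles[user][topic] += ACTION_WEIGHT[action]
    return profiles

def rank_ads(profile, ads):
    """Score each ad by the overlap between its topics and the profile."""
    scored = []
    for ad, topics in ads.items():
        score = sum(profile.get(topic, 0.0) for topic in topics)
        scored.append((score, ad))
    return sorted(scored, reverse=True)

# Hypothetical ad inventory, keyed by the topics each ad targets.
ads = {
    "gun-control-outrage": ["politics", "guns"],
    "cheap-flights":       ["travel"],
}

profiles = build_profiles(events)
print(rank_ads(profiles["alice"], ads))
# [(6.0, 'gun-control-outrage'), (0.0, 'cheap-flights')]
```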

I don’t know which I find more unnerving: when the ads and recommendations that stalk my every online move have too uncanny an understanding of who I am, or when they throw back at me a picture of myself that I don’t relate to. Like anyone, I’m disturbed when the conversation I just had on my cellphone about a private medical issue seems to have informed the new ad chasing me around the web. More than likely, the algorithms diagnosed me based on surfing habits, age, and other metadata.

But what about when the algorithms start sending me ads and articles with more extreme views than I hold myself? Lists of Trump’s “treasonous” acts, the ways Russia is spying on my home, or how to stop immigrant invaders? How about when they seem to know my worst fears and then play on them and exaggerate them to get me to respond? In other words, clickbait, personalized to my psychological profile, as determined chiefly by an analysis of my online behavior. Anyone who has followed the recommendation engine on YouTube knows that after delivering one or two innocuous videos, the “Up Next” queue serves up increasingly extreme content. The algorithms push us to become caricatures of ourselves. They don’t merely predict our behavior; they shape it.

So even while platforms like Facebook, YouTube, or Twitter may be terrible sources of news and information, the transparency of their manipulations and those of their client marketers offers us a window into the way the digital media environment implacably reconfigures itself — and, by extension, the world it controls — based on its narrow judgments and aggressive manipulations. Everything that shows up is, in one way or another, a reflection of our prior actions, as processed by the algorithms and placed in the service of corporate interests. It’s an entire ecosystem of news, marketing, advertising, and propaganda, tethered to algorithms, all sharing information among themselves about who we are, how we think, what we respond to, and what we ignore.

Those algorithms, as we’re now learning, determine a whole lot more than which ads we will see, the pricing of our airline tickets, or the conspiracy theories that find their way into our newsfeeds. They factor into decisions about our bank loans and mortgages, our visas and airport screening, our job applications, parole determinations, legal reputations, or even our ability to land a babysitting gig. The way these algorithms assess our suitability is, of course, proprietary and secreted in black-box technologies. But, as numerous researchers have determined, those black boxes are filled with the very same prejudices that have been reinforcing racial bias and other forms of oppression all along.

In just one example, judges now use an algorithmic tool called COMPAS to determine the sentences of convicted felons. The higher the expectation that someone will return to prison, the longer the sentence. But in calculating a convicted felon’s recidivism score, COMPAS doesn’t actually evaluate a person’s likelihood of committing a crime, but simply their likelihood of getting caught. In other words, the algorithm inevitably amplifies the institutional bias of police, who are more likely to arrest blacks than whites for the same crimes. The approach not only furthers racial injustice, but also undermines the supposed purpose of correctional facilities in the first place. And that’s just in the United States. In China and other more repressive states, one’s social credit score punishes or rewards people based not merely on what they do or say, but also on what their online connections do or say.
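That amplification is easy to demonstrate. Below is a toy simulation, not the actual COMPAS model, with made-up group names and rates: two groups re-offend at exactly the same rate, but one is policed more heavily, so a score trained on re-arrest records assigns it roughly twice the “risk.”

```python
# A toy simulation (not the actual COMPAS model) of the point above:
# if two groups re-offend at the same rate but one is policed more
# heavily, a score trained on re-arrest data "predicts" higher risk
# for the over-policed group, even though the behavior is identical.

import random

random.seed(0)

REOFFEND_RATE = 0.30          # assumed: same true behavior in both groups
ARREST_GIVEN_OFFENSE = {      # assumed: policing intensity differs
    "group_a": 0.80,
    "group_b": 0.40,
}

def observed_rearrest_rate(group, n=100_000):
    """What a model trained on arrest records would see as 'recidivism'."""
    rearrests = 0
    for _ in range(n):
        reoffends = random.random() < REOFFEND_RATE
        caught = reoffends and random.random() < ARREST_GIVEN_OFFENSE[group]
        rearrests += caught
    return rearrests / n

for group in ("group_a", "group_b"):
    print(group, round(observed_rearrest_rate(group), 3))
# group_a ~0.24, group_b ~0.12 -- identical behavior, double the "risk score"
```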

Making matters worse, all these decisions are being based on the outputs of proprietary technologies. If there’s an upside to Facebook’s clumsy efforts to mirror us back to ourselves, it’s that they help expose this otherwise opaque process. They’re a peek into the sorts of black boxes that are increasingly coming to dominate our society. And as uncomfortable as we feel staring an algorithm in the face, the ones shrouded in secrecy are even more damaging. By making the machinations of algorithms more visible, Facebook’s grasping bids for our attention reveal the faulty logic, pernicious speculation, and embedded prejudices of an algorithmically determined social, legal, and political order. Facebook has become a chilling case study in why algorithms should never be applied in this way — at least not at this juncture in our technological and social development. All they do is camouflage institutional biases in a cloak of technosolutionism.

And it must be emphasized that Facebook’s algorithms are not neutral; they understand us only through the lens of capitalism. They want to know only the things about us that can be monetized. The whole platform is built on that foundational understanding of human personality as defined by self-interested consumption. That’s how it helped reduce voting to a consumer choice.

If you really want a neutral internet, then start by using an anonymous browser, turning off cookies, searching without a profile, and hiding your IP address. Another response may be to flummox the algorithms entirely. I’ve tried altering my behavior to see if the platform would serve me different content. But no matter how well I resist clicking on sensationalist stories, the algorithms don’t seem to learn that I want to see an accurate picture of the world. They are committed to finding my personality “exploits” and provoking an impulsive or unconscious reaction. I’d be better off clicking on everything at random — except that there’s likely a category for the kind of person who does that and a handy way to monetize the gesture.

A former student of mine helped develop a browser extension called AdNauseam that not only blocks web ads, but goes ahead and clicks on every blocked ad in the background. User tracking becomes futile, because the algorithms can only conclude you are interested in everything. It’s the sort of strategy recommended by the author-activists behind the book Obfuscation, in which they argue that the best way to fight digital surveillance is to camouflage, push back, or even sabotage the algorithms. I’m all for that.
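The logic of that strategy fits in a few lines. What follows is a conceptual sketch rather than AdNauseam’s actual code (the real extension is JavaScript running inside the browser), with hypothetical tracker URLs, showing only the click-everything half of the trick.

```python
# A conceptual sketch of the obfuscation idea, not AdNauseam's actual code.
# Every ad a blocker would hide still gets a background request, so the
# tracker's log records interest in everything and the inferred profile
# carries no signal.

import urllib.request

def click_all_blocked_ads(ad_click_urls):
    """Fire a background request at each blocked ad's click URL."""
    for url in ad_click_urls:
        try:
            # Fire-and-forget: we never look at the response, we only
            # want the tracker to register a "click."
            urllib.request.urlopen(url, timeout=2).read(0)
        except OSError:
            pass  # an unreachable tracker is fine; the point is noise

# Hypothetical click-tracking URLs scraped from a page's ad slots.
click_all_blocked_ads([
    "https://tracker.example/click?ad=guns",
    "https://tracker.example/click?ad=travel",
    "https://tracker.example/click?ad=crypto",
])
```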

But as distorted as it may be in its current form, Facebook is still a kind of mirror. And even if it doesn’t represent us accurately — especially if it doesn’t represent us accurately — Facebook offers us a rare opportunity to explore the difference between what algorithms assume about us and who we really are. The aspects of ourselves that they can’t categorize meaningfully, the places where they falter, represent some of our most important qualities as human beings, simply because algorithms can’t yet quantify them. Maybe the best way to know which aspects of our humanity are not computational is to pay closer attention to the limits of the algorithms in use all around us.

What do the algorithms used by judges in determining prison sentences miss about behavior, society, and justice? What do economic algorithms miss about human prosperity? What do medical algorithms miss about human vitality? What do social media algorithms miss about what truly connects us to one another?

Even if we reject algorithmic solutions to the world’s problems, by looking at how they fail, we can come to a better understanding of how we humans can succeed. For this, ironically, we have Facebook to thank.

The gap between who we are and who the platform’s algorithms say we are may just represent the tiny bit of human mystery we have left. We must cherish it and cultivate it in ourselves and everyone we meet. Then maybe we can finally log off and relearn how to be social without media.