Social Media's Slow March to Oblivion
pirate wires #65 // facebook-liking death threats, neutral commitments to state propaganda, and here comes your weird messy future
Tools of the state. Last week, Facebook and Instagram (parent company Meta) partially lifted a ban on death threats in Ukraine, making room for citizens of the besieged nation to call for the death of their invaders, as well as the death of Russian President Vladimir Putin — provided the threats aren’t credible. This followed a similar decision by Twitter, drafted after U.S. Senator Lindsey Graham shared his earnest hope that someone in the Kremlin would assassinate Russia’s leader. The most recent in a string of surprising tech decisions, it was inadvertently a flex of power by U.S. social media’s global hegemony, and it ironically pointed to that hegemony’s fate: a dominance that relies on some illusion of neutrality is bound for the ash heap of history. Ten years from now, provided America doesn’t descend into real political authoritarianism, social media as we know it — a few enormous companies, moderating “truth” as defined by the beliefs of Washington Post columnists, used by everyone — will no longer exist. It has in some important dimensions already stopped existing.
On announcing the decision, Meta’s carefully-crafted language made clear the purpose of lifting the ban on death threats was only to allow sharing at a time of heightened emotion, not to facilitate real-world harm. Rules navigating “hate speech,” ambiguous by nature, are an especially bizarre imposition when the neighborhood kids were just gunned down attempting to escape the country. But the call nonetheless evoked a sense that Mark Zuckerberg had personally greenlit the world’s very first Russian hunting season. What followed was a cornucopia of useless tech press takes, from smug eye rolls and the implication that Facebook had just worked murder into its terms of service, to convoluted think pieces arguing Meta’s decision signaled Silicon Valley’s acquiescence to status as a weapon of the state, which was actually a good thing (???). But in terms of this particular decision, the boring truth is just that we’ve never had a digital hive mind at global scale before, and there’s no rule book for moderating conversation this loud while cities are being bombed. Facebook is navigating, as best it can, a difficult situation with no right (which is to say “not horrifying”) answer. This is war, after all. If it feels good, consider for a moment you might be evil.
The New York Times’ Mike Isaac was among those pushing back on the earliest bad takes; the Atlantic Council’s Emerson Brooking hinted at the complexity of the situation when he took apart the reductive Reuters headline that triggered them:
Emerson T. Brooking (@etbrooking): “If you are piling onto this, at least read past the headline. If someone is at war, it is reasonable for them to call for violence against soldiers who have invaded their country. ‘The Russians’ almost always refers to Russian soldiers in these nations.” https://t.co/5OjbClANjf
But this is also bigger than a little “death to Putin.”
The broader context of Meta’s decision is the largely unexamined trend of technology companies, which exercise global influence and instantaneous reach with no historical precedent, participating in war. I touched on this a few weeks ago, when our social media companies made their first, awkward steps in Ukraine, and Elon Musk delivered internet to the region. Then Apple and Google disrupted the Moscow metro system, and the industry, nearly in lock step, downranked or blocked Russian state media from the internet — yes, propaganda, but honestly, folks, what isn’t these days?
In many cases, tech’s war-time decisions have been made in direct response to local regulation, from the European Union’s demand for censorship to Russia’s blockade on western social media platforms following what it correctly perceived as platform antagonism (however justified). But there has been no shortage of unilateral action taken, as evidenced most recently by DuckDuckGo CEO Gabriel Weinberg’s admirable decision to publicly note, rather than quietly act upon, his allegiance to Ukraine before announcing his search engine would downrank Russian “disinformation” sources. While this decision was characterized by the New York Times’ Stuart Thompson as merely frustrating for the “far right,” a clear allegiance of this kind, publicly declared in the middle of a war, is obviously problematic for a company branded as a politically neutral tool for sourcing information. In the first place, for those of us interested in what the Russian government is actually communicating, a sense of what Times writers feel about the Russian government’s communication is nice but not enough — sorry, Stu, sure you’re a great guy though! In the second place, any clear indication of a powerful tool’s ability to meaningfully work against the state is an invitation to further state control, any hint of which undermines the tool’s integrity.
On the topic of mis- and disinformation, we could have another censorship conversation. I could make my pitch (again) for a focus on turning down virality rather than amping up speech controls. I could argue (again) for the importance of openness, and point out (again) that our arbiters of truth have made unacceptable errors for years, on topics as important as corruption and public health, which should decisively end the argument that some people are deserving of such incredible responsibility as deciding what’s legally true, a question our most intelligent philosophers have grappled with from the beginning of recorded history. But all of these arguments presume most Americans in power want a free internet, which is clearly not the case. Then, this week, while I watched influential media personalities call for Facebook’s exertion of more, rather than less, control as the company triggered a public reaction from a foreign government that only weeks ago threatened the world with nuclear annihilation, it occurred to me that possibly none of this matters. The concept of a global mono-narrative policed by five Stanford grads from their mansions in Mountain View was an interesting, if alarming, experiment. The experiment failed. A third of the world is now closed to American social media platforms for what leaders behind the Silicon Curtain consider purposes of national security. Then, with our platforms’ evolution into what appear to be — despite much nuance here — tools for the mediation of our own state propaganda, their dominance across the west has begun to erode, and will continue to erode.
In Quentin Tarantino’s Inglourious Basterds, there’s a gripping scene in which a table of Allied spies masquerading as loyal Germans is caught in a drinking game with a handful of actual Nazis. But the real game is this: despite much laughter and teasing and drinks all around, the spies and the Nazis all understand that everyone is lying, and the moment even one person slips up, and makes this clear, the laughter will stop. Ultimately, the truth of course comes out. The game ends (violently).
It was a strange and dangerous dance for Silicon Valley, with everyone in power feigning not to know the truth. Social media giants wanted openness, which has always been their key to growth. Once the giants grew, the government and media wanted control, which would undermine the giants’ dominance. Even as unofficial state control of social media increased, and openness diminished, Americans were gaslit to believe their communication platforms were free and neutral — this isn’t censorship, this is “moderation,” this is “fact checking,” this is helpful, we are helping you. That we all played along with the lie was in some sense more important than any official state narrative, as the truth, that state propaganda is effectively law online, would not only diminish state influence, but undermine platform usefulness. Were the truth to come out, everyone in power would, to some degree, lose. Well, the platforms are fighting Russia now, and there is no such thing as neutral war.
The more our platforms demonstrate adherence to a single, state narrative, the more suspiciously the platforms, as well as the bluecheck hall monitors who dominate the platforms, will be regarded. In the old Soviet Union, the case was never that Soviets didn’t understand they were being lied to. They simply had no recourse in a prison state. Technology has changed the world considerably since the 1980s, and while a massive, bloated state media apparatus can probably still thrive in an authoritarian nation like China or — now — Russia, where alternatives are simply not allowed, people in even relatively free speech environments will sense they can’t find “real” information on censorship-rich platforms, and look elsewhere for the truth. There are tradeoffs here. Much of “elsewhere” is frankly very crazy. But when our choice is to suffer lies or to look for wisdom in the clown world, there is only one reasonable course of action: hand me the rubber nose, I’m going in.
Trends toward podcasting and Substack will continue. Aggregation of anti-institutional voices into new institutions is inevitable. Barring legal manipulation from our government, which has until now mostly relied on unofficial powers of persuasion to exert its speech control, new tools for sharing information will proliferate. Our largest social media companies will bumble on, bloated and cartoonish, with ever greater commitments to woke propaganda none of us take seriously, as well as causes we mostly all support — glory to Ukraine, and many such chapters to come. But while the official conversation may still take place on these platforms, largely among state propagandists, and between the few dissidents smart enough to navigate an ever-changing set of rules left purposely unwritten, real news will be gathered elsewhere, and shared in private, digital whispers. As legal manipulation of speech is still unconstitutional, I suspect the future can’t be stopped, or at least not in America. This future will be weird and messy.
Then again, take that with a grain of salt. At the end of the day, I’m still an optimist.