Dopamine Thirst Trap

pirate wires #27 // when algorithms run for president, dopamine thirst, instacart misinformation, "truth" by popular demand, and welcome to the pirate wires live chat
Mike Solana

Another quick note, I’ve decided to experiment with a brief library of links at the bottom of each week’s wire. I’ll be including noteworthy tweets, longer form pieces, and news stories I find interesting or important. In the future, feel free to scroll straight down for the links, or read the wire and ignore the links, or unsubscribe and send me hate mail. I don’t care what you call me, baby. Just call me.

As ever, feel free to comment on this week’s wire below, or let me know what you think about the link library, Pirate Waves, or whatever else is going on inside that beautiful mind of yours.

Okay, back to regularly scheduled programming.

The shape of things. Last week, I took another look at the anti-tech ‘extraction’ meme that’s been gaining traction among local government apologists who, we have improbably discovered, somehow exist in this country. But, in the context of Clubhouse, the aforementioned voice-based social media app, I also briefly touched on a dynamic I’d like to expand on today: new communication platforms, operating at scale, shape an entirely new kind of politician.

Via ‘Extraction Intensifies’:

This particular instantiation of voice chat has unlocked something powerful, and a new dimension of social media is emerging. Five years from now, it’s possible Clubhouse, or a similar application, will be something political leaders actually have to engage with in order to succeed. Were such a shift to happen, voice chat would have as dramatic an impact on the shape of politics as Twitter, without which, for example, neither the Republican presidency of Donald Trump nor the domination of Alexandria Ocasio-Cortez over the Democratic Party would be possible. In a Clubhouse world, we would see an entirely new kind of politician emerge.

Rogan for president? I’m not even joking.

Geoff Lewis recently raised an interesting point on the topic:

That a technology as newly dominant as social media must necessarily present new and potentially calamitous challenges for human civilization is something I’ve written about at length. Last year, in Jump, I took a look at the potential for social media-induced information disaster, an entirely new class of catastrophe. We haven’t yet seen ‘The Big One,’ but there are warning signs everywhere, including one this week in the GameStop stock rally. In Tether, I explored our increasingly warped sense of shared reality, the inherent malleability of information in the digital age, and the dramatic erosion of large-scale group identity endemic to a digital world. But what about the narrow question of politics, and politicians?

Twitter, Facebook, Instagram, TikTok: each of these apps has been designed to keep people engaged for as long as possible. Capturing attention was also the goal of print, radio, and television, but the Twentieth Century was an age before machine learning, which greatly amplified our ability to give people what they really want — or, at least, what they really want right now. The progression was natural, and inevitable. Social media companies replaced a small, elitist cabal of mass media truth arbiters (who, by the way, have never forgiven them) with the seething, chaotic human Id, individual by individual, at the scale of hundreds of millions or even billions. In aggregate, this tends to present as a ravenous, drug-addicted digital mob.

Basic human impulses have always shaped culture, but now artificially intelligent algorithms directly reward positive social stimulus in the form of “likes” and “retweets” with frequent, daily dopamine hits to the brain. What people “like” and “retweet” in the greatest numbers tends to cluster into our most immediate and animalistic desires: status, sex, and war, or conflict, itself the realm of fear and anger. We also get a lot of humor, a kind of pressure release (thank God). The dopamine reward naturally drives content creators, from journalists and performance artists to politicians, in subtle ways they themselves may not even be able to perceive. Without our daily (hourly?) dose of attention, many of us, and certainly most compulsive creators, start to feel the pangs of dopamine thirst. We reach for our phone. We post.

This is classically presenting addiction shit. Do you really want that cigarette, or are you literally out of control right now?

In the narrow context of politicians, it’s important to remember that social media does not incentivize people to say what they think, it incentivizes people to say what they think is working. To some extent, this has probably always been true, and not only in politics. But the dopamine delivery device for social validation has not always lived in our pocket. Proximity to the drug has catalyzed frequent use, frequent use has led to dependence, and now this addiction — to attention, essentially — is shaping the political world. Have you ever scrolled to the beginning of an Instagram celebrity’s photos and noticed a jarring evolution from grainy pictures of food, sunsets, and laughing friends to a near-exclusive library of selfies and well-oiled abs by the pool? Well, the final boss Twitter version of the Instafamous professionally pretty person appears to be a highly polarizing populist demagogue, and politicians like this are now winning elections that once seemed impossible.

A lot of ink is spilled on the dangerous rhetoric of politicians like Trump and Bernie Sanders. But social media, a reward system and delivery device for our most impulsive, immediate desires, mainlined these people into culture. These politicians aren’t our greatest danger, and neither is their rhetoric. The system of rewards and delivery driving these politicians into existence is the problem we need to address, and given the realities of virality I think this problem, if left unaddressed, could be existential.

Technically true. Much has been said of the media’s political bias, but I’m beginning to wonder if some journalists might simply be suffering from extreme dopamine thirst. As is now evidenced beyond all doubt, a significantly thirsty writer will tweet just about anything for a hit of attention. This is sort of the implied meaning of “hot take,” a brief argument that may be correct, or interesting, but that is clearly written in a way intended to polarize. The hot take, however, is the stuff of opinion. The version of this we sometimes see in reporting is framed as “truth,” which is much more serious. As with almost everything on the internet, misleading headlines are not a new phenomenon. What’s new, and notable, is the scale of our exposure to the phenomenon. Today, on Twitter, many of us now spend a significant portion of our lives inside a virtual world of headlines.

This week, a piece of what was meant to be sober tech reporting was framed in a way that absolutely blew my mind: “Instacart is firing every employee who voted to unionize,” tweeted the Verge. The obvious read, here, is Instacart fired employees for unionizing, which was the sentiment shared by hundreds of comments on the story, many of which were themselves widely shared. This is how the internet works, which every reporter knows. Indeed, reporters often lament the fact: no one reads the actual story. The tweet, for most people exposed, is the story.

This all raises the question: if we understand that the tweet and headline framing a story are of such dramatic consequence to public discourse, how can anyone justify a dishonest tweet or headline?

As it isn’t legal to fire employees for organizing a union, a move like this from Instacart would be extremely dangerous. In fact, the story seemed so insane I found myself compelled to do something truly rare: I briefly paid attention to the Verge, and clicked. After the headline, the subhead, and the first three paragraphs, what I finally learned was this:

Instacart did lay off ten grocery workers who voted to unionize. But these workers comprised a fraction of a percent of two thousand layoffs. The total number of grocery workers? Ten thousand. I also learned this story was basically a rehash of a Motherboard piece, which was just as dishonestly framed. But so it goes. The Instacart layoffs are newsworthy, but the purpose of framing the piece as a hit on unions was not to honestly inform. It was to enrage.

Dan Primack, business editor over at Axios, was quick to critique the move.

Zoe Schiffer, the writer responsible for the Verge piece, was quick to defend her work.

One commenter mentioned the offending headline (which was in the first place much more than a headline, let’s not forget) was “objectively true.” Lee Edwards countered:

This may all seem like a niche drama, but the frame was dishonest, anyone willing to look at this story with open eyes can see the frame was dishonest, and all of this is happening in a broader context of newly ferocious efforts to suppress “inaccurate” or “misleading” speech. This brings us to Twitter.

One of the many problems inherent in a social media platform where top-down censorship of the “untrue” is made officially the law of the land is that it effectively rubber-stamps a giant “ACCURATE” tag over every piece of information that is not censored. In a world where the “untrue” is ostensibly removed from view, people naturally assume they can believe what they read. In this way, Twitter’s model of censorship is beginning to exacerbate the impact of misinformation rather than serve its intended purpose (one charitably presumes) of truthfully informing the nation.

People often express confusion when I ask why, for example, the insane disinformation we are now seeing from the Chinese government is allowed on American social media platforms. “Aren’t you the free speech guy?” Folks, we do not live in a world of free speech. We live in a world of censorship. My argument, then, is: fine, whatever, you’re the one in oligarchical power, but can we at least have a coherent censorship? If your stated policy is to police content for misleading information, all misleading content needs to be policed. It’s really that simple. Anything short of impartial and total rule enforcement implies an obvious and dangerous bias on the part of social media executives.

The mob is always right. Anyway, Twitter said “fuck it” this week and decided they could crowdsource the truth.

I publicly critiqued Birdwatch for what I believed was the obvious danger inherent in determining “truth” by popular vote. Like, from science to civil rights, literally when has this ever worked out? A handful of commenters expressed frustration with the point, and asked what I would do about the problem of misinformation were I in charge. But misinformation is as old as human civilization. What’s new is the phenomenon of instantaneous information virality, and this is where we should focus.

If we can agree that instantaneous virality of inaccurate information, and the consequence of such virality, poses incredible danger to society, it follows that slowing down the speed of information sharing — broadly — would be far more beneficial than attempting to police the “truth,” which is a concept the greatest philosophical minds in human history have failed to define for literally thousands of years. But we do have levers for speed. Speed is something we can manage. My humble proposal to save the world: cap every social media post at a thousand shares. Cap every user at a million followers. While we’re at it, let’s go ahead and delete the concept of “verified user” badges, which imply accuracy where there is no such promise. I also think a slew of new tools for mass blocking would be helpful, if counterintuitive. The argument for keeping people exposed to mobs of others whom they can’t stand, or who can’t stand them, is that such exposure reduces polarization, which… I mean clearly and obviously no, that is not the truth. With total exposure to each other, we have not become less polarized. Allowing people to group in smaller social silos would, as with the rest of these proposals, reduce the speed of information sharing.
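For the engineers in the room, the proposal is simple enough to sketch in a few lines. This is purely illustrative pseudologic under the numbers floated above (a thousand shares, a million followers) — the function names are hypothetical, not any real platform’s API:

```python
# Hypothetical sketch of the proposed virality caps. The constants come
# from the proposal above; everything else is invented for illustration.

SHARE_CAP = 1_000          # a post stops propagating at a thousand shares
FOLLOWER_CAP = 1_000_000   # an account stops growing at a million followers

def can_share(post_share_count: int) -> bool:
    """Return True while a post is still under the share cap."""
    return post_share_count < SHARE_CAP

def can_follow(follower_count: int) -> bool:
    """Return True while an account can still accept new followers."""
    return follower_count < FOLLOWER_CAP
```

The point isn’t the code, which is trivial. The point is that the lever is mechanical: no one has to adjudicate “truth,” the platform just throttles speed.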

The obvious problem here is the people in control of the narrative on misinformation are 1) social media executives who are incentivized primarily to catalyze growth and engagement, both of which are assisted by virality, and 2) social media influencers who experience analogous growth (and status (and money)) from virality, and intuitively don’t want to lose this power.

But mass communication is broken, and this problem isn’t going to fix itself. Eventually, we’ll all have to answer the same question: am I interested in improving our discourse, or do I actually just like censorship?

-SOLANA
