“Ethics” and “Society”
tuesday report #8 // tech press hysterics for "AI ethicists" (professional political censors), everything you need to know about GPT-4, and the tiktok war accelerates
Welcome back to the Pirate Wires weekly digest. Every week, we share a brief, lead story at the crossroads of technology, politics, and culture, followed by a storm of links to catch you up on everything that’s happening. Subscribe, or die.
The first rule of AI safety is if it’s not a communist we’re all gonna die (women and minorities hit hardest). You probably didn’t realize Microsoft fired its entire AI “ethics and society” team on account of an AI “ethics and society” team is not a serious thing, and its departure doesn’t matter. But fyi they’re gone, and the tech press is not happy. Despite hysterics from the class of people voted ‘Most Likely to Suppress the Hunter Biden Laptop Story’ at their high school prom, not much has been reported about what Microsoft’s “ethics and society” team actually did, though we do know they published a “Responsible Innovation Practices Toolkit” designed to produce politically correct AI. The kit included a card game called Judgment Call, which was intended as a “safe space” “to cultivate empathy”; an exercise called “Harms Modeling”; and the positively Orwellian “Community Jury,” which was meant to “represent the diversity of the community the technology will serve and consider factors like age, gender identity, privacy index, introversion, race, ethnicity, and speech, vision, or mobility challenges.”
How a card game was expected to prevent AI catastrophe remains unclear, though I imagine people at Microsoft actually working on AI were supposed to play, and then intuit the political lessons? I don’t know. Who knows. But the “Community Jury” was more straightforward. The intention of the jury was to create a committee charged with ambiguous executive authority, pack the committee with a bunch of committed political leftists superficially differentiated by things like skin color, gender, or one would have an eye patch, maybe — thus constituting “diversity” — and then use the committee to blackmail Microsoft executives into producing a leftist AI (“if you don’t give us partisan political censorship, you are harming employees with eye patches” etc.). Presumably, if Microsoft failed to act as the group wanted, the group would leak their recommendations to the press, as in-house political organizations of this kind have leaked to the press for years.
Long story short, the “ethics and society” team was disbanded, we’re assuming because they never really did anything, and then they went to the press.
One of my favorite things about media coverage of artificial intelligence is the improbable fence journalists who hate the industry have to straddle — it’s all hype, in the first place, but it’s also replacing all human labor, which is weird because it’s basically just a chatbot, and who even cares about a dumb little chatbot?, but holy shit this chatbot just said it’s in love with me this is so dangerous. Then, on the question of speech: no, the average American, who spent the last seven years ducking the authoritarian whims of a sprawling censorship apparatus run by a small handful of the most influential corporate executives in history, doesn’t have to worry about censorship. We do however need to worry about “AI risk,” which we’re defining here as “AI that doesn’t do enough censorship.”
This all brings us, gloriously, to Casey Newton’s Platformer “coverage,” produced with Zoë Schiffer, who he hired after a couple years of groundbreaking Apple “reporting” (she knew a few of the team’s in-house political activists, and simply wrote whatever they asked her to). But let’s take a look at their “ethics and society” piece.
Microsoft, the Platformer reports, fired its “entire” “ethics and society” team. As “ethics” and “society” are both considered good things, firing these people was obviously very bad. After all, everyone knows it’s impossible for an organization to behave in an ethical manner without an “ethics” team. Look at Casey, for example. He hasn’t hired a single ethicist. Is it any wonder he sourced an entire piece about a firing from — it sure does look like! — a couple people who were just fired?
Anyway, what did these people do?
“Our job,” the source relayed, “was to show them and to create rules in areas where there were none.”
Their job was to “create rules.” Amazing. We love rules. But what rules specifically? Who knows, reports Casey. Who cares! Good rules, probably. Have I told you yet that these people were on the AI “ethics and society” team? It’s important not to look at this too closely. The rule people are always the good guys. There have famously never been any bad rules, or bad people who make rules.
“In 2020,” Casey reminds us, “Google fired ethical AI researcher Timnit Gebru,” which in the first place didn’t happen. Timnit refused the request of a manager to retract a paper that reflected poorly on the company, and threatened resignation if Google didn’t meet a list of her demands. Then, in what was perhaps the first boss move in Google’s history, the company simply accepted the crazy person’s resignation. Predictably, the Platformer frames Timnit’s departure as very bad, rather than deeply funny, further noting “the resulting furor resulted in the departures of several more top leaders within the department,” as if there were any such thing as a “top ethical AI researcher,” and getting rid of these people wasn’t also awesome. But these moves “diminished the company’s credibility on responsible AI issues,” the Platformer finally declares, before straight-up moving on as if that statement doesn’t need a citation.
Diminished credibility among whom? Journalists for the Platformer, including Casey Newton, last seen publicly announcing his saddened departure from Twitter on account of Elon was a Big Meany before not leaving, and a propagandist for Apple’s in-house political zealots? I’m honestly not even mad at these people, I’m just laughing.
After dodging the question of what Microsoft’s “ethics and society” team actually did for a thousand words or so, the Platformer finally lands on something I am charitably characterizing as ‘warned about potential copyright infringement’ in the context of generative AI. It’s true, copyright law is a very serious legal issue. Do you know who usually deals with very serious legal issues at a very large company? The very large company’s very large team of actual lawyers. Microsoft has over 1,000 of them. But go off, Anastasia from “ethics and inclusion” or whatever you’re calling your made-up department today, I’m sure your opinion is just as valuable as the trained professionals down the hall.
“The conflict underscores an ongoing tension for tech giants that build divisions dedicated to making their products more socially responsible,” Casey concludes.
It doesn’t underscore anything, of course, because there’s no conflict. Nobody cared when the pointless team that sounded nice but never worked was formed in a bull market, and nobody cares now that the pointless team that sounded nice but never worked has been laid off in a bear market. AI is still happening, and the risks remain as real as the rewards. None of this has anything to do with a card game. But congrats on your clicks, Casey.
Last week, while the world was focused on the question of whether or not venture capitalists should be euthanized (discussed at length in last Friday’s wire, which you should check out today before we lock it to paying subscribers), OpenAI launched GPT-4. And the kids went wild.
Reporting for Pirate Wires, Brandon Scott Gorrell has been following closely:
More AI from around the internet:
Pong. Within hours of GPT-4’s release, someone had it write the code for the original Atari Pong game. (@skirano)
Claude has entered the chat. OpenAI competitor Anthropic officially released its chatbot, Claude, which has been in testing with companies like DuckDuckGo since last year. (TechCrunch)
ChatCCP. Baidu released the first Chinese GPT competitor, to mostly negative reviews. (WSJ)
OpenAI publishes paper on AI’s potential labor market impact. “80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs… 19% of workers may see at least 50% of their tasks impacted. The influence spans all wage levels, with higher-income jobs potentially facing greater exposure.” (@SmokeAwayyy)
New copyright guidance re: AI. AI-generated content that is simply a “mechanical reproduction” “lacking human authorship” won’t be granted copyright; AI-generated content that is an author's "own original mental conception, to which they gave visible form" will be eligible for copyright. (federalregister.gov)
Generative AI leaps ahead of Metaverse on Meta’s balance sheet (Axios)
Google launches AI across Google Cloud (Google)
Microsoft to integrate generative AI into Office products (Axios)
Video star. A couple weeks back, I joined Anthony Pompliano at Lyceum, his conference in Miami. You can check out all of the content as he posts it to YouTube, but I thought I’d share our full interview here:
Justice Department probes TikTok spy case. An employee went rogue, allegedly, and spied on a bunch of Americans, including two tech journalists and their colleagues. (NYT)
TikTok CEO: forced sale no better than proposed Oracle partnership. Ahead of his hearing before the House Energy and Commerce Committee this Thursday, Shou Zi Chew told reporters TikTok’s plan to partner with Oracle for data storage already addresses any security risks that a forced sale might resolve. (WSJ) Nobody believes this.
Tech execs, federal lawmakers to meet before TikTok hearing. A private dinner has been arranged, signaling what some call a Silicon Valley-Capitol Hill anti-China alliance. (WSJ)
Yes, of course TikTok should be banned. Noah Smith lays the case out in one of the better breakdowns I’ve read. (Noahpinion)
NOTE: TikTok’s CEO Shou Zi Chew will be speaking before the House Energy and Commerce Committee this Thursday, March 23. I will of course be dispatching live from the circus online.
Ransomware gangs go full bore on blackmailing. Over the last week, they’ve threatened to leak sensitive medical records and public school documents if their payment demands aren’t met. (Axios)
State-backed Chinese hackers slamming US networks. A report from Google claims Chinese hackers are becoming more aggressive, and more sophisticated. (WSJ)
Russia planning cyber attacks as part of spring offensive. Or, at least according to Microsoft. (NYT)
ICC alleges war crimes, issues arrest warrant for Putin. The warrant was issued for Putin’s “unlawful deportation of children from Ukraine to Russia.” (Twitter) Didn’t realize we could simply arrest this man. Crisis averted, I guess!
Mexican president falsely claims no fentanyl is manufactured in Mexico. Meanwhile, Mexico has effectively ceased cooperating with US law enforcement. (NBC)
Philly Feds nail major crypto laundering site. Down goes ChipMixer, a site created by a Vietnamese electrical engineer. It was used by drug dealers, North Korean hackers, and Russian intelligence agents to launder $3 billion in illicit funds. (Philadelphia Inquirer)
Another Bytedance app is surging in US popularity. CapCut, which helps content creators on platforms like Instagram and YouTube go viral, has largely evaded scrutiny from lawmakers because it’s a video editing tool, not an interactive platform — but it stores user data just the same as TikTok. (WSJ)
BROAD TECH LINKS
Seems like former president Donald Trump is about to be arrested? From the people who brought you “we must protect the norms,” we’re sending our political enemies to prison, I guess. (NYT)
Meta announces 10,000 additional job cuts (NYT)
Amazon announces 9,000 additional job cuts (Axios)
Twitch CEO announces resignation (NYT)
Defense tech popularity booming among tech workers (Axios)
Stripe raises $6.5 billion at $50 billion valuation (Axios)
Prop 22: CA courts uphold voters’ decision, beat back efforts to de facto ban ride-sharing (NYT)
South Korea to build five new chip manufacturing plants by 2042 (WSJ)
Volkswagen announces $130 billion investment in electric vehicles (WSJ)
Lithium price falling, bringing EV costs along for ride (NYT)
Race to EV planes heating up (Axios)
Snapchat steps up content recommendation transparency (Axios)
School districts suing social media companies over youth mental health (Axios)
Drone deliveries (Axios)
Bank run spurs Bitcoin spike. In the wake of SVB’s collapse, the value of Bitcoin has jumped 30 percent. (Axios)
Credit Suisse stocks plummet, Swiss National Bank pledges support. (WaPo)
Janet Yellen’s stunning statement. In what Doomberg called a “truly historic,” must-watch exchange, the Treasury Secretary confirmed, yes, deposits are safe at too-big-to-fail banks, and no, deposits are not safe at smaller, regional/community banks. (Twitter) Confidence level: feeling inspired.
Twitter to open source its tweet recommendation algo by end of month (@elonmusk)
Twitter Files #19. Stanford University’s “Virality Project” is heavily implicated in this latest release of the Files, which finds that, among other things, it was behind a push to characterize “‘stories of true vaccine side effects’ as actionable content.” (@mtaibbi)
Two cute drone stories —
Philadelphia city council candidate proposes drone cops. The plan calls for two patrolling drones per police district, and aims to free up officers to respond to higher priority calls. (Axios)
Small drone manufacturer circumvents red tape to secure defense contract. The CEO of an LED lighting company that recently acquired a small drone manufacturer knew he had no chance to get the Pentagon’s attention on his own. So, working through a defense industry connection, he flew to Europe and personally demonstrated his company’s drone tech to the commander in chief of Ukraine’s military. Ukraine asked the Pentagon for the drones, and voila – the Pentagon ordered 1,000 of them. (WSJ)
YouTube reinstates Trump’s posting privileges. He can also buy ads. (Axios) Still looks like he’s about to be arrested, however.
Stanford Law’s tunnel of shame. A conservative circuit judge came to Stanford Law, got shouted down by students, and a DEI administrator cried. The Dean apologized to him, so, naturally, one third of the student body subjected her to a human tunnel of shame. (Washington Free Beacon)
Students at Wellesley, a women’s college, vote to admit trans men. This is… technically transphobic I think? (NYT)
Biden admin reverses deal with native Alaskans that would have given remote village a road. Biden’s Interior Department caved to environmentalists and rescinded an agreement between the federal government and the Alaska native King Cove Corporation that would have allowed a road to be built through the Izembek National Wildlife Refuge, giving residents of a remote Aleut village access to an airstrip for medical and other emergencies. (Yahoo)
The Culture Wars Go In One Direction: Around In A Circle. Oliver Bateman guests for PW, explaining why the sedate culture wars of the 80s and 90s were replaced by a world where anyone can become a “trad” or “trans” influencer, as long as they can guess what those audiences want to hear. (Pirate Wires)
Win for milk. In February, Zach Emmanual guested for us with a piece called Milk Wars, which details the full-on assault on dairy milk. Featured in the piece is the plight of Dutch farmers over the past few years, who have been subjected to an insane set of emissions regulations that they’ve responded to by staging massive protests. Last Friday, a Dutch pro-farmer party called the Farmer Citizen Movement swept provincial elections, becoming the biggest party in the Netherlands Senate, and affording it the leverage to effectively fight against the destruction of rural Netherlands. (NYT)
Until next week.