Hallucinations in AI
Jul 3
everything you think you know about AI has been told to you by someone with an incentive to lie (and mostly to themselves)
John Luttig
Meta's multi-hundred-million-dollar comp offers and Google's multi-billion-dollar Character AI and Windsurf deals signal that we are in a crazy AI talent bubble.
The talent mania could fizzle out as the winners and losers of the AI war emerge, but it represents a new normal for the foreseeable future. If the top 1 percent of companies drive the majority of VC returns, why shouldn't the same apply to talent? Our natural egalitarian bias makes this unpalatable to accept, but the 10x engineer meme doesn't go far enough: there are clearly people who are 1,000x the baseline impact.
This inequality certainly manifests at the founder level (Founders Fund exists for a reason), but applies to employees too. Key people have driven billions of dollars in value: look at Jony Ive's contribution to the iPhone, or Jeff Dean's implementation of distributed systems at Google, or Andy Jassy's incubation of AWS.
The tech industry gradually scaled capital deployment, compounding for decades to reach trillions in market cap. The impact on the labor force has been inflationary, but predictable. But in the two and a half years post-ChatGPT, AI catch-up investment has gone parabolic, initially towards GPUs and mega training runs. As some labs learned that GPUs alone don't guarantee good models, the capital cannon is shifting towards talent.
Silicon Valley built up decades of trust: a combination of social contracts and faith in the mission. But the step-up in capital deployment is what Deleuze would call a deterritorializing force, for both companies and talent pools. It breaks down the existing rules of engagement, from the social contract of company formation, to the loyalty of labor, to the duty to sustain an already-working product, to the conflict rules that investors used to follow.
Trust can no longer be assumed as an industry baseline. The social contracts between employees, startups, and investors must be rewritten. In the age-old tension between mission and money, missionary founders must prepare themselves for the step-function increase in mercenary firepower.
Hypercapitalist AI talent wars will rewrite employment contracts and investment norms, concentrate returns, and raise the bar for mission and capital required to create great new companies.
As a thought exercise, how much should Google have paid for DeepMind? In 2014, a $400M acquisition of a pre-revenue company seemed nonsensical. But with the leverage that comes with Google scale, the DCF value could be quite high: a few percentage points in net savings on their datacenter costs could make it a 100x+ return over a decade, and that's in a pre-LLM world! In the context of Google paying $3B for Noam Shazeer, they've probably already earned back that investment through his help getting Gemini training runs unstuck; the deal even looks modest with a year of hindsight.
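To make that DCF intuition concrete, here is a minimal back-of-envelope sketch in Python. The cost base, growth rate, savings rate, and discount rate are all assumed illustrative figures (only the ~$400M price comes from above), so read the output as a sensitivity exercise rather than a valuation.

```python
# Back-of-envelope DCF for an efficiency-driven acquisition.
# Every input below is an illustrative assumption, not a reported figure,
# except the ~$400M acquisition price mentioned above.
acquisition_price = 400e6   # DeepMind's reported price tag
datacenter_cost = 20e9      # assumed year-1 datacenter spend
cost_growth = 0.20          # assumed annual growth in that spend
savings_rate = 0.03         # "a few percentage points" of net savings
discount_rate = 0.10        # assumed cost of capital
years = 10                  # decade horizon

npv = 0.0
for t in range(1, years + 1):
    yearly_cost = datacenter_cost * (1 + cost_growth) ** (t - 1)
    npv += (yearly_cost * savings_rate) / (1 + discount_rate) ** t

print(f"NPV of savings: ${npv / 1e9:.1f}B")
print(f"Multiple on acquisition price: {npv / acquisition_price:.0f}x")
# The multiple scales roughly linearly with the cost base and savings rate,
# which is the gap between a ~20x outcome under these assumptions and the
# 100x+ scenario sketched above.
```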
From the Big Tech point of view, if AI is a $10T+ revenue opportunity, and your research team scales sublinearly with revenue and caps out at a few hundred researchers, is the difference between spending $5M, $10M, or $20M per researcher per year enough to stop you? $10B per year in researcher comp is less than a quarter of Meta's annual capex. No matter the odds of ultimate product-market fit, the sunk cost is too large to turn back now.
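A quick sanity check on that arithmetic, with the headcount cap and capex figure as assumptions in the spirit of the paragraph above:

```python
# Rough arithmetic behind "researcher comp is small next to capex".
# Headcount and capex are assumptions for illustration.
researchers = 500              # "a few hundred researchers" cap
comp_per_researcher = 20e6     # the high end: $20M/year per researcher
annual_capex = 65e9            # assumed Meta-scale annual AI capex

researcher_bill = researchers * comp_per_researcher
print(f"Total researcher comp: ${researcher_bill / 1e9:.0f}B/year")  # $10B/year
print(f"Share of capex: {researcher_bill / annual_capex:.0%}")       # ~15%, under a quarter
```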
Even in 2014, the AI talent wars existed: Meta was reportedly the other bidder in the Google-DeepMind deal. But why didn't pricing for top talent run up sooner? The confluence of compute leverage, urgency, and supply constraint means the labor share of value is higher than in prior technological waves:
No single analogy is perfect, but we can learn a lot from athletes, actors, and traders, where the best are worth 10x or 100x the average. All three categories have tremendous capital magnification: a superstar needs expensive infrastructure (compute clusters, risk systems, studio marketing, training facilities). Managing these superstars has a few unique properties:
Hypercapitalism erodes Silicon Valley's trust culture. Industry-level trust alone no longer guarantees loyalty between companies and talent. With trade secret leakage risk and money big enough to tear teams apart, vanilla at-will employment contracts don't protect either side.
The industry needs a SAFE equivalent for tech talent. New employment contracts must satisfy demands from both companies and talent:
We're in the early days of labor repricing: big tech's AI capex investments are so large that they already have the sunk cost, and labor as a percentage of total investment is still low. Companies must re-think their recruiting and retention strategies.
The talent war is a net-consolidating force on the AI research frontier. At the research labs, big dollars for researchers make it nearly impossible for new entrants to play. For the same reasons, it's nearly impossible to start a new quant fund: you can't get the same leverage out of the talent that big players can.
In the tradeoff between money and mission, the money has gone parabolic. Founders use both to magnetize top talent to their companies, but as the capital opportunity cost increases, only the strongest missions can justify the economic sacrifice that candidates make. To the credit of both OpenAI and Anthropic, money alone has not been enough to make their best researchers defect; their cult status effectively creates a multiplier on their R&D budgets.
The labs feel the talent war most directly, but all startups now require extreme resource aggregation to make AI R&D bets. When the opportunity cost of top talent is higher (for both founders and engineers), it becomes harder to coordinate top talent around an early-stage bet. SSI, Thinking Machines, and Physical Intelligence all required massive funding rounds for a shot on goal. A single research hire can cost the entirety of a Series A fundraise, making AI R&D far more expensive for startups, pushing most to live above the APIs.
A startup industrial complex around the Seed → Series A → Series B progression emerged in the 2010s to support the growth of software companies. Some companies still follow this pattern successfully: Harvey, Abridge, Glean, and others. But I believe that going forward, an increasing share of startup successes will have a "fat pitch" founding story: incubations with stacked founding teams, high institutional credibility on day one, and uniquely powerful missions.
Modern successes like SpaceX, Anduril, and OpenAI could not be built as lean startups. They are too long-horizon and capital intensive to work through the traditional apparatus. The most promising tech frontiers often have high activation energy â foundation models, robotics, biology â where mega-rounds are the only way to bridge to the future. The AI capital influx means that mega-projects no longer seem outlandishly expensive. This is good for the world!
On the big tech side, the talent wars thin the playing field to companies with 1) tens of billions in net income that they can cut into, and 2) leaders with founder-like agency who will heavily sacrifice earnings for a seat at the AI table. A sharper power law will create a new "giga-cap" class, with multiple $10T companies by 2035.
Some subset of startup winners will have a similar formula to what worked in the 2010s: small, scrappy team, building iteratively until they crack product-market fit. An increasing percentage of the new winners will have large war chests and strong missions from day one. AI talent wars will be a net-consolidating force.
Being a rigid seed or Series A-only investor in 2025 is anachronistic. Should you simply ignore the most important tech companies of this generation?
Investors must be more flexible than prior generations. The best companies wonât map to the predictable fundraise sequence of the past 20 years. Rapid product adoption will require investors to swallow their pride and admit misses much more quickly. For some companies investors passed on six months ago, the right decision is to invest today at 2-3x the valuation.
At the early stage, a new deal consideration has emerged: investors evaluate companies with the team quality constituting their downside case. Character and other "talent deals" make investors think that they can't lose money investing in top-tier research teams; it's almost like investing in an AI researcher labor union. Because investors get paid back first, as long as the company exits for more than the cumulative fundraising amount, they are made whole. Investors then justify putting more dollars in at an earlier stage than they would otherwise.
SSI and Thinking Machines both follow this pattern. Investors don't even need to scrutinize the exact technical approach, because the upside of AGI is infinity (1 percent chance of breakthrough → $10T+ company). If you believe you can't lose money given the team quality, the upside case is almost like a "free" call option.
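A minimal sketch of that payoff asymmetry, with made-up deal terms and probabilities, shows why the bet can look like a "free" call option once the downside is assumed to be covered:

```python
# Expected value of a "talent deal" under the optimistic assumption that
# team quality alone guarantees an exit at least equal to capital invested.
# All inputs are made up for illustration, not actual deal terms.
invested = 2e9               # cumulative capital raised (mega-round scale)
ownership = 0.20             # assumed investor ownership after the round
p_breakthrough = 0.01        # "1 percent chance of breakthrough"
breakthrough_value = 10e12   # "$10T+ company" outcome

downside_payoff = invested                      # money-back floor via liquidation preference
upside_payoff = ownership * breakthrough_value  # investors' stake in the $10T outcome

expected_value = p_breakthrough * upside_payoff + (1 - p_breakthrough) * downside_payoff
print(f"Expected value: ${expected_value / 1e9:.0f}B on ${invested / 1e9:.0f}B invested")
# ~$22B expected on $2B in; but the money-back floor is itself an assumption
# about the team, which is exactly where the downside case can break.
```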
But if VCs assess the talent wrong (overestimate the talent, or overestimate the talent's commitment to the company), they can get nuked on big slugs of capital. Even if a team gets to a technical breakthrough, capturing the value isn't guaranteed. Research teams that achieve technical breakthroughs are not necessarily the ones that get product and sales right.
Historically, the social contract of starting a company meant the founders would see it through to an exit. But how big do the numbers need to get before that breaks? People didn't use to leave companies they founded, especially nascent and/or highly valued ones, but the AI talent war is deterritorializing. This fragility enables a CEO or key execs to leave their company with minimal recourse.
Just as the founder <> researcher social contract is being rewritten, investors need to reconfigure the founder <> investor social contract for this new world, particularly for research-heavy teams:
Only lead checkwriters of large rounds have enough leverage to command these terms. This makes it harder than ever to be a pure early-stage investor in AI research-driven companies.
As an investor, you need the founders you back to have an answer to the talent war. Either they have a cult-like missionary following, or they need a clear path to winning a mercenary game with higher stakes than ever.
In the 2010s software bull market, success in the startup world was broad, and it felt like everyone could win (anyone could start a software company, or at least join / invest in one). That's still somewhat true; lots of people have built multimillion-dollar ARR AI businesses in short order. The one-person unicorn concept suggests that anyone can start a big company using AI.
But in the new world, the concentration of outcomes will be different, both at the talent and company levels: fewer companies will capture more of the funding and revenue, and fewer employees will capture more of the pay. Only the fiercest founders and strongest missions can offset the inflection in mercenary market forces.
High earners usually avoid attention, but splashy nine-figure researcher offers draw significant public interest. There is a human bias against accepting singular winners (of talent, of companies); it doesn't feel fair to have a few people run away with big markets. There is something uniquely unstable about a more uneven distribution of success: the French had an exceptionally high Gini coefficient before the Revolution.
The M&A talent war is just beginning, raising compensation baselines and increasing labor promiscuity. To protect against the deterritorialization, I expect new labor dynamics to emerge on both sides of the table: agents, unions, aggressive non-compete tactics. As the numbers get bigger for talent and companies, all sides need to reimagine the social contract. As the glue holding teams together, company mission matters more than ever.
The AI talent wars will rewire Silicon Valley.
– John Luttig
Thanks to Axel Ericsson, Philip Clark, Melisa Tokmak, Joey Krug, Cat Wu, Will Manidis, Robert Windesheim, Lachy Groom, and Will Depue for their thoughts and feedback on this article.
This originally appeared on John's Substack, Luttig's Learnings.