Why the AI Revolution is Stuck in Traffic
It’s Not an IQ Contest Out There
Artificial Intelligence is giving the internet a collective panic attack. If you believe the viral headlines, we are back in February 2020, watching an economic ‘virus’ prepare to wipe out half the global workforce. The prophets of doom – some of whom actually help build the technology – warn that cognitive labor is about to hit its ‘End of History.’
They aren’t entirely wrong about the power or the risks of AI tools, but they are fundamentally wrong about the velocity of change. Unlike the Covid-19 virus, AI is not doubling every 2.5 days. Although AI is indeed a ‘superpower,’ the revolution is running behind schedule because the real world is far stickier, messier, and more human than a Silicon Valley white paper suggests.
There are two groups of AI doomers, and they play off each other stereophonically. The first group worries about a technology avalanche. They believe we are no longer dealing with a better search engine, but with systems that are developing the ability to set their own goals and manipulate their environment to achieve them.1
The second group argues that we are on the precipice of an economic avalanche. A post by HyperWrite CEO Matt Shumer (“Something Big is Happening”) is typical. Shumer and writers like him see the capabilities of Claude 4 and GPT-5 and warn of an economic apocalypse. But they don’t consider several real-world complexities.2 Specifically, they ignore:
Bottlenecks that slow the deployment of any general-purpose technology. The real world is sticky, political, and messy.
New work that emerges to fill unanticipated and infinite human desires. These desires create demand for new services. We rarely see this coming.
Elastic demand. As costs decline and prices fall, we sometimes purchase more. The result is more work, not less.
Complement or substitute? Technology can give you superpowers. Or it can take your job. It often does both. Predicting this in advance is difficult, and it is less subject to planning and intervention than many researchers think.
Comparative advantage. This idea is deeply counterintuitive and confuses everyone: humans have a comparative advantage over any technology, even when the technology has an absolute advantage.
Big changes are afoot, but most of those waiting for an avalanche are looking up the wrong mountain. AI could easily worsen economic inequality and social isolation. It can facilitate bioterrorism, compromise cybersecurity, and enable large-scale theft. It could waste a lot of money and crash markets. It will reshape wages in ways that are impossible to foretell. But artificial intelligence is not going to lead to widespread, long-term structural unemployment.
Solonomics
Surely, everyone needs a favorite economist. Mine is the late Bob Solow, who could cut through the fog like San Francisco sunshine. Solow won the Nobel Prize in 1987 for his groundbreaking work on economic growth, and four of his students also won Nobel Prizes.3 Solow cared deeply about labor markets, and for good reasons. Plus, he could be very funny.
Shortly after Solow won his prize, he observed that productivity growth in the United States remained stagnant despite huge investment in computers and IT during the 1970s and 1980s. In what became known as the “Solow paradox,” he wrote, “You can see the computer age everywhere but in the productivity statistics.”
Solow knew it took time for a General Purpose Technology (GPT) to transform work. A GPT is an innovation like the steam engine, the microprocessor, or electricity that eventually drives widespread economic growth. GPTs transform work, they alter social structures, and they reshape multiple industrial sectors. As adoption spreads, they improve steadily and spawn additional innovations. Eventually, they are pervasive, like power outlets, chips, or the internet. There is little doubt that AI is a GPT.4
Solow was not the first to highlight the time lag between the introduction of new technology and its actual impact on economic output. He had studied under Wassily Leontief, who received the 1973 Nobel Prize in Economics for explaining the time lags between innovation, adoption, and social impact.
Leontief became famous for observing the obsolescence of horses. He noted that for many years, tractors and horses worked side-by-side. As tractors began to replace horses, horse “productivity” rose because the animals were used only for a small set of essential tasks. Eventually, however, horse productivity went to zero as horses lost their economic function entirely. His observation led Leontief to argue that the slow initial impact of automation should not fool us. Involuntary unemployment arrives only after the structural adjustment period is over. What Leontief emphasized less is that, unlike horses, humans generate the demand that keeps us relevant.5
“It’s Not an IQ Contest Out There”
Historically, GPTs follow a specific path: their capabilities “wow” us initially, but productivity statistics don’t improve for decades, as organizations take years to restructure and fully utilize the new technology. The canonical example is the shift from factories powered by a large central steam engine to those powered by electricity running individual machines. While electricity was available starting in the 1880s, it was not until the 1920s that a majority of manufacturers reconfigured their factories from centralized steam-driven line shafts with miles of belts and hundreds of pulleys to decentralized electric motors plugged into the power grid.
ChatGPT has been able to pass the bar exam for several years, but we are not seeing a mass layoff of junior associates at law firms. Many AI tools are better at customer service than people are, but adoption is slow at present.
The reason is bottlenecks. As the space shuttle Challenger showed, when a faulty O-ring in one solid rocket booster destroyed the entire vehicle, the failure or slowness of a single component can hold up or destroy a whole system. Even as AI solves the “intelligence” bottleneck, it will highlight several others:
Institutional Inertia: Most companies are not high-trust, high-velocity startups. They are governed by irritable humans who dislike change.
The “Last Mile” Problem: AI can draft a contract, but it cannot (yet) navigate the office politics of a merger or the emotional volatility of a courtroom.
Regulatory Sludge: In 2024, the Bureau of Labor Statistics reported that the most AI-vulnerable sectors (legal, software, finance) experienced increased employment. Why? Because complexity grew faster than automation.
In every organization, as the cost of intelligence drops to near zero, the value of the remaining non-automated bottlenecks—trust, physical presence, and human accountability—soars. Or as a wise union leader trying to advise his young, overconfident hotshot once told me: “Marty, it’s not an IQ contest out there.”
The Invention of New “Work”
We constantly invent new categories of labor. Not long ago, “professional YouTuber” or “Yoga Influencer” would have sounded like a mental illness. Today, these careers are made possible by the massive surplus generated by our agricultural and industrial efficiency. As AI generates more surplus, we will shift labor into new areas.
High-Touch Services. Some of this will be teaching, counseling, coaching, or other emotional labor that AI tools may supplement. Likewise, niche artisanal crafts. To the extent these are in-person services whose productivity is not improved by AI or other information technologies, they will see little efficiency growth. As a result, they tend to pay poorly. As Paul Krugman memorably put it, “Productivity isn’t everything, but in the long run it is almost everything.”
Status Games: We may see more jobs that exist primarily to signal prestige or human effort. In many industries, we spend extra on handmade crafts. Many artists command premiums, as do many online influencers. Today, between 8 and 12 million people earn more than $100,000/year as online influencers. It’s only 15% of all influencers, but it’s more than the number of accountants making six-figure incomes worldwide and about the same as the number of software engineers.
Management or orchestration. Almost everyone doing knowledge work will likely manage a team of AI agents. My physician says he currently manages three agents. One of them listens to every patient conversation he has and writes summaries for him to review and approve. He loves it – and so do I. Managing a fleet of agents will likely be a part of every job and will confer what today looks like superpowers.
Research, invention, and prototyping. Fifty percent of Gen Z adults say in surveys that they plan to start a new business or side hustle in 2026, compared with 44% of millennials and 31% of Gen X. Most won’t, and many others will simply try to become social media influencers. But as AI grows in capability, some scientific research, product engineering, and product prototyping may move outside of large enterprises and become cottage businesses.
AI is already giving younger, less experienced workers basic competence across a range of skills needed to launch a business (coding, writing, marketing, etc.). Ideas have become easier to test, and startups easier to launch. For this reason, and because Y Combinator is pivoting to incubating nerds again, the median age of a YC founder is 24, down from 30 in 2022.
Complements or substitutes?
Some things can be known ex ante, meaning “before the event”, based on the information available at the time. I can know in advance when the men’s Olympic biathlon finals will take place. Other things are only knowable ex post, meaning “after the fact.” We do not know ex ante who will win the gold medal; we can only learn this ex post.
Even though our current (and extraordinary) Nobel laureate in economics argues otherwise, it is very hard to know ex ante which tools enhance human skills and which replace them. Technology that simplifies a task today may help to eliminate it tomorrow. Travel agents were excited about the internet ex ante because it empowered them to discover new destinations. Looking back ex post a decade later, they realized it had enabled consumers to book travel directly, and 70% of their jobs had disappeared. But more recently, travel agent jobs have recovered, in part thanks to creative AI tools that enable agents to specialize in high-complexity, high-trust, high-stakes travel (e.g., luxury safaris, corporate logistics) that are less easily automated. But would you bet today, ex ante, that we will have many travel agents in five years? I wouldn’t – but I could well be wrong. These patterns—initial complementarity, followed by obsolescence—make technological forecasting a treacherous endeavor.
Even though we cannot usually tell ex ante which technologies will make our jobs easier and which will undermine them, an individual professional should embrace any new GPT and try to learn to use it. It is especially valuable to learn to use AI tools, since we can be confident they will reshape work, even if we are uncertain how.
In 1979, VisiCalc, the first widely available spreadsheet, came out and helped make the Apple II popular. It was kind of magic, and an accountant friend learned to use it immediately. He liked to surf, and he soon found that VisiCalc let him get a week’s worth of work done in about 12 hours. So for a few years, he surfed a lot while the rest of the world caught up. He could have produced more, of course. He could have started an accounting firm, trained his team to use spreadsheets, and made some money. But he preferred to surf.
He often predicted that spreadsheets would destroy the market for accountants, but spreadsheets did not destroy the market for anything. His prediction made sense ex ante, but it was wrong ex post.
Trying to predict how a specific technology will affect specific jobs is mostly a mug’s game. A massive survey of CEOs and employees, published this week, reflects how utterly confused most opinions are.
Elastic demand
Will we run out of work? Only if demand for human output is fixed or “inelastic”. But history suggests the opposite. Often, as a resource becomes more efficient to use, we don’t use less of it; we use vastly more.
Technology raises our standard of living by making goods and services cheaper. When things become cheaper, we sometimes spend less and pocket the savings. Food was this way. As agriculture modernized and made food cheaper, we did not eat much more because demand for food was fairly inelastic. So we needed fewer farmers.
Other times, however, we want to buy more of whatever just became cheaper. There are many examples of demand that rises as costs fall. As lighting became cheaper, we bought more lights. As cars became more fuel-efficient, we didn’t spend less on fuel; we drove more. As AI makes coding cheaper, the world doesn’t decide it has “enough” software. Instead, every small business suddenly wants a custom app, every legacy database needs an AI overhaul, and the total “surface area” of the digital economy expands.6
Elastic demand is one reason a cost-reducing technology does not always eliminate jobs. This is very tough to predict ex ante.
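The arithmetic behind this argument can be sketched with a toy constant-elasticity demand model. All numbers here are illustrative assumptions, not estimates: a technology halves the price of some output, and we compare total spending (a rough proxy for total work) under inelastic versus elastic demand.

```python
def quantity_demanded(price, base_price=100.0, base_qty=100.0, elasticity=-1.0):
    """Constant-elasticity demand curve: Q = Q0 * (P / P0) ** elasticity."""
    return base_qty * (price / base_price) ** elasticity

# A technology halves the price from 100 to 50. Compare total spending
# (price * quantity) under inelastic (-0.3) and elastic (-1.5) demand.
spend_before = 100 * quantity_demanded(100)  # baseline spending = 10,000
for e in (-0.3, -1.5):
    spend_after = 50 * quantity_demanded(50, elasticity=e)
    print(e, round(spend_after / spend_before, 2))
# prints:
# -0.3 0.62   (inelastic: spending falls -- the "farmers" case)
# -1.5 1.41   (elastic: spending rises -- the "lighting, fuel, software" case)
```

When demand is elastic (elasticity below −1), the quantity purchased grows faster than the price falls, so total spending rises even as each unit gets cheaper. That is the pattern behind lighting, driving, and, plausibly, software.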
Comparative vs. Absolute Advantage
The most common error in AI discourse is confusing absolute advantage with comparative advantage. Even if AI becomes better than a human at every single task (absolute advantage), we are not entitled to conclude that it will remove all human work.
In a world of finite resources (specifically, finite compute and energy), AI will be deployed where its marginal utility is highest. As long as a human can contribute anything to a process—even just as a “final checker” or a “preference setter”—and as long as that human’s opportunity cost is lower than the cost of more GPUs, the human remains employed.
We see this in software engineering. Tools like Claude Code have led engineers to believe they are two to ten times more productive. Yet, according to data from Indeed and JOLTS, job postings for specialized developers haven’t cratered; they’ve shifted. At the moment, anyway, cyborg programmers (humans plus AI) are the most efficient unit of production in the global economy. Will this change? Yep. Everything will change.7
The idea of comparative advantage is not intuitive. Competitive advantage (aka absolute advantage) is easy to understand. It means being the most efficient or highest-quality producer of a good or service. If an AI can write code faster, with fewer bugs, and at a lower cost than a human programmer, the AI has a competitive advantage in coding.
In a world of pure “competitive” thinking, being better at a task means winning 100% of that task. If AI is “better” than humans at medicine, law, and plumbing, a competitive-advantage mindset suggests humans will be unemployed in all three fields. But this ignores the reality of scarce resources.
Comparative advantage is not about who is the best at a task, but about what each person (or machine) is best at doing relative to everything else they could be doing. It relates closely to the concept of opportunity cost – the forgone value of the next best alternative.
As a presidential candidate, Barack Obama understood comparative advantage. He used to say to his campaign team, “I am better than you at your job. I am a better organizer, a better speechwriter, a better speechmaker, a better fundraiser, a better political analyst, and a better policy analyst.” And he was right – he actually was better at all of these things. But he could not possibly do all of these things, and he knew it. He had a competitive advantage in every task, but his comparative advantage was running for president. Even though he was better at a range of tasks, the opportunity cost of doing that work was high, whereas the opportunity cost for most of his staff was much lower. Every hour he spent knocking on doors, drafting press releases, analyzing congressional districts, or working out policy details was an hour he was not spending meeting with large numbers of people to win a presidential election. Obama was vastly better off hiring people to do work that he could do better and focusing on his comparative advantage – being the candidate.
We need to apply comparative advantage to AI. Even if AI becomes better than humans at everything, it still faces constraints. Compute, electricity, and time are all finite. If a super-intelligent AI can either solve nuclear fusion or write a marketing jingle, the world will want it to solve fusion.
We may never get “God-like” AI, but if we do, the opportunity cost of using it to do mundane tasks like nursing, teaching, or plumbing is the high-value breakthroughs it could have been working on instead. Like the Obama campaign, even if AI is “better” at plumbing than a human, the human retains a comparative advantage in plumbing because the human isn’t capable of solving nuclear fusion. Our time is “cheaper” to use for those tasks.
This distinction is vital: competitive advantage is a race to be the most efficient. Comparative advantage is about what trade-offs people with specialized skills make every day. So long as AI resources are not infinite, there will always be tasks where the opportunity cost of using AI is too high. Humans will not only perform those tasks, but in a world made hyper-wealthy by AI, they will likely find themselves highly compensated for the tasks that only they are “cheap” enough to do.
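The fusion-versus-plumbing logic above reduces to a small allocation problem. Here is a minimal sketch with made-up numbers: the AI has an absolute advantage at both tasks, but because its time (compute) is finite, total output is higher when it specializes in its comparative advantage and leaves plumbing to the human.

```python
# Value produced per unit of time, in arbitrary dollars (illustrative only).
# The AI is better at BOTH tasks -- an absolute advantage across the board.
ai_output = {"fusion_research": 100.0, "plumbing": 10.0}
human_output = {"fusion_research": 0.0, "plumbing": 5.0}

# Each worker has one unit of time.
# (a) Naive allocation: the AI does plumbing because it's "better" at it,
#     leaving the human to attempt fusion research.
naive = ai_output["plumbing"] + human_output["fusion_research"]

# (b) Comparative-advantage allocation: the AI does fusion (where its
#     opportunity cost of plumbing is huge), the human does plumbing.
specialized = ai_output["fusion_research"] + human_output["plumbing"]

print(naive, specialized)  # prints: 10.0 105.0
```

The human keeps the plumbing job not by outperforming the AI, but because diverting the AI to plumbing would cost the world 100 units of forgone fusion research, while the human’s forgone alternative is worth nothing. That is opportunity cost doing the work.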
Embrace Your AI Exoskeleton
To be clear: AI will force us to rebuild our world. But we are not in a race against the machine; we are in a race to see who can best wear the machine as an exoskeleton.
The ‘Solow Paradox’ tells us that the economy takes decades to rewire itself, but it doesn’t promise that you have decades to wait. AI confers professional superpowers—the ability to orchestrate agents, to prototype at lightspeed, and to solve the ‘last mile’ problems that remain.
The danger isn’t that a robot will walk into your office and take your chair tomorrow. The danger is that you will neglect to develop these superpowers while your competitors and your colleagues embrace them. In a world of ‘human-plus’ labor, standing still is the only true existential threat. The revolution is running late, but it is still coming. Don’t let it find you unarmed.
ICYMI
A theory about why men and lesbians dominate comedy
I’d rather candidates from California and New York sit out the 2028 presidential election, but The New Yorker did a solid profile on Gavin Newsom anyway.
A guy who can really write describes a guy who can really talk: Remnick on Rogan.
The slogan for a new academic journal devoted to papers written by AI is “Taking ideas seriously, not ourselves.”
The excellent Connor Dougherty on whether America can create new cities by copying Irvine, California. Answer: probably not.
T-Mobile is offering to translate your phone calls into another language in real time.
This group includes Geoff Hinton and Yoshua Bengio, two of the “godfathers” of AI, both Turing Award winners and very accomplished engineers. Both focus on “alignment” problems – meaning the possibility that AI develops autonomous goals. They are joined by industry CEOs like Dario Amodei of Anthropic, who estimates a 25% chance of catastrophic outcomes, and Sam Altman of OpenAI, who has flagged cybersecurity and bioterrorism risks, noting that AI agents are now capable of finding critical software vulnerabilities that human defenders might miss. It is unwise to ignore the warnings of people this close to the technology.
Dario Amodei has been vocal about the coming displacement of software engineers. Mustafa Suleyman (CEO of Microsoft AI) recently predicted that AI will be able to perform “most, if not all,” professional white-collar tasks within the next 12 to 18 months. He specifically called out lawyers, accountants, marketers, and project managers as being in the crosshairs for full automation. Kai-Fu Lee of Sinovation Ventures and author of AI Superpowers, has long predicted that 40–50% of jobs are at risk. In early 2026, he predicted that the year would see digital autonomous workers begin handling routine labor end-to-end, leaving humans with a “leadership gap” they aren’t prepared to fill.
Solow’s students included Nobel Prize winners George Akerlof (2001) for his analysis of markets with asymmetric information; Joe Stiglitz (2001) for pioneering the economics of information and how it affects market behaviors; Peter Diamond (2010) for his analysis of markets with search frictions (specifically labor market dynamics), and Bill Nordhaus (2018) for integrating climate change into long-run macroeconomic analysis. Solow also deeply influenced the thinking of Nobel Laureate Paul Krugman and CEA Chair Laura Tyson, although he did not formally serve as their PhD advisor.
It’s easy to imagine that ChatGPT stands for General Purpose Technology, but it actually stands for Chat Generative Pre-trained Transformer.
Like Solow, Leontief mentored four Nobel Prize winners as doctoral students. Solow was one of them. Others included Paul Samuelson (1970) for building the mathematical foundations of a unified economic theory; Vernon Smith (2002) for founding experimental economics, testing market theories in controlled settings; and Thomas Schelling (2005) for pioneering the application of game theory to conflict, nuclear strategy, and climate change. (This royal lineage is unmatched in economics, but Nobel academic legacies in physics and chemistry are famously concentrated. Some 29 Nobel Prize winners descend academically from Sir J.J. Thomson, who discovered the electron and won the 1906 Nobel Prize in Physics.)
Technological improvements that increase the efficiency of resource use and lead consumers to demand more of that resource rather than spend less are sometimes known as the Jevons Paradox. William Stanley Jevons was an economist who noticed that England’s consumption of coal soared after James Watt massively improved the efficiency of the coal-fired steam engine. Watt’s engine used much less coal, but this made coal a more cost-effective power source and led to much more coal use overall, even though the amount of coal required for any particular application fell.
As this article went to press, Erik Brynjolfsson (who was mentored by Bob Solow at MIT) published some of the first evidence of AI-driven Solow effects. Writing in the FT, he reports that:
“Data released this week offers a striking corrective to the narrative that AI has yet to have an impact on the US economy as a whole. While initial reports suggested a year of steady labour expansion in the US, the new figures reveal that total payroll growth was revised downward by approximately 403,000 jobs. Crucially, this downward revision occurred while real GDP remained robust, including a 3.7 per cent growth rate in the fourth quarter. This decoupling — maintaining high output with significantly lower labour input — is the hallmark of productivity growth.”

