Will Google Dominate Artificial Intelligence?
Or Will AI Kill the Best Business Ever?
Imagine a company that accidentally strikes the most valuable oil patch in the history of capitalism. The gusher generates absurd profits. Cash piles up faster than executives can invent new places to spend it. And then—inside the company’s own labs—its scientists invent something that might make oil obsolete.
This is not a parable. It’s the essence of the challenge facing Google.
For two decades, Google has dominated paid search, the most elegant money machine ever devised. They turn curiosity into commerce with surgical precision. You ask a question; Google sells the answer. The system is so profitable that Google (now Alphabet) earns more net income than any company in American history. After funding moonshots like self-driving cars and quantum computing, Google still has so much left over that it regularly shrugs and hands billions back to its shareholders.
And yet Google’s defining technological breakthrough of the past decade—large language models—may threaten the very mechanism that makes it rich.
This is Clayton Christensen’s innovator’s dilemma at planetary scale. But this incumbent is not asleep. It is not stupid. It is too successful to move fast without risking self-harm. Google’s problem is not that it missed out on artificial intelligence. It’s that the AI it helped create is better than search.
The Accidental Birth of a Spectacular Business Model
In the late 1990s, Google was a brilliant search tool that made no money. It was cleaner and faster than AltaVista and more effective than Yahoo’s directory maze. But its founders were deeply uninterested in advertising.
I discovered this firsthand in late 1999. I had started an e-commerce company that sold used and rare “books you thought you’d never find.” Because search engines were essential to us, I visited Google, a promising startup that two years earlier had been a Stanford research project called BackRub.
The Google office looked like any other tech startup: several coders with headphones hunched over Macs. Two dogs asleep on the floor. But everyone knew that Google’s big challenge was revenue, not technology. They were so poor that a year before our visit, the founders had offered to sell their business to Yahoo for $1 million. Yahoo passed, to its eternal regret.1
We asked Google whether we could pay them to feature our website when users searched for author names or book titles. They emphasized the beauty of their organic search results and the corruption implied by advertising. As we got back in our car to drive home, I remember saying to a colleague, “Smart guys. I love the company name. But they need a business model.”
Google soon realized they were wrong in the most profitable way imaginable.
In 2000, they launched AdWords, a self-service platform that let advertisers bid on keywords. Soon, the company pivoted from charging per impression to per-click—a small tweak that sparked an economic revolution. Advertisers like us only paid when users acted. Waste vanished. ROI exploded. Paid search became the rarest of things: a massive market with high margins and near-perfect measurability.
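For readers who like to see the mechanics, here is a deliberately simplified sketch of a second-price, pay-per-click keyword auction. The advertiser names and numbers below are invented, and the real AdWords system layers on quality scores and ad rank, but the core economics are the same: you bid on a keyword and you pay only when someone clicks.

```python
# Illustrative sketch only: a simplified second-price, pay-per-click keyword
# auction. Advertiser names and figures are hypothetical.

def run_keyword_auction(bids):
    """bids: dict mapping advertiser name -> maximum cost-per-click bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # Second-price rule: the winner pays a penny more than the runner-up's bid.
    price_per_click = ranked[1][1] + 0.01 if len(ranked) > 1 else ranked[0][1]
    return winner, price_per_click

def campaign_cost(price_per_click, impressions, click_through_rate):
    # Per-click pricing: cost scales with clicks taken, not with ads shown.
    clicks = impressions * click_through_rate
    return clicks * price_per_click

winner, cpc = run_keyword_auction({"BookFinder": 1.50, "RareReads": 1.20, "BigBoxAds": 0.80})
print(winner, round(cpc, 2))   # BookFinder wins and pays about 1.21 per click
print(campaign_cost(cpc, impressions=10_000, click_through_rate=0.02))  # ~242, charged only for actual clicks
```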
By the time competitors realized what had happened, Google owned roughly 90 percent of the market. Microsoft CEO Steve Ballmer jealously dismissed Google as “a one-trick pony” for relying so heavily on paid search. A story circulated that Google CEO Eric Schmidt responded, “True. But it’s one hell of a trick.”
Google’s gusher was the result of its market, margins, and monopoly. Ads are a trillion-dollar-plus global market. Google has high gross margins because online advertising is highly targetable, making it very cost-effective for advertisers. And with a 90% share of paid search, Google has built a monopoly. Multiply a massive market by high margins and a dominant market share, and the result is an exceedingly profitable business.
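The arithmetic is almost embarrassingly simple. The figures below are placeholders chosen for illustration, not Google’s actual numbers, but they show how quickly a big market, a dominant share, and fat margins compound:

```python
# Back-of-the-envelope illustration; placeholder figures, not Google's actuals.

def profit_pool(market_size_usd, market_share, operating_margin):
    # Total market x share captured x margin kept = annual profit.
    return market_size_usd * market_share * operating_margin

# A trillion-dollar-plus ad market, a dominant slice of it, high margins:
print(f"${profit_pool(1.2e12, 0.4, 0.3) / 1e9:.0f}B per year")  # -> $144B per year
```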
Google sustained its dominant position by creating an ecosystem that reinforced user habits and created massive scale advantages and other competitive barriers. The company became a verb. Default placements on browsers and phones eliminated friction.2 Ownership of Chrome, Android, Maps, YouTube, and the ad-tech plumbing itself ensured that Google didn’t just sell ads—it controlled the pipes through which most online advertising flowed.3
The result was not just a monopoly of traffic, but a monopoly of intent. Like Harry Potter’s Mirror of Erised (“desire” reversed), Google could see what people wanted before they bought it, while they bought it, and sometimes before they knew that they wanted it at all.
Then AI showed up.
Google Was Always an AI Company. That’s a Problem
Long before ChatGPT, Google’s founders saw search as artificial intelligence in disguise. Google founder Larry Page was the son of a professor who earned his PhD in machine learning and artificial intelligence at a time when so many prominent AI theories had failed that these topics were discredited backwaters in computer science.
Larry inherited some of his father’s iconoclasm. Even his PageRank algorithm (named for him, not for the web pages it ranks) is a statistical method that some have classified as an AI technique within computer science; a minimal sketch of the idea appears below. In a rare video from 2000, not long after he and Sergey Brin founded the company, Page said,
“Artificial intelligence would be the ultimate version of Google. If we had the ultimate search engine, it would understand everything on the web, it would understand exactly what you wanted, and it would give you the right thing. That’s obviously artificial intelligence. We’re nowhere near doing that now. However, we can get incrementally closer, and that is basically what we work on here.”
That vision required AI. It always had.
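To make the PageRank point concrete, here is a minimal sketch of the idea in textbook form: a page matters if pages that matter link to it, computed by repeatedly passing “importance” along links. This is the standard power-iteration version found in introductory texts, not the production system Google actually ran; the three-page example is invented.

```python
# Minimal, self-contained PageRank sketch (textbook power iteration).
import numpy as np

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each page to the list of pages it links to.
    Assumes every linked-to page also appears as a key."""
    pages = sorted(links)
    index = {page: i for i, page in enumerate(pages)}
    n = len(pages)
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        new_rank = np.full(n, (1.0 - damping) / n)   # the "random jump" share
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[index[page]] / len(outlinks)
                for target in outlinks:
                    new_rank[index[target]] += share
            else:
                # A page with no outgoing links spreads its rank evenly.
                new_rank += damping * rank[index[page]] / n
        rank = new_rank
    return dict(zip(pages, rank.round(3)))

# Tiny three-page web: A links to B and C, B links to C, C links back to A.
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```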
Google used its cash to invest massively in research. They hired every leading AI researcher they could find.4 Some built the transformer architecture that now powers every modern language model (it’s the T in ChatGPT). Others created TensorFlow, one of the leading deep learning frameworks for building and training large models at scale. Still others invented specialized AI chips, dubbed Tensor Processing Units, or TPUs, designed to be linked together in large clusters, allowing cloud customers to scale AI workloads cost-effectively. Together, these researchers developed the distributed systems—MapReduce, Kubernetes, Google File System—that make training massive models possible.
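For the curious, the heart of that transformer architecture fits in a few lines. The sketch below is the scaled dot-product attention operation described in the “Attention Is All You Need” paper, stripped of the learned projections, multiple heads, and masking that real models add; it is illustrative, not production code.

```python
# Stripped-down scaled dot-product attention, the core transformer operation.
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Each output row is a blend of the value rows, weighted by query-key similarity."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)            # how much each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))   # 4 tokens, each an 8-dimensional embedding
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)                # (4, 8): one context-aware vector per token
```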
Several of the engineers who did this work eventually left to join or co-found OpenAI, which went on to commercialize large language models.5 When OpenAI launched ChatGPT in late 2022, the shock wasn’t that Google had been out-innovated. It was that Google had been out-commercialized.
ChatGPT was the fastest-growing app in history. It attracted one million users in five days and one hundred million in two months – a milestone that had taken TikTok nine months and Instagram two years to reach. Google immediately realized that ChatGPT didn’t just answer questions. It replaced the need to search.
For Google, ChatGPT was an existential threat to Search. Search works because it surfaces links that advertisers can monetize. But language models collapse the funnel. They summarize. They synthesize. They answer directly. And in doing so, they reduce clicks—the oxygen of Google’s revenue engine.
AI isn’t worse than search. It’s often better. For Google, that’s a problem.
Code Red
Google’s response was swift and uncharacteristically public. Executives declared a “Code Red” and refocused and reorganized teams. Caution gave way to urgency. In haste, they launched an LLM called Bard that hallucinated a response to a question about the James Webb Space Telescope. Investors freaked out, and Alphabet lost $100 billion in market cap overnight.
But beneath the panic, a deeper reset was underway. Google consolidated its AI research under DeepMind, the British AI company it had purchased in 2014. It rebranded Bard as Gemini. It launched subscription tiers. It embedded generative AI across Search, Docs, Gmail, Chrome, Android, and Workspace. It began treating AI not as a research feature, but as a part of Google’s core product stack.
Investors noticed. Today, Alphabet’s stock has more than doubled from its post-ChatGPT lows.
Is Google the Best- or Worst-Positioned AI Company on Earth?
Google emerged from the Bard fiasco with renewed confidence after realizing that AI businesses depend on four assets – and only Google owns all four.
Frontier models. A handful of companies, including Google, OpenAI, Anthropic, Meta, and xAI, train their own top-tier, multimodal models and sell inference to companies and consumers.6 Frontier models are expensive to build and operate. It is unclear if they will be profitable on a standalone basis. But Google search finances Gemini, its own frontier model.
Chips. NVIDIA, AMD, and Google build the specialized AI accelerator chips needed to train and operate LLMs. Specialized AI chips are massively profitable, which is why NVIDIA is more valuable than Google.7 But Google builds its own chips and is estimated to have deployed 2–3 million TPUs. This approaches NVIDIA’s footprint, but Google dedicates its chips to its own workloads and Google Cloud customers. Not paying NVIDIA for chips gives Google a massive cost advantage.
Cloud Services. Amazon and Microsoft dominate this space, but Google Cloud Platform is now a roughly $50 billion business and growing faster than the other hyperscalers, mainly due to AI workloads.
Applications. Most AI startups focus on building applications, but Google operates Search, YouTube, Android, Maps, Gmail, Docs, etc., all of which are enormous opportunities to embed AI and test it on real-world user problems.
No other company combines all four layers of the AI stack at scale. No other company can deploy a new AI capability instantly to Search, YouTube, Maps, Docs, and Android—then watch how real humans use it, monetize it, or abandon it.
This is an extraordinary advantage. But it sharpens the dilemma rather than resolving it because every improvement to AI risks further cannibalizing search. Every hesitation risks losing relevance. Google must decide not whether to disrupt itself, but how much pain it is willing to endure in the process.
The Most Delicate Dance in Tech
Today, Google dances on a knife-edge. It uses AI selectively, deploying generative summaries for low-commercial-intent queries while preserving traditional paid search for high-value ones. A question about Venezuelan oil production triggers an AI overview. A search for “rowing machine” delivers sponsored links.
This is no accident; it’s behavioral economics powered by two decades of data. Google knows which questions signal curiosity and which signal spending. It knows where AI enhances the experience—and where it threatens the cash register. Roughly $140 billion in annual profit depends on Google getting this balance right.
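No outsider knows exactly how Google makes this call, but the shape of the decision is easy to caricature. The sketch below is a toy: the signal words and threshold are invented for illustration, and the real system surely rests on far richer models and two decades of click data.

```python
# Toy sketch of routing queries by commercial intent. Signal words and the
# threshold are hypothetical; this is not how Google actually decides.

COMMERCIAL_SIGNALS = ("buy", "price", "best", "review", "near me", "cheap")

def commercial_intent(query: str) -> float:
    """Crude proxy score: share of known purchase-related signals present."""
    q = query.lower()
    hits = sum(signal in q for signal in COMMERCIAL_SIGNALS)
    return hits / len(COMMERCIAL_SIGNALS)

def route(query: str, threshold: float = 0.15) -> str:
    """High commercial intent keeps the classic ad-funded results page;
    low intent gets an AI-generated overview."""
    return "sponsored results" if commercial_intent(query) >= threshold else "AI overview"

print(route("venezuela oil production 2024"))   # -> AI overview
print(route("best rowing machine price"))       # -> sponsored results
```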
Paradoxically, that risk has created urgency. Google can invest in massive training runs, custom silicon, and long-horizon bets precisely because its legacy business is so profitable. The danger is not that Google lacks resources. The danger lies in managing a risky transition with precision while regulators, competitors, and public opinion circle overhead.
Google’s future will not be decided by whether it is “good at AI.” It already is. The question is whether it is willing to let AI become good enough to threaten search—and whether it can do so while navigating antitrust scrutiny, public anxiety, and legitimate concerns about AI safety and privacy.
The innovator’s dilemma rarely ends well for incumbents. But Google is no typical incumbent. It owns the past, the present, and—if it chooses—the future of information itself.
The coming decade will reveal whether Google can do the hardest thing in business: kill its greatest invention before someone else does.
If it succeeds, it may remain the most powerful technology company ever built. If it fails, history will record that the company that taught the world how to find answers could not survive a world where answers found us first.
Musical Coda
1. For many users in the late 90s, Yahoo was the internet. Along with AOL, it was the first major portal. At its peak around January 2000, its market value exceeded the combined value of the “Big Three” automakers: Ford, GM, and Chrysler.
By 2002, Yahoo realized that passing on the chance to buy Google had been a terrible mistake. It tried to buy the company for $3 billion; Google asked for $5 billion, and no deal was reached. It’s tempting to conclude that Yahoo fumbled the same opportunity twice, but it is unlikely that the directory-brained Yahoo leadership would have commercialized a strong search engine even if it had managed to purchase the best technology.
2. Google Toolbar was a clever and underappreciated early Google initiative. Included with every Google product, Toolbar was effectively a Trojan horse that lived in your browser or another program to make searching on Google easier and more common. In 2004, Google hired a young product manager to take over the applications team that included Google Toolbar. That PM, Sundar Pichai, rose to become Google’s current CEO.
3. The Department of Justice provides a good overview of this plumbing by breaking Google’s “ad tech stack” into publisher tools that help websites earn money by selling ads (Google Ad Manager, formerly DoubleClick for Publishers), an ad exchange where ads are auctioned in real time (AdX), and advertiser-side tools that bring buyer demand into those auctions, including Google’s major buying channels (Google Ads and DV360). The interaction between these three markets is complex. For an excellent summary of the issues involved and of how various regulators view the ad tech market, see Brian Albrecht’s Primer on the Google AdTech Antitrust Case.
4. Google spent about $50 billion on research in 2024. For comparison, Apple spent about $30 billion, and the university with the largest research budget, Johns Hopkins, spent just over $3 billion.
5. In the mid-2010s, nearly every major figure in modern AI passed through Google or DeepMind, including Ilya Sutskever, Geoff Hinton, Demis Hassabis, Dario Amodei, Andrej Karpathy, Andrew Ng, Sebastian Thrun, and Noam Shazeer. The researchers and engineers who left Google to join or co-found OpenAI included Sutskever, one of the three co-authors of Google’s influential sequence-to-sequence learning paper, Lukasz Kaiser, a co-author of the “Attention Is All You Need” Transformer paper, Jacob Devlin, who later returned to Google, Wojciech Zaremba, and Darin Fisher, a former Google VP of Engineering. Of course, many other Google AI experts left for AI startups such as Anthropic and Character.AI.
6. Analysts typically define “frontier AI labs” as commercial organizations that train state‑of‑the‑art foundation models and control the full stack, from data and training runs through deployment, rather than fine‑tuning or hosting third‑party models. US companies include OpenAI, Google (Gemini), Anthropic (Claude), Meta (Llama), and xAI (Grok). NVIDIA, Cohere, and AI21 Labs also build LLMs. Europe has Black Forest Labs. China has Alibaba (Tongyi, Qwen), Baidu (ERNIE), and Tencent.
7. NVIDIA famously became history’s most valuable company by teaching the graphics processing units that powered its video game cards to perform the massive matrix-multiplication operations required by large-scale neural networks and LLMs. AMD also produces chips for AI workloads, and Google has designed and deployed in-house Tensor Processing Units (TPUs) at a massive scale, primarily in its data centers.

