Technologies like social media start out as ideas. They evolve quickly as entrepreneurs use them to fuel companies that interact more deeply with the prevailing culture and politics. The result is like a combustion reaction—a series of small explosions caused by the unexpected mixing of fuel and oxygen.
The past two decades of social media illustrate how reactions from the encounter of technology, culture, and politics can transform all three — and not always for the better.
The Promise of Web 2.0
In 2004, technologist Tim O’Reilly announced the coming of Web 2.0. O’Reilly realized that new technologies were transforming the web from static, “read-only” websites to dynamic, interactive, user-centric ones. The change was made possible by new technologies like AJAX and PHP, higher bandwidth, and lower production costs from open-source software, APIs, and hosted infrastructure. Suddenly, companies could quickly assemble dynamic, interactive, participatory platforms. Web 2.0 promised that users could reshape the web themselves, generating content through blogs, photos, wikis, and social media. Wikipedia is a monument to Web 2.0.
By 2010, however, the internet had given rise not to a decentralized, participatory web but to large, centralized, algorithmically optimized social media platforms like Facebook, YouTube, and Twitter. These businesses thrived on network effects: each new user made the platform more valuable to every other user, so value grew far faster than the user count. Wikipedia was a sideshow.1
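One textbook way to formalize network effects (my gloss, not anything the platforms publish) is Metcalfe’s law, which counts the possible connections among n users:

$$\text{connections}(n) = \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}$$

Double the user base and the potential connections roughly quadruple, which is why growth trumped every other priority.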
Social media was an economic bonanza but a cultural disaster. To capture the powerful network effects that fuel their growth, social media companies devote themselves to increasing user engagement. Engaged users attract more users and view more advertisements, so increasing engagement drives every decision and product feature. Likes, retweets, shares, hashtags, mentions, and groups only exist because they increase engagement.
Another term for “boosting engagement” is “addicting you.” Addiction lets platforms grow revenue faster than users: Facebook grew revenue 15.6% last year while its user base grew only 3.4%, because engagement per user rose. The ratio of daily to monthly active users is another key engagement/addiction metric. Twenty percent is considered good; 50% is world-class. Facebook often exceeds 65%. They are very good at keeping you hooked.
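For readers who want the metric spelled out, here is a minimal sketch (the function and the user counts are hypothetical, for illustration only, not Facebook’s reported figures):

```python
def stickiness(daily_active_users: int, monthly_active_users: int) -> float:
    """Share of monthly users who show up on a typical day (DAU/MAU)."""
    return daily_active_users / monthly_active_users

# Hypothetical user counts for illustration only.
print(f"{stickiness(2_000_000, 10_000_000):.0%}")  # 20% -- considered good
print(f"{stickiness(5_000_000, 10_000_000):.0%}")  # 50% -- world-class
print(f"{stickiness(6_500_000, 10_000_000):.0%}")  # 65% -- Facebook territory
```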
Social media was a great way to make money but a terrible way to build community. Technology and engagement-fueled economics could not change a surprising cultural fact: when forced into one big room, we humans don’t like each other all that much. Discussions became fights. Misinformation, hostile bots, deepfakes, doxing, trolling, and shit-posting became the new normal. Twitter and other sites became cesspools dominated by a small number of accounts.
To address these problems, Facebook/Instagram and other platforms turned to centralized content moderation. Not surprisingly, reviewing billions of daily interactions across hundreds of languages and cultures is an extraordinary challenge. Even though most content moderation is automated, Meta still devotes 40,000 people to the task. And of course, one person’s content moderation is another’s censorship. Facebook initially suppressed, then decided to allow, stories suggesting that the Covid virus may have escaped from a lab. Even Meta board member Marc Andreessen realized this was not a great look. Elon Musk’s erratic decisions following his acquisition of Twitter further polarized users and advertisers.
Today, social media increasingly creates spaces for people to find communities that align with their values and interests. Private messaging apps like WhatsApp, Signal, and Discord have grown popular as people seek smaller, ideologically aligned groups for discussion. Facebook began emphasizing smaller forums, a shift founder Mark Zuckerberg likened to a move “from the town square to the living room.” Paradoxically, this is in many respects a return to the early internet, where decentralized forums and chat rooms allowed users to curate their own communities.
Now social media companies are stirring in generative AI. Only a fool would try to predict what will emerge from this combustible mix — but that’s why you have me. From here, it looks like AI will accelerate the transformation of social media into personalized media (this has already happened with TikTok, which does not run on your social graph and couldn’t care less who your friends are). As evidence mounts that social media amplifies social pathologies, and as AI algorithms transform platforms into publishers and users into consumers, our creaky laws may hold them more accountable for the damage they cause. Enter politics.
The Politics of Social Media: Section 230
The late nineties was a great time to start an e-commerce company. Booming internet usage fueled growth. Venture capitalists begged founders to take their money. And if you were a platform, as my company was, the law had your back. Alibris enabled independent booksellers to upload tens of millions of used and rare books to sell online. We had no idea what was in these books, and thanks to Section 230 of the Communications Decency Act of 1996, we didn’t care.
Section 230 said you could not sue us for the content of our sellers' books. We would remove bombmaking or assassination manuals when we noticed them, but we didn’t have to. We offered tens of millions of books for sale, many of which undoubtedly nourished our more deranged customers. Needless to say, I appreciated Section 230.
Changes in technology and consumer tastes are transforming social media in ways that the Al Gore-era legislators who drafted Section 230, and the judges and scholars who debated the law, never foresaw. As often happens, rapidly evolving technology and markets are outrunning legislation, regulation, and judicial decision-making.
Several individuals and organizations have argued that by algorithmically boosting content, social media companies become publishers and should be liable for that content. The case is strongest concerning minors, where there is a decent legal argument that social media companies owe a duty of care and are liable for algorithms that promote compulsive use and harmful content. On at least one occasion, the US government has agreed. In Gonzalez v. Google, the Department of Justice argued before the Supreme Court that algorithms are a form of content creation and that content creators are outside the scope of Section 230 immunity.
The bewildered reaction of the Supreme Court was worthy of an SNL cold open. Justice Thomas acknowledged that he was “confused.” Justice Jackson said she was “thoroughly confused.” Justice Alito was “completely confused.” Justice Kagan memorably confessed, “We don’t know about these things. You know, these are not like the nine greatest experts on the Internet.” (To nobody’s surprise, the Court punted on Section 230 and resolved Gonzalez on unrelated grounds.)
As AI transforms social media into AI-powered media, Section 230 will come under renewed scrutiny. Social media companies have long used machine learning for feed recommendations and ad matching, but they are now deploying generative AI in every algorithm. Meta invested $40 billion in AI across all its products this year.
Will social media companies build tailored AI agents to be your friends? Will their AIs originate or customize short-form media content? If so, they are publishing, not hosting, and will not be protected by Section 230. On the other hand, they will have more control over the content. An AI that is thoughtful, funny, respectful, and remembers your birthday would be an improvement over the average brother-in-law, let alone the average social media follower.
Will they use AI to create fully personalized ads? (“Marty, since we are discussing tool watches, I know a collector in Buenos Aires who would trade your Tudor Ranger for his Pelagos FXD. Interested?”) These sorts of ads would not be protected under most interpretations of Section 230. On the other hand, AI is famously opaque. Will we have sufficient visibility into what AI tools are doing to promote or demote content to litigate Section 230 concerns?
Social Media and Culture
In a recent Substack, researchers at NYU argued that AI-powered “social media acts like a funhouse mirror, exaggerating and amplifying certain voices and behaviors while diminishing others…because social media platforms are dominated by a small, vocal minority of users who post extreme opinions or content. Algorithms further amplify these extremes, giving people the impression that such views and behaviors are the norm, even when they are not.”
Evidence is mounting that social media platforms amplify “outrage entrepreneurs.” One research summary found that only 3% of active accounts produce toxic material, yet that material accounts for a third of all viewed content. Echo chambers concentrate the problem further: one percent of communities start three-quarters of online conflicts, and one user in a thousand accounts for 80% of fake news. Metastudies like this suggest that hostile, unusually attention-seeking people gain far more attention on social media than they could offline.
Companies know this. Internal documents from companies like Meta show that platforms are aware of their algorithms’ detrimental effects but continue to prioritize engagement. In 2022 alone, platforms earned nearly $11 billion in advertising revenue from U.S. children under 17.
But hold on. Is this just another moral panic over what media kids consume? We have all seen parents go nuts about the pernicious effects of rap music, television, racy novels, comic books, or violent video games. Most of them confuse risk with harm and overlook common-sense explanations for troubled kids. After all, corrosive family dynamics, drug use, lousy neighborhoods and schools, and social isolation are widespread. I have long been reluctant to blame mobile phones or TikTok for the mental health and social challenges facing teens and young adults.
But evidence is accumulating that I am wrong. Research on the damage caused by social media falls into two categories: observational and experimental. The first is simple: scholars document growing social media use and rising mental health problems and assert a correlation between the two.
The observations can be alarming. Young people spend an average of 109 days’ worth of time each year looking at a screen (roughly seven hours a day). In 1980, we spent 40% of our time consuming information; now it is 80%. We see 208 ads per hour — 10 times more than a generation ago. Young people are more anxious, distracted, and depressed than any generation in history. Is social media to blame?
Some reports draw a tight connection. Meta whistleblower Frances Haugen disclosed internal findings showing Facebook’s awareness of its platforms’ negative impact on teenagers’ body image. Wall Street Journal investigators registered as 13-year-olds and created a dozen automated accounts on TikTok. Within a few weeks of joining the platform, the site’s algorithm had served those accounts tens of thousands of weight-loss and pro-drug videos.
Nonetheless, reports and observational studies always risk overlooking essential variables. That is why researchers conduct painstaking experimental studies that create two comparable groups and reduce one group’s social media consumption. Dozens of such well-controlled trials have now been run and replicated. We need to take their findings seriously.
Several studies that ask a treatment group to stop using social media find that subjects are significantly happier, socialize more, and obsess less about politics. Here is a quick sample for the curious:
See Allcott et al. for Facebook; Lambert et al. (2022) for Twitter, Instagram, and TikTok as well as Facebook; Tromholt (2016) for Facebook in Denmark; Brailovskaia et al. (2022) for social media in Germany; Sagioglou & Greitemeyer (2014) for Facebook a decade ago; and Hunt et al. (2018) for Facebook, Instagram, and Snapchat.
Researchers like Yuen et al. (2019) and Braghieri et al. (2022) broadly associate social media use with decreased mental health.
Kushlev and Leitao (2020) find that social media use isolates parents and children.
A couple of experimental studies do not find a significant effect of social media use – but there are far fewer of these, and they have smaller sample sizes (fewer than 150 participants) than the more highly powered research.
Is Social Media a Form of Pollution?
When an economist confronts neighbors who blast loud music at all hours, she may accuse them of causing a “negative externality.” The rest of us use more colorful terms.
Pollution, congested roads, and full-volume Rammstein at 3 a.m. are negative externalities because the social costs of these activities exceed their private costs. Because producers do not pay the full social cost of their actions, they overproduce. This is why unregulated markets produce more pollution or congestion than is socially optimal. Social media algorithms create negative externalities by amplifying extreme and toxic content that costs us more as a society than we pay as individuals.2
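For the economically inclined, here is the standard textbook formulation (my addition, not original to this argument): let P(q) be demand, MPC the marginal private cost, and MEC the marginal external cost. The market produces where price meets private cost; the social optimum also counts the external cost:

$$P(q_{\text{market}}) = \text{MPC}(q_{\text{market}}), \qquad P(q^{*}) = \text{MPC}(q^{*}) + \text{MEC}(q^{*})$$

Whenever $\text{MEC} > 0$, $q_{\text{market}} > q^{*}$: the unpriced harm gets overproduced.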
This is not a minor side effect. Social media is now a quarter-trillion-dollar industry that doubles every five years. It is as big as global broadcast television, which is stagnant, and twice as big as newspapers, which are shrinking. These companies will not quietly submit to restrictions on user or AI-generated content, pollution or not.
If restricting content is impractical, perhaps we should tax social media the way we do pollution and cigarettes. Unfortunately, taxing social media is challenging. Measuring the externalities is complex, public acceptance of the harm is low, and implementing an effective tax would require too much cooperation from the companies themselves. It’s hard to imagine a tax that would reduce social media use the way that cigarette taxes reduce smoking.
Uganda gave it a try. In July 2018, the Ugandan government decided to tax social media usage. Users were required to pay a daily fee of 200 Ugandan shillings (about a US nickel) to access platforms like Facebook, Twitter, and WhatsApp. The President wanted to curb gossip (presumably about him) and raise revenue.
It did not go well. Within three months, internet use in Uganda dropped from 47% of the population to 35%. Many of those who stayed online used VPNs to dodge the fee, so the tax generated very little revenue. After three years, the government abandoned the plan.
Many social media taxes could be counterproductive. A penny-per-post tax would suppress casual posting while leaving high-yield toxic content relatively untouched, increasing the reach of the 3% of users who produce it. It would also give every user an incentive to make each post more controversial to maximize the reach per post.
Restrict Bots, Anonymous Accounts, and Phones at School
There are three reforms that are more straightforward: tax bots, ban anonymous accounts, and restrict phones at school.
Social media platforms are infested with bots, though the scale is not easy to measure. What share of Twitter/X is bots? You can find estimates of 15%, 25%, and 68%. One analysis of a million tweets found several automated accounts that post every two minutes around the clock. In some studies, bots account for as much as 73% of all internet traffic. Many sites have guidelines that vaguely restrict bots, often as part of policies against spam, fake accounts, and artificial amplification. Meta (Facebook and Instagram) enforces more stringent measures; bots have an easier time on Reddit and X.
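To see why metronomic posting is such a tell, here is a toy heuristic (invented for illustration; it is not any platform’s actual detection logic): flag accounts whose gaps between posts barely vary.

```python
from datetime import datetime
from statistics import pstdev

def looks_automated(post_times: list[datetime],
                    min_posts: int = 100,
                    max_jitter_seconds: float = 30.0) -> bool:
    """Flag accounts whose posts arrive at suspiciously regular intervals.

    A human posting every two minutes around the clock is implausible;
    near-zero variance in inter-post gaps is a classic bot signature.
    Toy heuristic only -- real detectors combine many more signals.
    """
    if len(post_times) < min_posts:
        return False
    times = sorted(post_times)
    gaps = [(later - earlier).total_seconds()
            for earlier, later in zip(times, times[1:])]
    return pstdev(gaps) < max_jitter_seconds
```

A real audit would add network structure, content similarity, and device fingerprints, but even this crude filter catches the every-two-minutes-around-the-clock accounts.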
Controlling bots is not always in the interest of social media companies. Bots can artificially inflate user engagement metrics, making the platforms appear more popular to advertisers. Bots that serve legitimate customer service and marketing purposes also complicate enforcement. And dragnets designed to catch bots invariably snag humans.
A government-certified audit to measure, disclose, and tax bots would be a good start. Advertisers would support it because it ensures that their money isn’t being wasted. And to the extent that toxic material is being artificially generated, it would clean up the neighborhood.
There is also a good case for banning anonymous accounts and requiring platforms to confirm user identities. Anonymous accounts are far more likely to engage in destructive, anti-social behavior. LinkedIn verifies accounts, as do TikTok, YouTube, and Snapchat. Instagram and Facebook permit partial anonymity through secondary accounts or alternate profiles. Analysts estimate that about a quarter of Twitter/X accounts are wholly or partially anonymous.
There are free speech arguments for permitting anonymous accounts, especially in repressive countries. Some of these concerns can be addressed by allowing verified accounts to use pseudonyms. People who have an absolute need for anonymity because they are subject to political repression will need to use highly secure platforms like Signal or Threema. On balance, I would have the audit that inspects sites for bots also audit, disclose, and fine platforms for hosting anonymous accounts.
Many high schools now limit social media use by restricting or banning mobile phones. Pew finds that 82% of K-12 teachers in the U.S. say their school or district restricts phones. Middle school teachers (94%) are especially likely to say this, followed by elementary (84%) and high school (71%) teachers. But despite strong public support for these bans, 30% of teachers whose schools or districts restrict phone use say the rules are very or somewhat difficult to enforce, and many parents who like to text their kids at school oppose bans.
Has restricting phones helped? Banning phones has been linked to better academic outcomes, particularly for low-achieving students and those from low socioeconomic backgrounds. Teachers report that phone bans reduce distractions and help students concentrate on lessons and group work. The biggest teachers’ union supports phone bans. Has it improved mental health? Some studies suggest yes, but the overall research picture is mixed.
Facebook’s parent, Meta, understands that teen mental health represents a significant business threat. Half of all people on earth now use a Meta product. Meta knows that no society grants a privileged megaphone to its most toxic and vile members, or will long tolerate a technology that does. The loneliness and teen mental health crises have once again placed the company in the regulatory crosshairs.
But social media techies see the friendship crisis as an opportunity — at least until culture or politics slow them down. Expect Facebook to respond to the sharp decline in time spent with friends by providing you with AI buddies who quietly slip product suggestions into your daily conversations. Your kids or theirs may have more AI friends than human ones. Moreover, they may enjoy the experience.
1. The other place the Web 2.0 vision survived intact was the rapid growth of service-oriented architectures. O’Reilly personally convinced Jeff Bezos to require all Amazon teams to expose their data and functionality through service interfaces, laying the groundwork for what would become Amazon Web Services.
2. Positive externalities work the other way: when activities like education and vaccination create social benefits that exceed their private benefits, we produce less of them than is socially optimal. In a perfect world, governments subsidize positive externalities and tax, regulate, or penalize negative ones.