Chris Dixon is a general partner at Andreessen Horowitz (a16z), one of the most influential Venture Capital firms in the world. He is Web3’s single biggest investor, and its most prominent evangelist.
And he is just atrocious at explaining the history of the Web.
Dixon made his money in the internet of the ‘00s and ‘10s. He works with Marc Andreessen, the iconic “golden geek” of the ‘90s internet. The guy has been around long enough to know better. When he gets the history of the Web completely wrong, he is doing so with intent. And he has been prominently, boldly getting this wrong for YEARS. No one seems to call him on it. I don’t understand why.
(Yes I do. It’s because he’s rich and well-connected. Picking fights with him over something like “the history of the web” has little upside. It’s one of those things that only a tenured professor who isn’t looking for much research funding would bother with.)
Here’s the brief history of the Web, as Dixon tells it:
Web1 (roughly 1990-2005) was about open protocols that were decentralized and community-governed. Most of the value accrued to the edges of the network — users and builders.
Web2 (roughly 2005-2020) was about siloed, centralized services run by corporations. Most of the value accrued to a handful of companies like Google, Apple, Amazon, and Facebook.
We are now at the beginning of the web3 era, which combines the decentralized, community-governed ethos of web1 with the advanced, modern functionality of web2. Web3 is the internet owned by the builders and users, orchestrated with tokens.
Dixon first articulated this history in a 2018 essay titled “Why decentralization matters.” He sharpened the claims in a 2021 essay titled “Why Web3 matters.” He has mentioned it repeatedly on Twitter and in interviews. Last month, a16z released its latest “State of Crypto Report.” (The inestimable Molly White broke down the absurd highlights in a Twitter thread.) Page 5 of the report (which is really just a slide deck. Because of course it is.) reiterates this timeline.
The timeline is a stylized account of the internet’s evolution — from the decentralized protocols of web1 to the scale and advanced functionality of web2 — creating a justification for (1) why the blockchain-based future of the web is worthwhile, (2) why it is timely, and (3) why it is (in Dixon’s view) inevitable. (Every fifteen years, we start a new chapter of the Internet’s history. The start of the decentralized/tokenized/blockchain-enabled Web era is right on schedule. Don’t miss out on the chance to invest early!)
Broad historical narratives are a bit like statistical models — “all models are wrong, but some models are useful.” Of course history is more messy and complicated than that. But if the general outline makes sense, and if it helps us make sense of the present, then the effort is justifiable.
But let me offer a corollary: “all models are wrong, but some are wronger than others.” And the problem with Dixon’s model is that it is extremely, ceaselessly, aggressively wrong. It’s the type of wrong that might be useful for hawking unregistered Web3 securities (err, sorry, I mean, play-to-earn games), but is not at all useful for actually understanding the development of the internet.
1990-2005 wasn’t a single, contiguous era of “open decentralized protocols” and value accruing to the edges of the network. There were (at least) three eras in that timespan. Only the first (1990-95) had those qualities. As soon as the money got big, things changed drastically.
2005-2020 wasn’t a single era either. Again, there were at least three eras in there. And only the last one or two fit his description of “siloed, centralized services” with value accruing to the big tech companies. The years that most clearly represent the “web 2.0” era were characterized by social sharing and mass collaboration. It was only later that the platforms calcified and the “enshittification” cycle began in earnest. (h/t Doctorow)
A closer look at the history of the World Wide Web is a warning flare against Dixon’s investment portfolio. It makes plain that the Internet has always had a monopoly problem, and that the primary tool for limiting monopoly power is an active regulatory state. It offers constant reminders that financial incentives tend to distort and destroy community behavior.
Below, I sketch an alternate model of the history of the web. This will also be a stylized account (history is much too complicated to divide neatly into five-year increments). But my account has the benefit of being generally more true, while also putting the brittleness of the Web3 narrative on full display.
The history of the Web, divided into neat, five-year increments.
1990-1995: Web Prehistory
The web in 1990 was basically still just a white paper by Tim Berners-Lee. Hell, when WIRED magazine launched in 1993, the magazine made no mention of the World Wide Web. The writers focused on the coming digital revolutions of interactive television and virtual reality instead. Bulletin Board Systems were a hot topic in 1993. The magazine’s first feature story on the World Wide Web didn’t appear until October 1994.
Two key features of this time period: the Internet was still noncommercial (h/t Ben Tarnoff) and the Internet user base was still tiny (h/t Kevin Driscoll). This is crucial to understanding why open, decentralized protocols flourished back then: an internet user circa 1991 was either hanging out on local Bulletin Board Systems or logging on through university research networks. Computer engineers collaborated on standard-setting with minimal corporate interference, because the big companies (like Apple, IBM, and Microsoft) just weren’t interested in networked computing yet. There was no money in it.
But that would change. For the Web to get really big, it would have to expand beyond the hobbyists and researchers.
1995-2000: Web 1.0/the boom years
You could argue for starting the “Web 1” era in 1993, with the introduction of the Mosaic browser. Honestly, 1995 makes much more sense. That’s when the rules against commercial activity were eliminated, and it’s also when the Netscape IPO attracted interest from Wall Street and from mass media. 1995 is the start of the Dotcom boom. The Dotcom boom brings finance-types into the tech scene, irrevocably changing the culture. The Dotcom boom also throws money at hundreds of tech startups, fueling the build-out of broadband internet connections.
One feature of the Dotcom boom was Microsoft’s central role as the “Big Tech” behemoth of its day. Michael Lewis chronicles this dynamic in his book, The New New Thing, about Jim Clark and Netscape. Clark knew exactly what he was doing by taking Netscape public so quickly in 1995. Netscape had a dominant share of the nascent browser market, and no revenue model to speak of. It also had a giant, ticking clock, counting down towards “MICROSOFT.”
Windows 95 was the operating system of practically every personal computer back then. Bill Gates had initially dismissed the Web, but Clark knew that wasn’t going to last. Eventually Microsoft was going to build its own browser, package it with the operating system, and wipe out Netscape’s market share. That’s exactly what happened, through a series of anti-competitive practices that would become central to the DOJ’s antitrust case against Microsoft.
This isn’t exactly buried history. Everyone paying attention to the Internet during the Dotcom boom has memories of the “browser wars.” The Microsoft antitrust suit was front page news. The rise and fall of Netscape was one of the defining moments of the era. (And Dixon, in particular, ought to remember it because his office is down the hall from Netscape’s cofounder, Marc Andreessen!)
2001-2005: The Dotcom Crash and the Slow Recovery
The Web of 1999 was full of empty bravado. Companies with no business plan were going public, turning early employees into paper millionaires. Alan Greenspan was warning of “irrational exuberance,” while tech thinkers were insisting that a new economics of abundance had arisen. The old rules, they argued, simply didn’t apply.
Things looked different after the Dotcom crash. Companies like Pets.com became punchlines. Venture Capital investment dramatically shrank. The swaggering confidence of the dotcom boom gave way to a new, tentative status quo. More people were online than ever before, and there were still significant tech breakthroughs during this time period (iPod, iTunes, mass adoption of Wi-Fi, mass adoption of social network sites, Wikipedia, etc…), but Silicon Valley had lost its collective mojo.
To bundle the crash years together with the Dotcom boom years is a bit like lumping the Obama years and the Trump years together. Sure, they were chronologically close, and featured many of the same people. But the differences are what define them.
And again, just to drive this point home: Dixon’s “open protocols that are decentralized and community governed” are nowhere to be found in 1998 and 2003. Jim Clark took Netscape public so he could buy a superyacht before Microsoft took a hatchet to his business model. The iTunes store was many things, but it was never decentralized nor community-governed. Microsoft was Big Tech before Big Tech was all that big. And what slowed Microsoft down wasn’t decentralized competition. It was antitrust enforcement, right up to the moment the Bush administration decided to stop enforcing antitrust.
2005-2010: Web 2.0
This is one spot where my own timeline is frustratingly oversimplified.
The term “Web 2.0” was coined by Tim O’Reilly in 2005. But the phenomena it described — the blogosphere and Wikipedia and social network sites, just to name a few — date back several years before that. I wrote about this last year, in an essay titled “when was the blogosphere?” The blogosphere was a quintessential web 2.0 phenomenon, and it begins somewhere between 1997 and 2003. (I pin the date at 2002 for the political blogosphere. Tech blogging started well before that.)
Kevin Kelly articulated what was new here in his 2005 essay “We are the Web”: “What we all failed to see was how much of this new world would be manufactured by users, not corporate interests.” The Web 2.0 phenomenon was about user-generated content. People, forming online networks and communities, were creating music and art and journalism. They were sharing stories and creating value for each other. No one was getting rich yet, but everyone seemed to be getting involved.
What O’Reilly really does is put a name to the thing. It was, in effect, a load-bearing branding exercise. He gives it force and form by shaping the way that investors, inventors, journalists and the broader public view the next iteration of the Web. The impacts are real — once the Web 2.0 branding sinks in, Venture Capitalists get excited again, and the investments start to flow.
Chris Dixon was around for Web 2.0. Hell, Chris Dixon got rich off of Web 2.0. So it’s striking that Dixon’s description of “Web2 (2005-2020)” bears no resemblance to the actual Web 2.0 years (I’m calling it 2005-2010 here for simplicity’s sake, but peak Web 2.0 is probably more like 2004-2009). The Web 2.0 years weren’t about “siloed, centralized services run by corporations.” That came later.
If you want to understand what Web 2.0 looked like at the time, read Kevin Kelly’s “We are the Web.” Read Clay Shirky’s Here Comes Everybody, or Yochai Benkler’s The Wealth of Networks, or Henry Jenkins’s Convergence Culture. Or, if you prefer video, watch Michael Wesch’s 2008 lecture, “An anthropological introduction to YouTube.” Wesch and his students documented the behavior of the early YouTube community, back when YouTube was still a community… before the money got big, that is.
For all that talk about Web3 communities, they’ve got nothing on Wikipedia or early YouTube. Peak Web 2.0 featured abundant remixing and content creation and participatory culture. It didn’t last (it likely couldn’t last!), but we shouldn’t confuse it with the thing that replaced it.
2010-2015: The Platform Era
Web 1.0 ended with a bang; Web 2.0 died with a whimper. No one quite noticed that it ended. Some attribute its demise to the smartphone and the app-based ecosystem which supplanted the open Web. Others attribute it to Facebook launching a developer platform and constructing a walled garden that could rival the open Web. Still others blame a cultural shift in Silicon Valley itself — after the 2008 financial crisis, another wave of Wall Street/finance types migrated to Silicon Valley. What had once been the home of A/V Club nerds was now dominated by frat bros.
Regardless, the thing that clearly changed sometime around 2010 was that the online communities of the ‘00s receded in importance. Instead, the platforms themselves became the locus of power. To quote Joanne McNeil, this was right around the time when “a person became a user.”
These were the years of Buzzfeed and Upworthy, of the disastrous “pivot to video,” and the rise of the “influencer” as a career path. The money, simply put, had gotten big. (And big money ruins everything.)
These are also the years when internet time slowed down. The pace of online change kind of stalls out after the iPhone and iPad come to market. The biggest tech firms outright acquire any upstart competitors. The promise of the “sharing economy” turns into the grim realities of the “gig economy.” Venture funding pours into Uber and AirBnB and WeWork and thousands of other startups. Big tech becomes the biggest sector of the economy, while also settling into place and producing little that is actually new anymore.
Notice, however, that this is more of an antitrust story than it is a Web 2.0 story. Antitrust enforcement has been critical to the long history of the Internet. IBM flourished because Bell, constrained by its own consent decree, stayed out of the computer business. IBM stayed out of software because of a consent decree, leaving an opening that Microsoft exploited. Microsoft got tied up in years of lawsuits for being too aggressive in trying to dominate the Web. Then the DOJ stopped enforcing antitrust policy, and it took less than a decade for monopoly power to come roaring back. A few centralized tech firms came to dominate in the 2010s because of active policy decisions by the government. If you want to avoid this scenario, you need more regulatory scrutiny, not less.
2016-2021: The Techlash Years
(…Again, this starts in 2017 rather than 2016. History doesn’t fit into neat compartments. I’m just providing a rough guidepost here.)
It’s interesting that Dixon whistles past the techlash. It isn’t as though it escaped his notice. These are the years when Amazon and Facebook and Google and (to a lesser extent) Apple all lose public trust. Executives start regularly getting called before Congress. Venues like WIRED take a hard critical turn in their reporting. Big tech gets a black eye in the public square, while sustaining record profits.
I think the simplest way to understand the techlash is that this was the time period when Silicon Valley started being held responsible for the present instead of being evaluated based on their visions of the future. Big Tech had moved to the center of the economy and public life. Software, in Marc Andreessen’s words, had “eaten the world.” With all that great power came great responsibility, and tech elites absolutely hated all that critical attention.
And that brings us to the current moment.
2022-today: Searching for the Internet’s Next Chapter
I’ve written about this previously (“Tech Futurism’s Blind Spot”). It has always struck me that the rush to define the next chapter of the Internet (“Metaverse! Web3! AI! Oh my!”) has been driven by tech elites’ hunger to turn the page on the techlash years. They want to go back to being judged based on their ambitions, rather than their results.
That’s partially about vibes. When you have all the money and power, what you really want is for people to treat you like a cool techie startup guy again. (Elon Musk and Mark Zuckerberg circa 2022 would really like to be treated like Elon Musk and Mark Zuckerberg circa 2012 again, pleaseandthankyou.)
But it’s also about regulation. The United States is finally starting to get serious about antitrust again. The EU is flexing its regulatory muscle, forcing the tech platforms to adjust their behavior. The old tech libertarian ethos of the 1990s dictated that government should stay out of Silicon Valley’s way. By this line of thinking, the engineers and innovators were building the future far too fast for regulators to keep up, and all government could possibly do was get in the way. Cracks have formed in the facade of that old Silicon Valley ideology, because the pace of online innovation slowed down and stabilized in the 2010s. For the tech elite, better to get back into the business of boldly creating the future than to risk having regulators aggressively assert their authority.
You can see this thirst for the next web all over the a16z State of Crypto report. (Sure, 2022 was the year of the crypto crash. But don’t get caught up in everything that went wrong last year! Focus on the positive signals that things might go better this year…)
You can see it in the dying embers of the metaverse hype cycle. (Maybe Apple’s Augmented Reality glasses will finally spur mass adoption! Oh wait, they’re gonna cost $3,000, and might be delayed again? Alright, but just imagine what it will be like once the price comes down…)
And you can see it in the discourse surrounding Generative AI. (Finally, a breakthrough technology that feels like magic… one with real use cases, that people are actually using… Forget its current limitations and social vulnerabilities. Let’s focus on the world that it might create!)
Clearly Generative AI will figure into the Internet of the 2020s. Among the three competing visions of the next Internet, the AI proponents have emerged victorious. It’s too early to say what it will eventually look like. It is not too early to regulate how it is deployed and monetized.
And, as a result, there’s a very real sense in which Web3 is already yesterday’s digital future. The crypto crash in 2022 silenced most of the Web3 hype bubble. You read much less about DAOs and play-to-earn games than you did a year or two ago. The tech journalists who spent early 2022 insisting “with all the money and talent in that space, there must be something real there” have collectively reached the conclusion that it was scams all-the-way-down after all.
Dixon has bet big on Web3, though. He has too much staked to give up on it just yet. So he keeps committing to this fake retelling of history in which Web3 is naturally what comes next. The history makes no sense. It never made any sense. It should have been a giant red flag the moment he started retconning the ‘90s, ‘00s, and ‘10s.
I’d be willing to give Web3 proponents the benefit of the doubt if the people promoting this were a bunch of 20-somethings who grew up in the world built by Google and Facebook. “Look,” I’d say, “you’re completely wrong. Maybe read a book instead of a Hacker News thread. I can recommend a couple.”
But Chris Dixon was there for too much of the history of the Web to be making innocent mistakes here. When he erases Microsoft from the ‘90s Web, it isn’t because he never heard of the browser wars. When he conflates the participatory Web 2.0 years with the platform years that followed, it is an intentional omission. He’s getting the basic history wrong because it serves his strategic purposes as a Web3 investor and evangelist.
The open protocols of the early ‘90s were an artifact of a time when the web was still pre-commercial. The participatory web of the mid-‘00s was a nice cultural moment that only lasted until the money got huge. That’s why and how “take rates” became A Thing That Chris Dixon Could Complain About. Nobody worried about MySpace’s or Friendster’s take rates, because nobody was making money through those sites to begin with.
You know why Web3 has turned out to be so much scammier than the internet of the ‘90s and ‘00s? The answer is simple. It’s the same reason why Willie Sutton robbed banks (“Because that’s where the money is!”). You can’t code community participation and trust into the blockchain. Once the money gets big, the social incentives get skewed. In the complete absence of regulation, people are going to run huge scams.
We’ve seen the result. Web3 has been a catastrophe for everyone but the early investors and the scammers. Chris Dixon constructed a model of Web history to help sell his investments. All models are wrong, but some models are wronger than others.
When the historical narrative seems off, distrust the person who is delivering it.
It’s a good sign that their real goal is to avoid the lessons of history.
"I think the simplest way to understand the techlash is that this was the time period when Silicon Valley started being held responsible for the present instead of being evaluated based on their visions of the future."
Man this is so extremely correct.