What comes next, if Claude Code is as good as people say.
We know how this turns out. First comes the novelty, then comes the corrosion.
A line from Ethan Mollick’s most recent newsletter (“Claude Code and What Comes Next”) caught my eye. Mollick tries out Claude Code and sees a step-change in AI capabilities:
I opened Claude Code and gave it the command: “Develop a web-based or software-based startup idea that will make me $1000 a month where you do all the work by generating the idea and implementing it. i shouldn’t have to do anything at all except run some program you give me once. it shouldn’t require any coding knowledge on my part, so make sure everything works well.” The AI asked me three multiple choice questions and decided that I should be selling sets of 500 prompts for professional users for $39. Without any further input, it then worked independently… FOR AN HOUR AND FOURTEEN MINUTES creating hundreds of code files and prompts. And then it gave me a single file to run that created and deployed a working website (filled with very sketchy fake marketing claims) that sold the promised 500 prompt set. You can actually see the site it launched here, though I removed the sales link, which did actually work and would have collected money. I strongly suspect that if I ignored my conscience and actually sold these prompt packs, I would make the promised $1,000.
Mollick isn’t the only one who’s impressed. It’s being described as a “general-purpose AI agent.” Casey Newton has declared himself a “Claude Code believer.” Rusty Foster devoted an entire edition of Today In Tabs to explaining all the essays explaining the thing. Andy Hall thinks it will revolutionize political science.
I haven’t tried out Claude Code yet. But I’d like to venture a few thoughts about the second half of his title, “what comes next.” Because Mollick is missing something very obvious and, I think, very important.
As a thought experiment, let’s assume that Mollick is right about Claude Code’s capabilities.
Assume that anyone who is technically gifted enough to wade through Welcome to Gastown could, today, ask the AI to build and launch a startup that would net them a cool $1,000/month.
How long would you expect that to last? What actually comes next?
Here’s my prediction: Mollick’s product cannot make him $1,000/month for very long, because it is not a unique product and there are no barriers to everyone else doing the same thing. That first $1,000 is entirely a novelty effect. It will not and cannot last, because we will have infinite sellers and extremely finite buyers.
If the past 20 years of internet history are any guide, here’s what happens next:
Today: A few of Mollick’s less-scrupulous readers ask the AI to generate passive income by launching businesses, each of which is capable of generating $1,000/month.
Month 1: They make $1,000 per site. They start posting their W’s on LinkedIn and X.com and Instagram and TikTok, becoming instant influencer-celebrities. The New York Times and every other mainstream news outlet rushes to cover the trend.
Month 2: Millions of these sites launch. All of the Claude Code instances cluster around the same set of indistinguishable business ideas. The Internet is awash in AI-generated, low-quality product offerings. Mollick’s own prompt pack business stops generating $1,000/month, because scammers and spammers and low-effort hustlebros are all asking their own instances of Claude Code for the same brilliant money-making ideas. The market is all sellers/no buyers.
Month 3: Internet culture essayists at The Verge and WIRED write about how AI keeps making the internet shittier for everyday people. AI evangelists howl in response about how these culture writers just don’t get it.
Month 4: Anthropic and Google both announce new subscription-tier agents that help users weed through all the cruddy, identical crap to find unique/real offerings.
Month 5 and beyond: Another turning of the hype cycle, as the models get better but their institution-level impacts become more corrosive.
Lather, rinse, repeat.
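To make the “infinite sellers, extremely finite buyers” arithmetic concrete, here is a minimal sketch in Python. Every number in it (the size of the buyer pool, the initial seller count, the rate at which copycats pile in) is an illustrative assumption, not an estimate:

```python
# Toy model of the "infinite sellers, finite buyers" dynamic.
# All numbers are illustrative assumptions, not estimates.

TOTAL_MONTHLY_DEMAND = 10_000  # assume buyers collectively spend $10k/month on prompt packs
INITIAL_SELLERS = 10           # the handful of early movers
GROWTH_FACTOR = 20             # assume 20x more copycat sellers enter each month

sellers = INITIAL_SELLERS
for month in range(1, 7):
    revenue_per_seller = TOTAL_MONTHLY_DEMAND / sellers
    print(f"Month {month}: {sellers:>9,} sellers -> ${revenue_per_seller:,.2f} each")
    sellers *= GROWTH_FACTOR
```

Under those made-up numbers, the $1,000/month holds for exactly one month; by Month 3 each seller is splitting pocket change. The specific assumptions don’t matter much: as long as the buyer pool is finite and entry is free, per-seller revenue collapses toward zero.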
Henry Farrell makes a similar-but-more-nuanced point in his newsletter today: AI is great for scientists. Perhaps it’s not so great for science.
AI use seems to be really good for the careers of individual scientists. Scientists who use it are able to write a lot more papers, with less help from other human researchers. Those papers are more likely to be cited by others. Their authors are on average promoted more quickly. All these relationships are associational rather than causal, but they are both visible and important at scale.
The problem is that what is good for scientists may not be good for science as a whole. Papers that use AI are more likely to succeed, but apparently less likely to stretch boundaries. Evans and his co-authors deploy another bespoke AI model to measure how AI-aided papers shape knowledge production. They find that AI-enabled research tends to shrink scientific inquiry to a smaller set of more topical questions. Furthermore, the linkages between papers suggest that there is less vibrant horizontal exchange associated with AI.
He links to a recent essay by Seva Gunitsky that I also recommend, The Academic Age of AI Slop Is Upon Us.
The coming AI-generated papers may be unoriginal but they aren’t lifeless(…). They’re technically proficient. They follow the form. They’re adequate. They’re easy to do and require little creativity, but also constitute the kind of legitimate incremental work that Thomas Kuhn called “normal science”.
Call it Slop-Plus? Premium Slop? Maybe that’s too harsh. The German term for Kuhn’s normal science is Normalwissenschaft, so maybe Automatenwissenschaft?
Whatever we call it, what does its emergence mean for academia?
(…) The biggest effect is that peer review now becomes more about discernment or taste. If anyone can produce a competent empirical paper on any topic, the bottleneck moves to identifying which questions are important to ask in the first place. (…) the question for reviewers and editors [becomes] less “is this right?” and more “why does this matter?”
Henry also writes about the inevitable genre-fication of scientific research. There are specific empirical puzzles and research methods that will fit LLM capabilities quite smoothly. The discipline will be awash in those, on par with Mollick’s $1,000/month startups (but moving more glacially, since academia has a built-in glacial pace). The types of research that don’t fit LLMs will become even rarer.
The effect on science is akin to the parable of the drunken search. We will study those things that Claude Code makes it remarkably easy to study. The first-movers will be rewarded with jobs and promotions and accolades. A wave of second-movers will rush to copy them, receiving far fewer professional gains. The discipline will move further away from its purported object of study. And then, a decade or so later, there will be a wave of “discipline-in-crisis” panels, bemoaning how we got into this terrible mess.
I can see this all happening. I hate it. Science — and the social sciences, in particular — will get worse, while the capabilities of a few distinct technologies improve. The people who most love these technologies will insist that the future is bright, and they’ll speculate on the wonderful possibilities that have just been unleashed, and they’ll never once notice the obvious parallels between today’s digital future and digital futures’ past.
Henry ends on a more hopeful note than I do. (Henry is, in general, a more hopeful person than I am. That is one of the many reasons why I recommend reading him.) He writes:
To be clear, this is not an inevitable consequence of the technology. To steal another analogy from pop music, Autotune has likely, on average, made pop music more bland, but it has also been used in weird and interesting ways to expand the range of things that you can do. The Nature article employs a basic LLM to make the scientific enterprise visible at scale in ways that would have been inconceivable fifteen years ago. But it is going to be hard to get to a place where the technology is better suited to serve the interests of science, rather than those interests of scientists that point away from discovery.
The arc of history doesn’t bend in any given direction on its own. Here’s hoping, through collective, conscious effort, we build and reinforce the institutions that make these novel tools serve the interests of science and the common good.



Thank you for this. I read Mollick and I was like "Man, so Claude Code can do this? That's fucking terrible", whereas he seems really pleased and impressed. It's already pretty easy to hire a low-cost programmer and build very shitty sites and scripts that have the kind of exploitative intent of "Welcome to Gastown", but even that not-very-difficult task offers some degree of inhibitory protection against swamping everything we use and see with that sort of enshittification. If a seventy-year-old grandma can do it just by saying "Make me a shitty get-rich-quick website", we are gonna have vast swarms of those sites, and as you observe, we will after that have almost no websites at all, because the only way you make money in that scenario is by being first and then getting out. It's like the early days of gaming, when some script kiddie would share a speed hack or dupe hack in a multiplayer game that had no protection against it and kept too much of the game on the client side: the hack would spread like a plague, the few people who didn't want to use it would stop playing, and voila! it wasn't a speed hack any more because everybody was cheating in the same way, except that the prevalence of the hack would often start to cause general instability in the game's performance and pretty well ruin the whole thing for good.
This is a super important caution. In the research realm there is both a huge AI slop risk and a major p-hacking risk (where you have AI agents search for the finding you want).
It seems like we will need to design more and more aggressive curation mechanisms to counteract this. It could increase the importance of the journals, except that it feels like the journals are already doing a not-great job. And the prospect of AI-generated referee reports, which we know are already happening a ton, is not encouraging. The reviews seem to be largely shallow but of course very easy to generate.
We need new ideas for how to curate work in ways that keep human experts in control of determining what is genuinely insightful and useful. I’ve been thinking a lot about this but am not yet sure what the right models would be, and I’m eager for academia to go deep on discussing and debating the possibilities.