25 Comments
Timothy Burke

Thank you for this. I read Mollick and I was like "Man, so Claude Code can do this? That's fucking terrible," whereas he seems really pleased and impressed. It's already pretty easy to hire a low-cost programmer and build very shitty sites and scripts with the kind of exploitative intent of "Welcome to Gastown," but even that not-very-difficult task offers some degree of inhibitory protection against swamping everything we use and see with that sort of enshittification. If a seventy-year-old grandma can do it just by saying "Make me a shitty get-rich-quick website," we are gonna have vast swarms of those sites, and as you observe, we will after that have almost no websites at all, because the only way you make money in that scenario is by being first and then getting out. It's like when some script kiddie shared a speed hack or dupe hack in the early days of gaming, in a multiplayer game that had no protection against it and kept too much of the game on the client side. The hack would spread like a plague, the few people who didn't want to use it would stop playing, and voila! it wasn't a speed hack any more because everybody was cheating in the same way, except that the prevalence of the speed hack would often start to cause general instability in the game's performance and pretty well ruin the whole thing for good.
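Burke's aside about games that keep too much of the game on the client side is easy to make concrete. A minimal sketch (illustrative only; the 1-D world and function names are invented for the example): a server that trusts whatever position the client reports is wide open to speed hacks, while one that validates movement per tick neutralizes them.

```python
# Hypothetical 1-D movement server, for illustration only.
MAX_SPEED = 5.0  # maximum legitimate displacement per tick

def naive_update(state, reported_pos):
    # Client-authoritative: the server trusts whatever the client reports,
    # so a modified client can "move" arbitrarily far in one tick.
    state["pos"] = reported_pos
    return state

def validated_update(state, reported_pos):
    # Server-authoritative: displacement is clamped to the physical maximum,
    # so a hacked client gains nothing by reporting an impossible jump.
    delta = reported_pos - state["pos"]
    state["pos"] += max(-MAX_SPEED, min(MAX_SPEED, delta))
    return state

print(naive_update({"pos": 0.0}, 500.0))      # {'pos': 500.0} -- hack accepted
print(validated_update({"pos": 0.0}, 500.0))  # {'pos': 5.0}   -- hack neutralized
```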

Rob Nelson

The repeated claim that LLMs are a game-changer is made without much attention to what game is being played. This analogy illuminates how "the obvious parallels between today’s digital future and digital futures’ past" point to places the enthusiasts are unable or unwilling to imagine.

Cheez Whiz

I call this Whiz's 1st Law of Cybernetics: given a system, someone will try to game it.

Timothy Burke

The pity is that, over and over and over, system designers have to find out the hard way that systems built without hardened defenses against getting gamed end up failing or massively underperforming their potential usefulness. I can't tell if that's capitalist short-termism or just the hubris of tech designers, but it sure is frustrating to watch and worse still to repeatedly experience.

Mabuse7

I think it's a selection effect. People who are highly attentive to unintended consequences tend not to become system designers, since for them it's an invitation to endless revision and paranoia.

Myra Ferree

It's multilevel marketing, just of digital crap.

Andy Hall

This is a super important caution. In the research realm there is both a huge AI-slop risk and a major p-hacking risk (where you have AI agents search for the finding you want).
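A toy simulation makes the p-hacking mechanism concrete (my sketch, not Hall's; pure illustration): run enough comparisons on noise and some will clear p < 0.05 by chance alone, which is exactly what an agent told to search for the finding you want will surface.

```python
# Toy p-hacking demo: 100 comparisons on pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_samples = 100, 50

false_positives = 0
for _ in range(n_tests):
    # Both groups come from the same distribution, so any "effect" is spurious.
    a = rng.normal(size=n_samples)
    b = rng.normal(size=n_samples)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < 0.05:
        false_positives += 1

# Roughly 5 of 100 null comparisons will look "significant" at the 0.05 level.
print(f"{false_positives}/{n_tests} spurious 'findings'")
```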

It seems like we will need to design more and more aggressive curation mechanisms to counteract this. It could increase the importance of the journals, except that the journals already feel like they are doing a not-great job. And the prospect of AI-generated referee reports, which we know is already happening a ton, is not encouraging. The reviews seem to be largely shallow but, of course, very easy to generate.

We need new ideas for how to curate work in a way that keeps human experts in control of determining what is genuinely insightful and useful. I've been thinking a lot about this but am not yet sure what the right models would be, and I am eager for academia to go deep on discussing and debating the possibilities.

Dave Karpf

Fully agree that's what we need, but I honestly can't think of a single option that can work given other existing trends.

Peer review as an institution was already stretched pretty much to the breaking point. It relies on voluntary labor contributions based on a gift culture of shared obligation. As the labor force skewed heavily towards precarious adjuncts, and the number of papers in the pipeline increased, the journals were already fighting a losing battle.

I can't think of a single curation mechanism that can work atop those unstable-and-worsening foundations.

That's not to say it isn't worth trying (yes, we need new ideas. And they might have to be radical new ideas if the status quo can't be maintained). I'm fully open to discussing them, but also start from a well of deep pessimism about how bad the near-term available options are likely to be.

Indy Neogy

Nothing innovative in my suggestion, but I don’t see any way forward without paying reviewers.

bastien r-c

Hello Dave and thank you for this piece.

I feel like the fight for quality in science is harsher now than it has ever been: for a long time it was mostly scientists against bureaucrats who imposed norms and quotas. The fight against publish-or-perish is far from won now that AI-enabled science, or premium slop as you call it, has arrived. Here in France, social-science academia has taken strong positions against the extensive use of AI in research. Our faith in the quality of science and in a free university (in every sense of the word) may be strong, but I fear the forces of bureaucracy are stronger.

A lot of us PhD students here talk about this among ourselves. We don't really know how we are all expected to be excellent while the system rewards AI slop, and we collectively fear that some free riders (or "AI-riders"?) will benefit the most from the extensive and dangerous use of AI in their research. The quality of PhD theses will probably drop, there will be fewer and fewer academic positions, and in 30 years we will indeed ask ourselves how the quality of our output got so low.

The only optimism worth considering here is that we might collectively decide to organize another form of university, but will our national administrations allow it? I doubt it.

Idriss Jellyfish

Well, my current horror at how slop+ is playing out is the polycule of cynicism, gambling, and fintech. https://www.theringer.com/2026/01/14/tech/prediction-markets-betting-explained-meaning-polymarket-kalshi

Prediction-market platforms let you wager on the outcome of almost any future event. Regulated neither as stocks nor as gambling, they escape virtually any oversight. They are blowing up, with billions of dollars flowing through them every month and hundreds of millions of dollars in wagers in a single day, and they are becoming inescapable in the culture.

Some people may make money "creating" a product that others pay for with the help of Claude Code; my fear is how many are going to scam themselves, pumping all their assets into Claude Code-predicted gambles. It's crypto speculation on steroids.

When Trump 1 came into office, I felt our constitutional democratic system of civics was going to face a major stress test, and the hope was that it could be built back stronger. I don't know what hope looks like now (well, apart from Team Mamdani), but I try to reinforce the idea that small acts of service and connection, repeated, are worthwhile.

RM Gregg

Does everyone realize this shit is not free? Is something like a trillion dollars being spent on building the infrastructure so someone can create a scam website in less than 2 hours? What exactly is the business model? Putting advertisements on the page where you input the prompts to create that scam website? Advertisements for what? Some shitcoin?

Cheez Whiz

The scam website is more a proof of concept than a business plan: an automation of direct-mail marketing hustles. Once one scam is played out, you move on to the next one. And MLMs can be implemented on much less hardware than the Big 4 are building out; that's a completely different scam. We are smack in the middle of the Roaring '20s 2.0, and it's a question of when the bubbles all burst, not if.

picklefactory

After the wreckage of YouTube, what new tragedy of the commons will Google force upon us, one but-it's-useful-sometimes bro at a time?

Alex Tolley

Remember Rule 34: pornography is an early adopter. The internet is littered with "Build your [sexy] AI girlfriend" pitches. LLM companies are helping this along, with xAI's Grok leading the way in undressing images of women and even children [CSAM?]. Addictive AI companions are a definite problem too. I would predict that even webcam girls will find their incomes disappearing as AI builds the same content that responds to paying viewers. The unfavorable economics of GenAI will drive the enshittification of this process to try to capture actual profits. Addictive "companions" will be offered at "dime bag" rates, and then the hooked user will be squeezed for higher payments, possibly with new features, just as with online gambling.

Welcome to our 21st-century dystopia.

Seth Finkelstein

We went through this in science with the advent of computers themselves. There was a great moaning and groaning then, too, that the science which lent itself to computation would be favored over good solid theory work. And there was a grain of truth to that. But overall, the benefits were enormous.

I think I still have somewhere an old article about how the pocket calculator was harmful to thought compared to the slide rule. Does anyone nowadays even know how to use a slide rule? (I'm afraid to ask if people even know what a slide rule *is*).

But every advance in technology brings with it a period of early adopters who can do well using it, and also scammers who exploit it.

Iroi One

How does one scam scientific research using a slide rule, calculator, or computation?

Seth Finkelstein

When computers first became prominent, there were complaints that people were trying to puff up their publication numbers by doing what we'd now call "AI slop!": basically throwing a bunch of junk data at a program and extracting very dubious data analysis that didn't mean anything. Come to think of it, that's still a complaint down to the present day; it's just not part of punditry about why computers are destructive to science. The saying "Garbage In, Garbage Out" is a cliche now, but it was new once upon a time.
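That complaint is still easy to reproduce (a toy example of my own, not one Finkelstein cites): fit enough junk predictors to a small noise dataset and the in-sample fit looks impressive while meaning nothing.

```python
# Garbage in, garbage out: 40 junk predictors fit to 50 samples of pure noise.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_predictors = 50, 40
X = rng.normal(size=(n_samples, n_predictors))  # junk inputs
y = rng.normal(size=n_samples)                  # junk outcome, unrelated to X

# Ordinary least squares with no holdout set, so the fit flatters itself.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
r_squared = 1 - np.sum(residuals**2) / np.sum((y - y.mean()) ** 2)
print(f"in-sample R^2 on pure noise: {r_squared:.2f}")  # typically around 0.8
```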

Ricardo Reis

A perfect grey-rhino problem description… Will it, in the end, result in a model where discerning powers (with money and "taste") directly fund hand-picked "artists" to advance new ways under their patronage? Besides being a bleak prospect, that also presupposes some "*discerning* enlightened patronage"… I wonder where the discernment will come from in the long term.

The challenge of aligning local incentives (a product of bounded rationality) works against the enterprise's higher goal (from which all would benefit). To correct the ouroboros effect, the snake must be cut somewhere…

Neural Foundry

The academia parallel hits hard. We're already seeing that pattern play out in real time with ChatGPT-assisted lit reviews and data analysis. The scary part isn't just the glacial movement toward easier research questions but the invisibility of it all. Nobody's gonna admit their research agenda is now dictated by what Claude does well vs. poorly. Would love to see more on how disciplines can build better guardrails here.

Joe Jordan

TL;DR: machines cannot produce surplus value.

On this point the Marxists and MBAs agree, though the MBAs call it NPVGO instead of surplus value.
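For readers who haven't met the acronym: NPVGO is the net present value of growth opportunities, the slice of a firm's share price not explained by its existing earnings stream. The standard textbook decomposition (added for context, with the usual symbols: P_0 the current price, EPS_1 next year's earnings per share, r the required return):

```latex
% Share price = assets in place (earnings as a flat perpetuity) + growth
P_0 = \frac{\mathrm{EPS}_1}{r} + \mathrm{NPVGO}
\qquad\Longrightarrow\qquad
\mathrm{NPVGO} = P_0 - \frac{\mathrm{EPS}_1}{r}
```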

CansaFis Foote

…the world could actually use more lamplighters, not fewer…not only is that a rad job, but imagine a city lit by oil lamps, sure instafluencers would eventually destroy it, but dear gersh the beauty…

Rachel Jacobs

From what I've heard from professional scientists, a lot of them are crappy writers. AI may on average improve their writing. But on the other hand, if lots of them use AI to write their papers, the ones who write their own are going to stand out. And standing out is very valuable. I think the next step after lots of mediocre AI papers is that the pendulum will swing the other way and there will be more promotion for those who do not use AI, because they will stand out.

Victualis

The problem is that, with increasing use of LLM reviews, it may well become a liability to stand out. If human-written text is harder to publish, then people will stop writing text. In some fields (chemistry comes to mind) most papers are already expected to conform to a super-rigid template, which gives LLMs a big advantage up front.

Rachel Jacobs

We'll see. I work for a company that often competes in arenas where we have to sign a statement saying that we didn't use AI. We have to stand out to a particular chosen group of people as the best. I imagine some of our competitors cheat, but we do well enough that it can't be helping them that much. So, as of now, when it's worth it, humans can still compete with AI pretty well.