My advisor in college was named Paul Dawson. Dawson was a legend, a master of his craft. He taught an 8am, 200-person intro to politics course that kept the entire room dialed-in for the full two hours.
I’ve been thinking about one of the lessons from his class recently, ever since reading Jonathan Katz’s Atlantic piece, “Substack Has a Nazi Problem.” The lesson was about resource allocation decisions. And, also, about mouse-shit-in-cereal-boxes.
The class session begins with a simple question: “What is the maximum amount of mouse poop that ought to be allowed in your breakfast cereal?”
The question hangs in the air for a moment. Everyone senses there must be some hitch. Then some brave, foolish soul raises their hand and offers the obvious answer. “Uh… zero? I think there should be zero mouse poop in my cereal box.”
From there, Dawson walks us through a Socratic exercise. How might the FDA enforce this zero-mouse-droppings mandate? Should it place round-the-clock observers at every cereal factory? Open, test, and repackage every box of cereal before delivery to the grocery store? Enforce a strict, one-strike policy where, if ever a trace amount of mouse poop is found, the company responsible is permanently shuttered?
We reject each of these options as obviously too draconian. There should be government inspections, yes. There ought to be penalties for corporate negligence. But the price tag associated with making absolutely sure that there is not one drop of mouse excrement in our breakfast cereal would be astronomical. We do not actually expect cereal factories to be hermetically sealed clean rooms that no rodent could ever penetrate.
The reality settles in for the class (who, mind you, have just eaten breakfast!). The maximum amount of mouse crap that we actually think the government should permit in our breakfast cereal is greater than zero. Preferably not much greater than zero. But, when we really pause to think about it, there is a ceiling on the amount of resources we think ought to be spent on this problem.
As a practical matter, we — the breakfast cereal-eating public — would prefer not to think about it. If and when the amount of mouse droppings passes the threshold where we start hearing about it, that will be the signal that the level is too high and the problem requires more resources and regulatory scrutiny.
Keep in mind, this is an essay about Substack’s Nazi problem. And, yes, I’m going to argue to you that the Nazis in this case are a bunch of mouse shit. Allow me to add one more point first.
Tech libertarianism is, fundamentally, an ideology for people who are both cheap and lazy. That is the great advantage that attracts businesspeople to adopt a libertarian perspective on speech regulation. If your first instinct about content moderation is “I would rather not think about this, it shouldn’t be my problem, and I definitely don’t want to spend any resources on it,” then libertarianism is the ideology for you.
And there are contexts where, philosophically, they’re not entirely wrong. When Matthew Prince, the CEO of Cloudflare, decided in 2017 to rescind protections for the Daily Stormer (the website where white nationalists had hatched their plan for the “Unite the Right” Charlottesville rally), he published a long, thoughtful blog post explaining why he had made the decision, but also why he thought it was ultimately a bad thing that the CEO of a private company was in the position of making these choices.
It is bad and weird that Google, Facebook, Apple, and the rest of big tech have been left to play the role of regulator-of-last-resort. Their executives at times complain, at times correctly, that even if they have the right as private businesses to make these decisions, we would all be better off with some other entity making them.
(The hitch here, of course, is that one reason we have reduced government regulatory capacity to make and enforce these decisions is that these same companies have worked tirelessly to whittle down the size and scale of the administrative state. It has been a project of attaining great power while forswearing any responsibility. Which is, y’know, really not great!)
But the philosophical issues are secondary to the pragmatic ones. Pragmatically, it’s really quite simple. Content moderation is costly. It is a first-order revenue sink, not a revenue-generator. (I say “first-order” because if you skimp on content moderation, eventually you’re going to have a cascade of problems that cost you a lot of money <*cough* ElonyouidiotNilayPatelwarnedyou *cough*>.) But the KPIs for the content moderation team are never going to be “look how much new business we brought in for the company.” When content moderation is going well, you don’t hear much about it. That’s kind of the goal.
This is why every tech CEO loves the libertarian approach to speech issues. Tech libertarianism holds that someone else (or no one at all) should expend resources on setting and enforcing boundaries for how your product is used. The essence of the position is “I shouldn’t have to spend money on any of this. And I shouldn’t ever face negative consequences for not spending money on this.”
(It’s a bit like someone who refuses to tip at a restaurant and insists it’s because they believe philosophically that the whole system is unjust and restaurants ought to pay fair wages to their workers. Sure! Fair point! But in the meantime, here and now, you’re still being a cheapskate asshole.)
Which brings us back to Substack’s Nazi problem.
I contributed to and co-signed the “Substackers against Nazis” open letter. I stand behind what we wrote, and if Substack’s leadership decides to ignore the issue, that’s going to be a signal that it’s time for me to move this newsletter to another platform.
And, also, the maximum number of internet Nazis that Substack is going to allow on its platform is, in fact, slightly greater than zero.
Substack has adopted a rather laissez-faire approach to content moderation. They have done so for pragmatic reasons (it’s so cheap!) and stapled on a convenient, high-minded philosophical justification (it’s the marketplace of ideas! We ought not be in the business of policing speech!).
The larger Substack grows and the more valuable the product becomes, the more the company is going to have to expend resources on these mouse-shit-internet-Nazis.
Because what is absolutely untenable is for the number of internet Nazis using the platform to get so large that Jonathan Katz can notice it, and write a whole Atlantic piece about it. Once the Atlantic is reaching out to your comms team about the internet-Nazi-mouse-shit, that’s a sign that you are well above the practically acceptable amount of internet Nazis using your site.
Substack can’t stop these assholes from creating new newsletters. But Substack certainly ought to staff its content moderation team well enough that, once those newsletters are monetized, once they’ve been around spreading hate long enough to get some traction, it catches and nukes them.
The way to keep Substack effectively internet-Nazi free is to make it clear that, even if the company might not catch them launching a sad angry white nationalist newsletter, the newsletter will surely be vaporized at the first sign of success.
This will require resources from Substack. Resources that the company would rather not expend, because doing so is a first-order revenue sink. But it’s also both the right thing to do and the smart thing to do. Because otherwise over a hundred of your authors will simultaneously post articles with titles like “Substackers against Nazis,” and will start looking hard at exporting their newsletters to competing platforms.
(Which, of course, we will. What did they think we were going to do? Nothing?)
Substack’s answer thus far to its Nazi problem is to pretend it doesn’t have one, while offering tired tech libertarian pablum about providing everyone with the tools for reaching their audience. That’s a bad answer.
The correct amount of white nationalist newsletters on this platform is not exactly zero. That’s practically unworkable. But it’s approximately zero. There should never be more than trace amounts of internet Nazis on a big publishing platform with ambitions for becoming bigger. Once you have enough internet Nazis that readers and writers can notice, it’s time to put resources into cleaning the place up some more.