I won’t pretend to know what the hell is happening at OpenAI. The story is changing by the hour.
It’s chaos though. And that’s a good thing.
For the past few years, OpenAI has told a near-perfect story: The company was founded by tech luminaries — people skilled at noticing developing technologies and operating as though the future had already arrived. They recognized that we were on the cusp of a breakthrough that would transform, well, everything. And they saw the potential for that technology to be very good or very bad for society.
So these genius-inventors-of-the-future set aside the profit motive and established a company that would develop artificial general intelligence to benefit society. And then, when they realized the sheer computational costs of ushering in the A.I. future, they launched a for-profit wing that could generate the revenues necessary to win the A.I. race.
The results have been astonishing. DALL-E 2. ChatGPT. GPT-4. Copilot. The company has managed to make itself synonymous with generative AI in the same way that Google became synonymous with search.
And Sam Altman has played the role of the daring-but-careful CEO to perfection. He has mounted an incredibly effective charm offensive — simultaneously promising that the age of artificial intelligence has arrived, and warning that the technology is too powerful to be left completely unregulated. Legislators have been thrilled with how helpful and forthcoming he is.
Skeptics (like myself) have noted that there is something disingenuous about his message — Altman insists that AI should be regulated, and all of his proposed regulations happen to benefit his company (regulatory capture for me, but not for thee). Skeptics have also pointed out that OpenAI is basically just a well-branded front for Microsoft’s AI development efforts. When your primary revenue model is “take thirteen billion dollars from Microsoft, spend it on compute costs to develop gigantic LLMs, then license the product back to Microsoft,” then it doesn’t much matter if you are formally governed by a nonprofit board. You work for Microsoft.
But the skeptics haven’t had much success breaking through. The story has just been too good. The cadence of OpenAI’s product releases has been perfectly timed so that, just when people start to notice that the current generative AI tools have a ton of flaws, there is something new to focus on.
Back in May, when he testified before the Senate, Altman was asked what sort of agency ought to be established to regulate AI. One Senator then asked if he would be interested in running the agency. Altman replied no thanks; he was happy in his current job.
I wrote at the time about how uneasy this made me:
One of the reasons why I don’t trust Sam Altman is that he has just been a little too perfect in how he has framed his company. It has a strong whiff of Sam Bankman-Fried circa 2021. Altman is the good one, patiently explaining the opportunities and threats posed by his company’s technology, and proactively calling for responsible regulation (that just happens to support his business and constrain his competitors). And my lord how the big tech journalists and elected officials are eating it up.
Last Friday, the whole story came apart at the seams.
As I write this, Sam Altman and Greg Brockman have been hired by Microsoft. Something like 700 OpenAI employees are threatening to join them. No one quite knows what the hell is going on. OpenAI might not exist next week.
It appears this wasn’t due to some major financial or personal scandal. The reporting so far suggests that Altman was trying to push out new products too fast for the Rationalists/Effective Altruists who make up his nonprofit board. (Including Adam D’Angelo, the CEO of Quora. I did not realize that Quora had a CEO. I remain unconvinced that it needs one. Are we sure that Quora is a real company?)
This is being framed in some internet circles as the first major conflict between Effective Altruists and Effective Accelerationists. And here I should point out that you really ought to read Henry Farrell’s latest post, “What OpenAI Shares with Scientology”:
I’ve never read a text on rationalism, whether by true believers, by hangers-on, or by bitter enemies (often erstwhile true believers), that really gets the totality of what you see if you dive into its core texts and apocrypha. And I won’t even try to provide one here. It is some Very Weird Shit and there is really great religious sociology to be written about it. The fights around Roko’s Basilisk are perhaps the best known example of rationalism in action outside the community, and give you some flavor of the style of debate. But the very short version is that Eliezer Yudkowsky and his multitudes of online fans embarked on a massive collective intellectual project, which can reasonably be described as resurrecting Vernor Vinge’s hoary 1980s SF cliche, and treating it as the most urgent dilemma facing human beings today. We are about to create God. What comes next? Add Bayes’ Theorem to Vinge’s core ideas, sez rationalism, and you’ll likely find the answer.
The consequences are what you might expect when a crowd of bright but rather naive (and occasionally creepy) computer science and adjacent people try to re-invent theology from first principles, to model what human-created gods might do, and how they ought to be constrained. They include the following non-comprehensive list: all sorts of strange mental exercises, postulated superhuman entities benign and malign and how to think about them; the jumbling of parts from fan-fiction, computer science, home-brewed philosophy and ARGs to create grotesque and interesting intellectual chimeras; Nick Bostrom, and a crew of very well funded philosophers; Effective Altruism, whose fancier adherents often prefer not to acknowledge the approach’s somewhat disreputable origins.
Farrell concludes that “The OpenAI saga is a fight between God and Money,” and money will most likely win. And, yes, I think that’s right.
But what I keep fixating on is how quickly the entire story has unwound itself. Sam Altman and OpenAI were pitching a perfect game. The company was a $90 billion non-profit. It was the White Knight of the AI race, the responsible player that would make sure we didn’t repeat the mistakes of the rise of social media platforms. And sure, there were questions to be answered about copyright and AI hallucinations and deepfakes and X-risk. But OpenAI was going to collaborate with government to work that all out.
Now, instead, OpenAI is a company full of weird internet nerds who burned it down over their weird internet philosophical arguments. And the whole company might actually be employed by Microsoft before the new year. Which means the AI race isn’t being led by a courageous, responsible nonprofit — it’s being led by the oldest of the existing rival tech titans.
These do not look like serious people. They look like a mix of ridiculous ideologues and untrustworthy grifters.
And that is, I suspect, a very good thing. The development of generative AI will proceed along a healthier, more socially productive path if we distrust the companies and individuals who are developing it.
The story Altman had been telling was too good, too compelling.
He will be far less effective at telling that story now. People are going to ask tougher questions of him and his peers. They might even ask follow-ups to his glib replies. I could hardly imagine a better outcome.
This chaos is good. It is instructive.
Let them fight.