A Memo to OpenAI: You Are Not the Protagonists
Bury their new Voice Engine program in an unmarked grave
(This is going to be a short one. I just need to vent.)
Cade Metz reported this Friday on OpenAI’s newest demo, Voice Engine. It can take a 15-second clip of you speaking and use it to recreate your voice. OpenAI isn’t making the product publicly available yet. They’re testing it to make sure it’s safe. And, in the meantime, they want everyone to know what a responsible and futuristic company they are.
It isn’t safe. Of course it isn’t. Are you fucking kidding me?!? The use cases here are (1) crime, (2) crime, (3) CRIME, and (4) TBD.
Here’s another idea we ought to test: let’s mashup a couple of tech billionaire fantasies by sticking Sam Altman on a SpaceX rocket and firing it at Mars. The benefit to humanity is potentially immense. (Just think… at a bare minimum, we wouldn’t have to continue to wade through Altman’s bullshit anymore!)
OpenAI has been playing this game for years now. They want to both look like the leading-edge company that is building the future and also look like the responsible company that is determined to get it right.
This reflects what I believe to be Altman’s one unique skill. The guy is like if ChatGPT created a tech CEO. He speaks and behaves as though he has ingested the performances of past CEOs and is reconstituting their performances according to a script.
Altman knows that a tech CEO in 2024 is supposed to mix old-school techno-optimism with acknowledgements that the potential social impacts of his technology are so awesome that they will require regulation. (But, mind you, it has to be just-the-right-regulation. Preferably regulation that accommodates OpenAI and impedes its less-responsible competitors.) He knows that the company is supposed to always tease new products that keep us focused on what’s coming next instead of how the products work today.
As a scholar of strategic political communication, I marvel at what an impressive job Altman and the OpenAI comms team have done. With the exception of, y’know, that week when he got fired by the board, and then staged a counter-coup to overthrow the board, the company has put on an absolute clinic for how you stage and sequence product releases to maximize market share while minimizing blowback.
But that’s still ultimately just comms. The folks at OpenAI have adopted the pose of a protagonist: (1) The AI revolution is coming. (2) It will be an awesome social transformation. (3) It could turn out great or terribly. (4) Thank God we have OpenAI leading the way. They are the responsible ones. We’re in good hands so long as we give them all the data and all the funding and don’t bog them down with too many lawsuits or regulations.
And the thing is… No. Just no. They are not the protagonists. OpenAI are not the “good guys.” They aren’t the responsible ones. (They aren’t even a real nonprofit!)
They have crafted a marvelous story that, much like a Sora video, falls apart upon closer inspection.
They are a profit-maximizing company with a voracious appetite for money, energy, data, and compute. They are led by a CEO who embodies the ethos of Silicon Valley, but lacks the introspection to recognize why that is not entirely a compliment.
They keep rolling out these new products in order to dazzle journalists. That’s a good way to keep up the pace of positive media cycles. But it isn’t good for humanity.
The responsible way to release a product that can clone people’s voices based on a 15-second sound clip is to not create that product in the first place. You don’t need a ton of user-testing to figure out the obvious harms. Bad people will use this for bad ends! The bad will so heavily outweigh the good!
The only reason you should even be developing this product in the first place would be to design adversarial tools that would instantly break it. Or to design watermarking/fingerprinting technology that could help governments immediately find people who deploy something similar, and then give those people a seat on that SpaceX rocket-to-Mars right next to Altman.
It is 2024. The hottest year on record. There’s like a 50/50 chance that an authoritarian demagogue will take over the government later this year and completely erase the administrative state. There are so very many things that good people — real protagonists — could be doing this year in order to improve the world. And what is OpenAI doing? Burning massive energy (not to mention the water costs) to train a neural network to spoof your voice.
There are worse companies than OpenAI. But that doesn’t mean the folks at OpenAI are the good guys. The company is putting so much effort into maintaining its main-character energy.
So I just want to be very clear about this. They might manage to be one of the main characters of 2024. But they definitely are not the protagonist.
There is no use case for this technology. Not everything that can be invented should be invented.
I feel the same way about Sora: there isn’t a single good reason for something like that to exist. Outlaw it.