Bullet Points: A couple of predictions for AI in 2024
The Ghost of Napster, and an upgrade to existing machine learning
Happy New Year, everyone!
Two thoughts to share as we kick off 2024:
(1) I have a new piece in Foreign Policy that explains why “An AI Future is much shakier than you think.” Here’s the heart of it:
The story that I often hear from AI evangelists is that technologies such as ChatGPT are here, and they are inevitable. You can’t put this genie back in the bottle. If outdated copyright laws are at odds with the scraping behavior of large language models, then our copyright law will surely need to bend as a result.
And to them I can only say: Remember the Ghost of Napster. We do not live in the future that seemed certain during the Napster era. We need not live in the future that seems certain to AI evangelists today.
(This is my first time writing for FP, so I would be much obliged if you’d click the link and help me make a good impression.)
(2) Something I’ve been pondering these past couple months: What if generative AI just turns out to be a big upgrade to existing machine learning systems?
Most of what I noticed from AI last year was bad in precisely the ways we all thought it would be bad. It’s a cheap misinformation engine, deployed by private equity to cannibalize newsrooms and undercut creative industries. ((Re)read Ted Chiang’s “Will A.I. Become the New McKinsey?” He captures the force and form of what we’re dealing with.)
There have been two exceptions, though.
The first was a conversation with a friend who works at one of the major tech platforms. He is a level-headed guy, and is also rather AI-pilled.
His explanation was simple enough: he has used machine learning to automate parts of his workflow for years — stuff like automated tagging and flagging pieces of content for human review. Now he is using generative AI to manage those same tasks and… it’s just radically better. It’s not that the AI never makes errors, or is poised to replace him or his colleagues. It’s just that it does all the machine learning tasks 10x better, with 1/10th the effort on his part. The parts of his job that were huge time sinks have just been simplified.
That… makes a lot of sense to me. I am both unsurprised and unconcerned that LLMs do the job better than support vector machines or other classifiers. But it is a type of AI enthusiasm that has nothing in common with the Sam Altman extended fantasy universe.
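To make the “system upgrade” idea concrete, here is a deliberately simplified sketch of the pattern my friend described: the same content-flagging task handled first by a hand-tuned, task-specific rule (standing in for the classic ML pipeline), then delegated to an LLM via a prompt. Everything here is hypothetical — the keywords, the labels, and `call_llm`, which is a stand-in for whichever chat-completion API you actually use.

```python
# Hypothetical sketch: the old task-specific approach vs. the LLM "upgrade".

FLAG_KEYWORDS = {"refund", "broken", "lawsuit", "scam"}

def flag_with_keywords(text: str) -> str:
    """The old workflow: a rule or classifier you build and maintain
    yourself, one per task, with its own training data or keyword list."""
    words = set(text.lower().split())
    return "flag" if words & FLAG_KEYWORDS else "keep"

def flag_with_llm(text: str, call_llm) -> str:
    """The upgrade: the same task, expressed as a prompt. No per-task
    feature engineering -- just instructions. `call_llm` is a placeholder
    for a real chat-completion call."""
    prompt = (
        "Reply with exactly 'flag' if this customer message needs human "
        "review, or 'keep' if it does not.\n\nMessage: " + text
    )
    return call_llm(prompt).strip().lower()

if __name__ == "__main__":
    print(flag_with_keywords("I want a refund, this is a scam"))  # flag
    print(flag_with_keywords("love it, works great"))             # keep
```

The point of the sketch isn’t the ten lines of code; it’s that the second function replaces an entire bespoke pipeline (labeled data, retraining, maintenance) with a sentence of instructions — the “10x better, 1/10th the effort” trade my friend was describing.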
The second exception came up while I was attending MoveOn.org’s 25th anniversary summit. They held a session on how MoveOn is making use of A.I. for activism. I approached that session with a sense of foreboding — I don’t think I’ve heard of a single A.I.-for-activism tactic that has any real force behind it. I wondered if they were chasing the latest shiny object.
I wrote a whole 2016 book about how groups like MoveOn use analytics and machine learning to hone their tactics and strategies. And what I learned in the AI session was that they are trying out AI for the same set of tasks that they were already using machine learning for. They are being mindful of security risks and of AI biases. They are using it to enhance and simplify their existing systems, not to generate original content. AI, in this case, is functionally a system upgrade.
Think of it like replacing a PlayStation 3 with a PlayStation 5. That’s a huge improvement. But it’s still a gaming console. It isn’t going to write your novel or cook your dinner. (That would be weird. Don’t do that. Why would you use a gaming console for that?)
Of course, as gaming consoles go through massive upgrades, we also collectively spend more time on gaming. But this is clearly evolution, not revolution.
The thing that stands out to me is that each of these use-cases is essentially unproblematic. Strip away the hype-veneer by substituting “improved machine learning” for “AI.” We are then left with an amplification of existing trends. The mundane uses of machine learning are rendered still-mundane-but-more-effective.
By contrast, there are many existing uses of machine learning that are deeply problematic (*cough* Palantir *cough*). Improved machine learning will make all those cases much worse. We needed to do something about them before. We desperately need to do something about them now.
But still… here we are, one year into the ChatGPT “revolution,” and what we have to show for it so far is a hype bubble, a bunch of lawsuits, some boardroom drama, a ton of startup funding (because of the hype bubble), and a bunch of upgrades on existing machine learning capabilities.
Compared to Web3 and the Metaverse, that still makes generative AI much more real. (If Generative AI is a PS5, then the Metaverse is the Meta Quest 2 that I stashed in the closet after getting bored of Beat Saber. And Web3 is Axie Infinity, a game that a16z is now pretending never existed.)
And I do expect that AI will continue to improve. There’s a GPU shortage at the moment. That will change. Reinforcement learning will, by its very nature, improve over time. But that doesn’t mean we’re on the path to artificial general intelligence. It seems more likely that generative AI’s impact will eventually be on par with word processing.
I’ve mentioned this before, but it’s worth reiterating: Take a look at James Fallows’s 1982 essay, “Living with a Computer.” Word processing took years before the bugs got worked out and writers adapted to it. And it wasn’t the biggest innovation since fire, but it was still significant and even transformative within boundaries. (I couldn’t write my book without a word processor. But I also know not to expect it to cook my dinner or drive my car.)
So those are my two main predictions for AI in 2024. (1) I think the industry is going to be visited by the Ghost of Napster, and will bend in response to the force of copyright law and industrial-strength copyright-holders. And (2) I think, as the hype bubble starts to deflate, we’ll find that the biggest impacts of the technology are in areas where machine learning was already being used.
Thanks for reading. Happy New Year.
-DK
In 1980, as an undergraduate with ADHD and a typewriter, I was not capable of writing something as long as a dissertation. In 1987, using the lab’s shared Apple computer and word processor, I wrote a dissertation. Cut, Copy, and Paste make it possible for me to write things. Spellcheck too.
It is worth adding that, as Cory Doctorow pointed out in a recent essay for Locus Magazine (and I paraphrase): "this sh!t is expensive!" The resources required to run the liquid-cooled server farms are enormous. This is not financially viable as a profit model if all we have to show for it in the long run are "low-stakes, high-cost" gains (e.g., upgrading from a PS3 to a PS5). Copyright regulation is a barrier (the ghost of Napster-past), and so are hype-driven investment subsidies that make the service seem more profitable than it actually is (the ghost of Uber-present).