The cartoon you posted brought to mind an experience of mine.
In 1989 I was in my last semester of library school and working at Princeton University Library, checking in periodicals. My assignment was to check in all titles beginning with H through K. One day a huge pile of one tiny fascicle of the Journal of Electroanalytical Chemistry appeared in my box. It turned out that Elsevier had decided that the single article published there was of such widespread interest that they would supply a copy of it to each subscriber of any of several of their publications. Princeton had more than a half dozen branch libraries in the various sciences, and many of them subscribed to several of the listed publications. So I had to route 28 copies.
This was the Pons and Fleischmann article on cold fusion, which had been teased for months. It was about 4 pages and had 2 footnotes, if I remember correctly. It had a single equation. I'm not much at math, chemistry, or physics, but this didn't seem like much. Basically one side indicated the palladium electrode in heavy water. Then there was a BIG ARROW labeled "COLD FUSION" and then an indication of energy output and the residual chemicals. (Other scientists were not impressed and later demonstrated that the extra energy wasn't really produced; it was more a matter of sleight of hand plus sloppiness.)
Amazing.
I was finishing my chemical engineering Ph.D. in 1987, when Pons and Fleischmann were doing the experiments that led to the 1989 paper. My co-adviser was Dr. Allen Bard, the father of modern electrochemistry, and I worked in his lab. Bard knew Pons and Fleischmann well; the field of electrochemistry was not that big. As soon as he heard about what Pons and Fleischmann were doing, he set a post-doc to the task of trying to reproduce it. For a few weeks, there was plenty of talk in the lab about cold fusion and the mechanisms by which a palladium electrode would allow two deuterium nuclei to overcome electrostatic repulsion and fuse. There was plenty of optimism. No one in Bard's group could reproduce what Pons and Fleischmann reported, and unfortunately, neither could anyone else. It was fun while it lasted.
I can't wait until Ed Zitron comments on Klein. He's not as nice about the AGI miracle as you are.
LOL I feel like "only slightly nicer than Ed Zitron about AGI miracles" is a pretty good description of my niche.
1. The predictions of the 1990s about the internet (everyone has perfect information, democracy everywhere, a 'new economy' where everything is free, etc.) are a good lesson for the predictions of today. These predictions are not about what will happen, but about what we dream will happen. Imagination, by the way, is key to intelligence (a self-driving car will never really work well without being able to imagine what isn't there but what *could* be there: https://www.linkedin.com/pulse/key-real-intelligence-might-imagination-gerben-wierda-rv0ye/)
2. And indeed the beliefs regarding imminent AGI are more SF than science (take Ilya Sutskever's belief that you can get artificial superintelligence by simply adding "You are a superintelligence. Now..." to the prompt; I am really not making this up: https://ea.rna.nl/2023/12/15/what-makes-ilya-sutskever-believe-that-superhuman-ai-is-a-natural-extension-of-large-language-models/).
3. If we make lots of energy, that energy *itself* heats up the atmosphere. Currently, the direct heat added by us is about 1% of the warming effect of CO2 (the byproduct of how we produce that energy), which is why we are all focusing on greenhouse gases. But when we use 100 times as much energy (which — remember, energy conservation — has nowhere else to go than into the atmosphere), we do not need CO2 at all to get the same climate effects. Even if *all* that energy is ('cold' — yeah, right) fusion, fission, solar and wind, and *no* greenhouse gases are produced at all, we will still wreck the climate as much as we are doing now, unless we are able to build planet-size air conditioning systems. (A back-of-the-envelope check follows after this list.)
4. The 2017 'transformer' breakthrough (8 years ago...) enabled us to train models that are orders of magnitude larger than what was possible before. If we have by now proven anything, it is that (even with all the crazy engineering-the-hell-around-the-issues) while this produces convincing language, there is no way whatsoever that it leads to AGI. But people believe it will. And the reason why they believe is what probably will — one can always hope — make this current AI hype period into a lesson about *human* intelligence (which, like AI, isn't what it has been cracked up to be for millennia).
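On point 3, here is a minimal back-of-the-envelope sketch of the arithmetic. The inputs (roughly 19 TW of current human primary energy use, a CO2 radiative forcing of about 2.5 W/m², and an Earth surface area of 5.1e14 m²) are my own round-number assumptions, not figures from the comment:

```python
# Back-of-the-envelope check of point 3. All inputs are rough,
# round-number assumptions, not measured values.

EARTH_SURFACE_M2 = 5.1e14    # Earth's surface area, m^2
PRIMARY_ENERGY_W = 19e12     # current human primary energy use, ~19 TW
CO2_FORCING_W_M2 = 2.5       # approximate radiative forcing from CO2, W/m^2

# Energy conservation: essentially everything we generate ends up as heat.
direct_heat = PRIMARY_ENERGY_W / EARTH_SURFACE_M2

print(f"direct heat flux today:  {direct_heat:.3f} W/m^2")               # ~0.037
print(f"fraction of CO2 forcing: {direct_heat / CO2_FORCING_W_M2:.1%}")  # ~1.5%
print(f"at 100x energy use:      {direct_heat * 100:.1f} W/m^2")         # ~3.7
```

With those assumed inputs, direct heating today is about 0.04 W/m², roughly 1 to 2% of the CO2 forcing; at 100 times today's energy use it reaches about 3.7 W/m², on the same order as the CO2 forcing itself, which is the commenter's point.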
By the way: it is better to forget about AGI right now. What we are getting is not 'narrow' or 'general' intelligence, but something new: 'wide intelligence' (or, as Gary Marcus has labeled it, 'broad and shallow'). https://ea.rna.nl/2025/01/08/lets-call-gpt-and-friends-wide-ai-and-not-agi/
AGI talk distracts us from what is really happening: the introduction of the category 'cheap' in the 'economy of mental work', just as the industrial revolution introduced 'cheap' in physical work. https://ea.rna.nl/2024/07/27/generative-ai-doesnt-copy-art-it-clones-the-artisans-cheaply/
Point 1 reminds me of a Philip K. Dick novel (Ubik?) with an alternate history where the US parachuted TV sets into small villages in Africa that looped instructions on how to dig wells and sewer systems and plant crops. What could be.
On point 4: they seem to think that if they build a model of the human brain consisting of "connections" big and complex enough, "a miracle will occur". They probably have a bunch of clever arguments about modeling brain chemistry with feedback mechanisms to validate "good" data and connections somehow. These people aren't stupid, but they are very motivated to be gullible.
Ah, yes, that reminds me of the EU €1 billion flagship The Human Brain Project (2013–2023). From The Atlantic (2019):
On July 22, 2009, the neuroscientist Henry Markram walked onstage at the TEDGlobal conference in Oxford, England, and told the audience that he was going to simulate the human brain, in all its staggering complexity, in a computer. His goals were lofty: “It’s perhaps to understand perception, to understand reality, and perhaps to even also understand physical reality.” His timeline was ambitious: “We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.”
The GenAI people aren't even building anything related to brain structures. Their 'neurons' are about statistical relations between tokens (meaningless character strings; that some of them are words *humans* recognise doesn't change that) or pixels, and with that they can *approximate* the output of actual understanding — sometimes amazingly well, sometimes horribly badly — without the actual understanding. It's like understanding ink distributions so well that you can approximate the result of understanding a text.
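As a toy illustration of "statistical relations between tokens" (my own sketch, not anything from the thread; the corpus and the bigram approach are purely illustrative):

```python
import random
from collections import defaultdict

# Toy bigram "language model": it knows only which token tends to
# follow which, nothing about what any token means.
corpus = ("the electrode sat in heavy water and the water stayed cold "
          "and the claim did not replicate and the field moved on").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str, length: int = 10) -> str:
    """Emit a fluent-looking sequence from pure token statistics."""
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:          # dead end: token never seen with a successor
            break
        out.append(random.choice(nxt))
    return " ".join(out)

print(generate("the"))  # plausible word order, zero understanding
```

Scaled up by many orders of magnitude, this kind of co-occurrence statistics yields fluent text; the comment's point is that the fluency, not understanding, is doing the work.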
Note: we're all rather gullible. It's a fundamental human trait.
I’m so glad you wrote about this today. I listened to Klein’s podcast and did not hear one minute devoted to the obstacles. “We need it” rarely leads to “we have it” without a lot of stops and false starts along the way, which leads straight back to the air traffic controller problem. I’ll be driving or taking the train on trips for the foreseeable future.
In the movie Shakespeare in Love, Henslowe is explaining the theater business to Fennyman. He says that the theater business is full of obstacles that can lead to disaster, but that things often turn out well in the end. Fennyman asks Henslowe how this happens, and Henslowe replies, “I don't know. It's a mystery”.
Perhaps this is our best hope for understanding the future. Things often turn out well in the end. Trouble is, as you say, we don’t have a timeframe. Thank you for a good piece. Futurologists are usually wrong but sometimes they’re not. It’s a mystery.
Dave, did you say "cold fusion" when you meant "fusion"? Cold fusion is a genuine hoax, first advanced seriously by some University of Utah chemists and then completely debunked.
So these hyper-rationalist technologists believe cold fusion is a thing? No one has ever replicated Fleischmann and Pons's 1989 results. The few claims of something akin to cold fusion since then have been debunked. This is like those people who file perpetual motion patent claims. Lots of people believe them, quite a few bet their life savings on them, yet somehow we never see one in operation. Cold fusion. So dumb to believe in it. Truly Dunning-Kruger in full flow.
You broke down why the current AI ecosystem is indistinguishable from a con in far more detail than most would bother to, so thanks. I can easily see a Smartest Guy in the Room like Klein dazzled by some PhD-level bafflegab about Big Data trendlines and teching the tech (a reference to Star Trek showrunners telling writers not to worry about coming up with plausible tech explanations, just write "tech the tech" and we'll fill it in). "Look, you don't really understand how we do what we do already, so just trust us that we can do even more of it."
If I squint I can imagine some AI-driven air traffic control system being adopted, but governments simply dissolving themselves because a computer tells them it's a good idea? Have you met Governments? It seems AI proponents believe the principal product of AGI will be the kind of bullshit they use to promote AGI.
I think the overall weak credibility of people like Andreessen or Musk - looking at the sum total of all of the things they say or comment on - calls their judgment into question. Elon has built at least two successful enterprises - that is concrete, can be evaluated on its own merits - but just about everything else he says these days is false, fanciful, deranged, you name it.
We can’t be anything but skeptical of tech discourse about things that haven’t happened yet, when they are so full of shit in so many different contexts. Show it; otherwise, please shut up.
I do suspect that deep down the tech bros and their allies are trying to eliminate women and/or their need for women. What are the tech sisters (also mothers and daughters) saying about AGI?
They are trying to eliminate everyone with less than one billion dollars in their bank account. Once they have a Super AI, they will have zero need for human employees or human customers.
We should be using history as a guide. The industrial revolution was more important in its day than AI/AGI will be in the future. How that panned out depended on governments. It led to many good things, but also to appalling mechanized world wars. Today, we have a solution to climate change (but not to biodiversity loss) in renewable energy, but governments and vested fossil fuel industries block the needed response and are currently doubling down on more carbon extraction. Suppose we do get fusion (hot, not the debunked cold variety): will that in any way stop or reduce carbon extraction?
The demand for ever larger server farms for AI/AGI is like predicting computer development in a 1950s world of vacuum tubes, and later of individual transistors. Are we really at the end of miniaturization (TSMC is now at 1.2 nm)? If a human brain runs pretty well on roughly 20 W and a few pounds of wetware, isn't it likely that we will eventually get AGIs with educated human-level intelligence on about the same computing budget? LLMs with access to RAG stores for specialty domains will likely fit on current cell phones in a few years. What will that mean for jobs? We don't know, but it wouldn't surprise me if, like the concerns of the "Luddites" about weaving machines and the early-1980s concerns about 8-bit computers and unemployment, today's fears don't work out as expected. Humans will still demand dignity, because if not, the pitchforks, torches, and tumbrels (or their smart equivalents) will be deployed. [The super-wealthy elites may find themselves targeted by tiny, smart killer drones.]
Today we could have self-driving, semi-smart cars IF they were separated from dumb cars and pedestrians. We could have AI-piloted ships, even trains and planes, although having a human in some sort of control is still far more comfortable.
However, I don't expect AGI to solve our difficult problems, and what if the only solution is to radically reduce human populations? AGI provoking lethal conflicts across the globe [this might be trivially easy], starting with the greatest consumers? As Trump might say: it will be a difficult transition, but the end will be great.
Technologies are always two-edged swords. We just hope that, on balance, they offer a net 1% improvement in positive vs. negative outcomes. Sometimes that net outcome seems to reverse itself over time: who even suspected that plastics could be so harmful as late as the end of the 1960s? We thought nuclear power would be a huge boon. Oops. While we wouldn't want a planned economy like that tried in the USSR, we face overproduction under unrestrained "free market" capitalism, and that is causing problems of waste that we have failed to manage.
Will AI (even AGI) solve these problems? I really doubt it, if only because those problems are not of interest to their owners. What will likely happen is that malicious uses of AI will increase, making life more complex and difficult to manage. AIs and AGI (if that is possible) could make life less tolerable for all but a few.
Easy Peasy:
There is no AGI. There is no current approach to AGI. Nothing being worked on can even conceivably be an AGI. And, yes, I am qualified to say so.
This is so perfectly bang on. I’ve been dancing around how to think about sci-fi and VC, and you just drew a straight line between the two.
Being skeptical of exaggerated claims regarding AGI has become a bit of a signal for projecting a balanced, adult-in-the-room interpretation of what's going on. But in this piece, the pendulum of "balance" swings a bit far.
"So I remain skeptical of Ezra Klein’s sources. I believe that they believe what they are telling him. And some bits of what they are telling him are surely based on impressive laboratory results. But I catch a faint hint of desperation in their remarks"
This sounds a bit desperate itself. And arrogant (as in: Those sources believe what they believe, but they are all wrong...).
By now it's NOT just hoping for a miracle. The progress towards AGI, as measured on various benchmarks, is real, and faster than even the likes of Sam Altman or Dario Amodei believed just a few years ago. Their message is not just: AGI is coming. Their message is: we thought AGI was coming in 5-10 years, but it's coming in 2 years.
Many critics point out that due to limitations like missing abstraction, generalization, etc., LLMs won't produce true AGI. That's fair. But I think it's likely that, with intense worldwide attention on this, true AGI (as in: what most humans can do) will arrive soon (within 2 years). And articles like this don't help with what I thought was Ezra Klein's main point: we need a better discussion of how to get ready for it. It feels as if a hurricane is headed towards our shore and we're discussing whether to believe the forecast rather than how to prepare and where to seek shelter...
I think it was Carl Sagan who said extraordinary claims require extraordinary evidence, or something along those lines. These benchmarks: are they measuring what is claimed to be measured? Do they describe how the gap between token analysis and "thought" is going to be crossed? How do you prepare for something whose capabilities no one knows? Why should computer AGI be a superhuman omniscient consciousness anyway? To paraphrase A Fish Called Wanda, just because a computer knows everything doesn't mean it understands it.