I’m worried that Tesla’s self-driving cars will become a reality in the next four years, not because they get any better, but because the government gives them a “get out of liability free” card.
This scares me. I have almost been hit while walking my dog by people in Teslas, and I wonder if they were using the FSD package.
My colleague at work has a late-ish Model 3 with the FSD package, and even on the 1.8-mile drive to lunch he has had to take the wheel multiple times, so I am more than a little terrified of it in the wild, and Musk getting the regulators to relax their scrutiny scares the hell out of me.
They won't because it breaks the "[insert unviable Musk tech of choice] will become a reality in FIVE years" paradigm. It's always five. We're safe from Tesla if it's only four.
Bravo. This needed to be said, and it is elegantly said.
Worth mentioning Henry Farrell's thoughts on the likely uses of AI (specifically LLMs) -- his piece does a good job of acknowledging and thinking through what the technology is actually likely to be good for: https://www.programmablemutter.com/p/the-management-singularity
"... If you have worked for such an organization, you will know that they rely extensively on written material. They spend a lot of time and resources on organizing and manipulating this information. Past a certain organizational size, this is really hard to do well. No individual can actually know what the organization knows as a whole - there is far, far too much knowledge, and it is far too badly organized. Hence, large organizations devote a lot of human and organizational resources to gathering information, mixing it together with other kinds of information, sharing it with people who need to have it, summarizing it for those who don’t have time to read it all, reconciling different summaries and summarizing them in turn, figuring out ex post that some crucial bit of information has been left out and adding it back in or finding a tolerable proxy, or, worse, not figuring it out and having to improvise hastily on the spot. And so on.
LLMs provide big organizations with a brand new toolkit for organizing and manipulating information. It is far from a perfect toolkit: actually existing LLMs work best where you are prepared to tolerate a certain amount of slop, tend to bland out things so that the interesting weirdnesses disappear etc. But there is a lot that it can do, and there are a few applications (discussed below), where they will work very well indeed."
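Farrell's summarize-and-reconcile pipeline is easy to picture in code. Here is a minimal sketch in Python, with `llm_summarize` as a hypothetical stand-in for whatever chat-completion call an organization would actually wire in:

```python
def llm_summarize(text: str, max_words: int = 200) -> str:
    # Placeholder: just truncates. Swap in a real chat-completion call here.
    return " ".join(text.split()[:max_words])

def summarize_corpus(documents: list[str], max_words: int = 200) -> str:
    # First pass: summarize each document for the people with no time to read it all.
    partials = [llm_summarize(doc, max_words) for doc in documents]
    # Second pass: reconcile the summaries and summarize them in turn --
    # the step where a crucial bit of information can silently drop out.
    return llm_summarize("\n\n".join(partials), max_words)
```

Every level of that second pass is lossy, which is exactly where his "figuring out ex post that some crucial bit of information has been left out" failure mode lives.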
There's a word that has to be dealt with regarding most tech magic, whether self-driving cars or "AI" however you define it, and that word is "liability". When tech magic kills, imprisons, or libels people, it has to be dealt with in our legal system. Look for pre-emptive legislation putting limits around these companies' liability before their products are released to the general public.
The one thing I don't understand (and I do see it mentioned once in a while): what happens when Altman gets all of his money and the AI has digested as much text as it can? For shits and giggles, let's say ALL of it somehow. And in the meantime, it has not paid any content creators any money for using their work. And they have perfected it to spit out its bland but mostly right answers. What happens when mostly the only content it is consuming is content the AI itself has created (shows, movies, books, help docs, Wikipedia summaries, whatever)? Then what? Am I supposed to believe it will be creating content that is indistinguishable from human-created work? And let's say it does. Then what?
Personally, I am at the old-man-get-off-my-lawn point of my life, and the more tech advances, the more tech-averse I become. And I can't be the only one. Neo-Luddism, here I come. The coolest thing I have seen AI do so far is Halloween yard-decoration skeletons that automatically recognize the costumes of people walking by and make appropriate comments. Wow :-/ I don't think any of us would have predicted what the killer application for the internet would be (porn), or for social media (propaganda and false information), or for phones (relationship replacement).
I mean, you are right. Somehow, they have to justify the vast amounts of money being spent. But for regular joes like me it is like watching physicists argue about dark matter. None of it matters. For all of Microsoft's money spent on this, I have still basically used basic Outlook, .xls, and .doc, with a smattering of .ppt, at work for the last 40 years. And that is true of MOST people in the world who use it. All of it just feels so bubbly, just like the past, for those of us who have experienced this multiple times. But what do I know; I am admittedly an idiot. But just like politicians, I don't think tech bros have any clue what a normal person deals with every day and how much we don't care about any of this. I mean, everyone says this, but REALLY, isn't it just Clippy? Isn't it just organizing and regurgitating all the support and product documentation that nobody wants to organize or write? Is it telling that after MS's last big advancement, MS 365, I still open all my files on the desktop? And their last real big advancement is Game Pass? Has Google EVER REALLY moved past Search, Ads, and maybe Gmail and Photos? Be honest. I dunno. Call me a skeptic. This all just feels like more tech BS.
One thing that happens when Sam gets all that money is that the energy required alone will — even if it is all carbon-free — put enough heat into the atmosphere that the warming is equivalent to what we now get with CO2/CH4 driven greenhouse effects.
Good point. I left that out. The potential environmental damage is ALWAYS conveniently glossed over. So NOW we can try nuclear power again? To power super Clippy? Geez.
And the numbers are: direct heat into the atmosphere from our energy production is at this point roughly 1% of the problem that carbon greenhouse effects are. But increase energy use a hundredfold, and even with everything added being fusion/fission/solar, the atmospheric warming from that production alone is equivalent to carbon effects now. Of course, that is the 'good' scenario. The bad one is that the growth will effectively come from more greenhouse-inducing energy production. Sam may want to 'solve all physics', but physics tells us his physics will be a problem, not a solution.
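For anyone who wants to check that arithmetic, here is a rough back-of-envelope in Python. The inputs are round-number approximations (about 600 EJ/yr of global primary energy use, about 2.7 W/m² of total anthropogenic radiative forcing), not precise figures:

```python
SECONDS_PER_YEAR = 3.15e7
EARTH_SURFACE_M2 = 5.1e14        # total surface area of Earth
ENERGY_USE_J_PER_YEAR = 6e20     # ~600 EJ/yr global primary energy use
GREENHOUSE_FORCING_W_M2 = 2.7    # rough total anthropogenic radiative forcing

waste_heat = ENERGY_USE_J_PER_YEAR / SECONDS_PER_YEAR / EARTH_SURFACE_M2
print(f"direct waste heat today: {waste_heat:.3f} W/m^2")              # ~0.037
print(f"share of greenhouse forcing: {waste_heat / GREENHOUSE_FORCING_W_M2:.1%}")  # ~1.4%
print(f"waste heat at 100x energy use: {100 * waste_heat:.1f} W/m^2")  # ~3.7, same ballpark as forcing
```

At a hundred times today's use, direct waste heat alone lands in the same ballpark as today's total greenhouse forcing, which is the point above.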
I guess, like the ouroboros I alluded to above, we just ask the AI to solve this calamity ¯\_(ツ)_/¯
Which is like what Ilya Sutskever answered when asked how LLMs would become ASI: you simply add 'You are a superintelligence' to the prompt. I'm not kidding: https://ea.rna.nl/2023/12/15/what-makes-ilya-sutskever-believe-that-superhuman-ai-is-a-natural-extension-of-large-language-models/
"What happens when mostly the only content it is consuming is content the AI has created (shows, movies, books, help docs, Wikipedia summaries, whatever). Then what? Am I supposed to believe it will be creating content that is indistinguishable from human created."
It won't. Or, more specifically, it won't be able to create much of the kind of content humans are able to create, even if everything it does create appears human. And that's because GenAI requires prior human work for it to copy, and is incapable of the kind of creativity that transcends what came before (e.g. new styles of music and visual art). This is absolutely foundational to the technology; anyone who tells you otherwise doesn't understand how it works. When the AI people talk about "training" AI, they're referring to a process by which it practices mimicking what it's seen until it's able to produce output that looks like its input.
Sometimes an AI enthusiast will respond with "well, people also mimic what they've seen". And that's true, people often do this. Lord knows the world had plenty of derivative art before "Stable Diffusion" came around. But we don't always mimic, otherwise today's art would still be cave drawings. The great variety in human creativity - in the arts, science, technology, language, philosophy, sport, food, etc - is itself a demonstration of what we can do that GenAI cannot. The products of human creativity are so ever-present that we take this for granted. Everything around us: humans created it. The best GenAI can do is make realistic copies of our creations.
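The "AI eating its own output" worry upthread also has a standard toy illustration: fit a simple model to data, sample from the fit, refit to the samples, and repeat. Here is a sketch in Python with a Gaussian standing in for the "model" (a deliberately crude analogy, not a claim about any real training pipeline):

```python
import random
import statistics

def fit_and_resample(data: list[float], n: int) -> tuple[list[float], float]:
    # "Train" the model: fit a Gaussian to the current dataset...
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    # ...then let the model generate the next generation's dataset.
    return [random.gauss(mu, sigma) for _ in range(n)], sigma

random.seed(0)
n = 20
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: human-made data
for gen in range(1, 501):
    data, sigma = fit_and_resample(data, n)
    if gen % 100 == 0:
        print(f"generation {gen}: fitted sigma = {sigma:.4f}")
```

The fitted spread tends to drift toward zero over the generations: each refit only sees what the previous model produced, so the tails (the "interesting weirdnesses", as Farrell put it) get sampled away and don't come back.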
Very good. Really very good.
Personally, my follow-up would be: why aren't the 'realistic critics' making a dent in the hype? What does that suggest to us about *human* intelligence?
“Load bearing predictions” - excuse me while I add that to my idiolect
Great essay. I have a glib contribution: if "AI will solve all of physics" gets people to fork over money, those people don't deserve their money. The phrase sounds cool but means nothing, unless Altman's claiming that GPT-7 is gonna summon Laplace's Demon.