Artificial Intelligence is a bad term, but "machine learning" is arguably just as anthropomorphic. The machines are solving pattern-matching problems, but they aren't learning in any meaningful sense, any more (or less) than they learn when we use them to estimate, say, a model of the macroeconomy. "Large Language Models" is an accurate description.
Two failure modes. I like that; we've seen it. I just finished the first book of Kim Stanley Robinson's Mars Trilogy. I doubt I'll slog through the later ones. He wrote it in the early 90s and it's set in... wait for it... 2025. Or it begins there. The assumption is that the Space Shuttle program went incredibly well, that they found a way to push lots of those big fuel tanks into orbit and used them to build big space habitats, etc., including the Mars ship that launches with 100 scientists and others aboard in 2025. (At the last minute the first man to walk on Mars, from several years previous, is added to the crew.) So all things go incredibly nominally and it all ends up looking like Elon's fantasy. Except for the civil war in space, of course.
There's so much fantasy (not masquerading as science fiction, like this) about what the future will be, while hardly anyone figures on the ordinary slippage, the missed deadlines, and the things falling apart unexpectedly that we all experience as big projects, like living an ordinary human life, move forward. What's scary is that things do fall apart, and they fall apart in direct relation to how grandiose our expectations are.
If the AI bubble is bigger than the dotcom bubble and the housing finance bubble ("let me give your parakeet a $3M mortgage, nothing down, no documents required") and it's going to last longer, that makes me fear that the reckoning will be several times worse than the Great Recession--not a happy prospect given that my current income is entirely a combination of the full faith and credit of the U.S. government, which we've entrusted to the guy who wants to fire the head of the Federal Reserve, and pension instruments completely invested in a financial system that depends on the stability of that government.
Breathe deep. Simplify. Maybe it will hold on until my sell-by date in the late 2030s or early 40s.
This might sound pedantic but it's something I'm genuinely curious about. People are always pointing out that LLMs aren't 'intelligent'. I'm not an expert on LLMs but I have a good general understanding of how they work, so I understand the argument. But I have used an LLM to build a fully-functional software application by providing high-level requirements in natural language, just as any human software developer would. So if that is not a demonstration of intelligence, then do we conclude that writing software doesn't require intelligence?
I just wanted to add to the notion that the AI 2027 paper insists the results will be disastrous. It does have a "choose your own adventure" ending, and the happy version, I guess we'll call it, is a global AI-assisted coup, in China and elsewhere:
"The protests cascade into a magnificently orchestrated, bloodless, and drone-assisted coup followed by democratic elections. The superintelligences on both sides of the Pacific had been planning this for years. Similar events play out in other countries, and more generally, geopolitical conflicts seem to die down or get resolved in favor of the US. ... A new age dawns, one that is unimaginably amazing in almost every way but more familiar in some."
So maybe they're saying it's more like 50% disaster, 50% the CIA drone-couping the world and all of us obviously living happily ever after.