7 Comments
John Quiggin

Artificial Intelligence is a bad term, but "machine learning" is arguably just as anthropomorphic. The machines are solving pattern-matching problems, but they aren't learning in any meaningful sense, any more (or less) than when we use them to estimate, say, a model of the macro-economy. "Large Language Model" is an accurate description.

Andrew Kadel

Two failure modes. I like that; we've seen it. I just finished the first book of Kim Stanley Robinson's Mars Trilogy. I doubt I'll slog through the later ones. He wrote in the early 90s, and his book is set in... wait for it... 2025. Or it begins there. The assumption is that the Space Shuttle program went incredibly well, that they found a way to push lots of those big fuel tanks into orbit and used them to build big space habitats, etc., including the Mars ship that launches in 2025 with 100 scientists aboard. (At the last minute the first man to walk on Mars, several years previous, is added to the crew.) So all things go incredibly nominally and it all ends up looking like Elon's fantasy. Except for the civil war in space, of course.

There's so much fantasy (not masquerading as science fiction, like this) about what the future will be, while hardly anyone figures on the ordinary slippage, missed deadlines, and unexpected falling-apart that we all experience as big projects, like having an ordinary human life, move forward. What's scary is that things do fall apart, and they fall apart in direct relation to how grandiose our expectations are.

If the AI bubble is bigger than the dotcom bubble and the housing finance bubble ("let me give your parakeet a $3M mortgage, nothing down, no documents required"), and it's going to last longer, that makes me fear that the reckoning will be several times worse than the Great Recession. Not a happy prospect, given that my current income is entirely a combination of the full faith and credit of the U.S. government, which we've entrusted to the guy who wants to fire the head of the Federal Reserve, and pension instruments completely invested in a financial system based on the stability of that government.

Breathe deep. Simplify. Maybe it will hold on until my sell-by date in the late 2030s or early 40s.

Nick Blood

I just wanted to add to the notion that the AI 2027 paper insists the results will be disastrous. It does have a "choose your own adventure" ending, and the happy version, I guess we'll call it, is a global AI-assisted coup, in China and elsewhere:

"The protests cascade into a magnificently orchestrated, bloodless, and drone-assisted coup followed by democratic elections. The superintelligences on both sides of the Pacific had been planning this for years. Similar events play out in other countries, and more generally, geopolitical conflicts seem to die down or get resolved in favor of the US. ... A new age dawns, one that is unimaginably amazing in almost every way but more familiar in some."

So maybe they're saying it's more like 50% disaster, 50% drone-CIA couping of the world, with us all obviously living happily ever after.

Tom Hall

This might sound pedantic but it's something I'm genuinely curious about. People are always pointing out that LLMs aren't 'intelligent'. I'm not an expert on LLMs but I have a good general understanding of how they work, so I understand the argument. But I have used an LLM to build a fully-functional software application by providing high-level requirements in natural language, just as any human software developer would. So if that is not a demonstration of intelligence, then do we conclude that writing software doesn't require intelligence?

Paul Snyder

In case anyone is interested, I find this a decent analysis from an economics standpoint, though I'm admittedly biased toward Richard J. Murphy's worldview, so YMMV 😑

I very much appreciate your efforts on this post and your ability to plain-speak this technology down for those of us who have backgrounds in other specialties.

Many thanks.

https://youtu.be/LRr0ItPYDaA?si=BzP_1VcB-j3iynMb

Gerben Wierda

LLMs are mislabeled. They are not language models; they are token models (a token is a word fragment, from a single character up to an entire word; the models work on a dictionary of 100k-200k of these) that produce text as a 'sequence of tokens', where the correctness of the text from a human perspective comes from constraining the randomness of the output. So we might call them Large Text-generating Models (LTM or LTGM). Constraining the randomness so that the output is linguistically correct is a lot easier than constraining it so that it is 'meaningfully correct'. And given that human intelligence relies heavily on shortcuts (the brain runs on roughly 20W, and speed matters, after all), we have all learned to see linguistically correct text and take it as a proxy for intelligence.
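The token-model idea can be made concrete with a toy sketch. Real models learn their subword vocabularies (e.g. via byte-pair encoding) with 100k-200k entries; the tiny vocabulary and greedy longest-match rule below are purely illustrative, not how any production tokenizer is actually trained:

```python
# Toy greedy longest-match subword tokenizer. The vocabulary is
# hypothetical; real LLM vocabularies are learned and far larger.
VOCAB = {"token", "tok", "iz", "ation", "en"}

def tokenize(text, vocab=VOCAB):
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest substring starting at i that is in the vocabulary.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("tokenization"))  # ['token', 'iz', 'ation']
```

The point of the sketch is just that the model's atoms are these fragments, not words or meanings: "tokenization" becomes three arbitrary pieces that carry no semantics on their own.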

Text and language are not the same thing. Human language is based on meaning (which, per Uncle Ludwig, is based on shared experiences). So the models do token statistics and produce text. Humans do meaning and produce text.

Both produce text, but even if the texts from both are 100% identical, the one from the machine has no real relation to meaning (understanding). The models do not understand anything, but they can (token-statistically) produce text that approximates (sometimes surprisingly well, sometimes poorly) what language would produce as text. One could say that the models come from the bottom up to produce text, while humans come from the top down.
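The "constrained randomness" mentioned above usually means temperature sampling: the model emits a score per token, and lowering the temperature concentrates probability on the highest-scoring tokens before one is drawn at random. A minimal stdlib sketch, with made-up scores standing in for a real model's output:

```python
import math
import random

def sample_next_token(logits, temperature=0.7, seed=None):
    """Sample one token index from raw model scores ("logits").

    Lower temperature sharpens the distribution toward the top-scoring
    token, i.e. it constrains the randomness of the output."""
    rng = random.Random(seed)
    scaled = [score / temperature for score in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical scores for three candidate tokens; near-zero temperature
# makes the choice almost deterministic (always index 0 here).
print(sample_next_token([5.0, 1.0, 0.2], temperature=0.01, seed=42))  # 0
```

Nothing in this loop involves meaning; "linguistic correctness" emerges only because the learned scores make fluent continuations far more probable than disfluent ones.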

The text produced by LTGMs becomes useful (meaningful) depending on how it can be used correctly (Uncle Ludwig, again).