20 Comments

I’m worried that Tesla’s self-driving cars will become a reality in the next four years, not because they get any better, but because the government gives them a “get out of liability free” card.


This scares me. I have almost been hit by people in Teslas while walking my dog, and I wonder whether they are using the FSD package.

My colleague at work has a late-ish Model 3 with FSD, and even on the 1.8-mile drive to lunch he has to take the wheel multiple times, so I am more than a little terrified of it in the wild, and Musk getting the regulators to relax their scrutiny scares the hell out of me.


They won't because it breaks the "[insert unviable Musk tech of choice] will become a reality in FIVE years" paradigm. It's always five. We're safe from Tesla if it's only four.


I'll hand the microphone to Tom Scocca, magnificent as usual:

"Here is the all-conquering generative AI revolution. Where there used to be a specific useful tool, there's now a generically useless thing that vaguely and incompetently mimics the general shape of the old tool. Everything you do on a computer is getting AI interventions grafted into the interface, to prevent you from accomplishing whatever it was you used to be able to accomplish. It's like the mass decision by carmakers to put as many controls as possible onto touchscreens, so drivers have to constantly take their eyes off the road to see where their finger is going on the flat glass. But at least the screens were cheaper than the physical knobs and buttons they eliminated. The purpose of the AI is to make things more expensive - namely, the valuations of the AI companies.

No one who cared about the purpose of the *Washington Post* or the purpose of the *Washington Post* archive would have ever allowed the Ask The Post AI to be deployed. But the world has allowed the management of knowledge to be taken over by ignoramuses, and now the ignoramuses have built ignoramus machines in their own image, manufacturing non-knowledge on a scale previously unimaginable."

(https://www.indignity.net/the-washington-post-burns-its-own-archive/)

Of course, it isn't just the management of knowledge. In the USA, it's now the management of the entire country.


"It seems like Casey's instinct is to give the tech optimists a pass because (a) they're trying to build something.": Reputedly, so were the builders of the Tower of Babel.

Does this person Newton actually know a damn thing about how any of this stuff works? LLMs are ad hoc in the extreme. Here, for example, is something Stephen Wolfram, who's more favorably disposed toward this stuff than I am, wrote last year about the "attention heads" in "transformer" systems like ChatGPT:

"And, yes, we don’t know any particular reason why it’s a good idea to split up the embedding vector, or what the different parts of it 'mean'; this is just one of those things that’s been 'found to work'."

(https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/)

One of the many, many things that have been "found to work", kinda-sorta, of which there's little or no principled understanding - as Wolfram says, there's a large body of "lore", but there's hardly anything resembling theory, as a scientist would use the term "theory".
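(For anyone who wants to see concretely what "splitting up the embedding vector" refers to, here is a minimal NumPy sketch of the standard multi-head attention recipe Wolfram is describing. The dimensions, head count, and random weights are toy values of my own choosing, not GPT's actual sizes or anything from Wolfram's post.)

```python
# Minimal sketch of multi-head attention, showing the "split up the embedding
# vector" step. Sizes and weights are toy values, not GPT's real dimensions.
import numpy as np

d_model, n_heads, seq_len = 64, 8, 10
d_head = d_model // n_heads                      # each head gets one slice of the vector

rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_model))          # token embeddings
Wq, Wk, Wv = (rng.normal(scale=d_model**-0.5, size=(d_model, d_model)) for _ in range(3))

def split_heads(m):
    # (seq_len, d_model) -> (n_heads, seq_len, d_head): the split in question
    return m.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

q, k, v = split_heads(x @ Wq), split_heads(x @ Wk), split_heads(x @ Wv)

scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)                # per-head similarities
scores -= scores.max(-1, keepdims=True)                            # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)   # softmax over positions
out = (weights @ v).transpose(1, 0, 2).reshape(seq_len, d_model)   # heads re-merged
print(out.shape)                                                    # (10, 64)
```

As Wolfram says, nobody can tell you in any principled way why carving the vector into eight slices rather than some other arrangement is the right move; it was simply found to work.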

And no, that isn't just a pedantic quibble. To ask a question I've never seen asked in any discussion of this stuff, why the hell does it take zillions of training cases and obscene amounts of electrical power to make an LLM or similar system able to kinda-sorta hold a conversation, recognize a face, or drive a car? After all, humans continually learn to do all those things and much more with far less training data and far lower energy consumption. Anything claiming to be "artificial general intelligence" or some such should be at least as capable. That ChatGPT and its ilk are nowhere near as capable tells us, without further ado, that Altman et al.'s claims about them are utter bullshit.

Of course, the answer to the question I posed is that not only frauds like Altman but also the programmers who work for them don't actually know what they're doing. The "transformer" architecture amounts to "one weird trick", but human intelligence is a whole bag of tricks, which evolved over millions of years and mostly aren't well understood yet. A journey of a thousand miles may begin with a single step, but proclaiming "We're almost there!" after you've gone a mile indicates you're an idiot, a liar, or both. As an article published less than three months ago - regarding the tendency of LLMs to become *less*, not more, reliable with increasing size - summed up:

"These findings highlight the need for a fundamental shift in the design and development of general-purpose artificial intelligence, particularly in high-stakes areas for which a predictable distribution of errors is paramount."

(https://www.nature.com/articles/s41586-024-07930-y)


Bravo. This needed to be said, and it is elegantly said.


Dave, your insights resonate deeply. As an educator working closely with emerging technologies, I see both the potential and the pitfalls of AI in real-world applications. The failure modes you describe highlight the need for skepticism and accountability, especially in systems like Shotspotter. Tools that don't work as intended often amplify existing societal inequities.

I recently shared "Navigating the AI Frontier" by the WEF, which emphasizes the critical role of governance in these technologies. It’s not just about innovation—it’s about sustainable, ethical integration into our systems. Your critique reminds me why we must question not just the technology but the intentions behind its deployment. Thank you for pushing the conversation forward.


Worth mentioning Henry Farrell's thoughts about the likely uses of AI (specifically LLMs), which do a good job of acknowledging and thinking through the technology's actual likely functionality: https://www.programmablemutter.com/p/the-management-singularity

"... If you have worked for such an organization, you will know that they rely extensively on written material. They spend a lot of time and resources on organizing and manipulating this information. Past a certain organizational size, this is really hard to do well. No individual can actually know what the organization knows as a whole - there is far, far too much knowledge, and it is far too badly organized. Hence, large organizations devote a lot of human and organizational resources to gathering information, mixing it together with other kinds of information, sharing it with people who need to have it, summarizing it for those who don’t have time to read it all, reconciling different summaries and summarizing them in turn, figuring out ex post that some crucial bit of information has been left out and adding it back in or finding a tolerable proxy, or, worse, not figuring it out and having to improvise hastily on the spot. And so on.

LLMs provide big organizations with a brand new toolkit for organizing and manipulating information. It is far from a perfect toolkit: actually existing LLMs work best where you are prepared to tolerate a certain amount of slop, tend to bland out things so that the interesting weirdnesses disappear etc. But there is a lot that it can do, and there are a few applications (discussed below), where they will work very well indeed."


There's a word that has to be dealt with regarding most tech magic, whether self-driving cars or "AI" however you define it, and that word is "liability". When tech magic kills people, or imprisons or libels them, it has to be dealt with in our legal system. Look for pre-emptive legislation putting limits around their liability before these things are released to the general public.


The one thing I don't understand (and I do see this mentioned once in a while): what happens when Altman gets all of his money and the AI has digested as much text as it can? For shits and giggles, let's say ALL of it somehow. And in the meantime, it has not paid any content creators any money for using their work. And they have perfected it to spit out its bland but mostly right answers. What happens when mostly the only content it is consuming is content the AI has created (shows, movies, books, help docs, Wikipedia summaries, whatever)? Then what? Am I supposed to believe it will be creating content that is indistinguishable from human-created? And let's say it does. Then what?

Personally, I am at the old-man-get-off-my-lawn point of my life, and the more tech advances, the more tech-averse I become. And I can't be the only one. Neo-Luddism, here I come. The coolest thing I have seen AI do so far is Halloween yard-decoration skeletons that automatically recognize the costumes of people walking by and make appropriate comments about them. Wow :-/ I don't think any of us would have predicted what the killer application for the internet (porn), social media (propaganda and false information), or phones (relationship replacement) would turn out to be.

I mean, you are right. Somehow, they have to justify the vast amounts of money being spent. But for regular joes like me it is like watching physicists argue about dark matter: none of it matters. For all of Microsoft's money spent on this, I still basically use basic Outlook and .xls and .doc with a smattering of .ppt at work, as I have for the last 40 years. And that is MOST people in the world who use it. All of it just feels so bubbly, just like the past did for those of us who have experienced this multiple times. But what do I know? I am admittedly an idiot. But just like politicians, I don't think tech bros have any clue what a normal person deals with every day and how much we don't care about any of this. I mean, everyone says this, but REALLY, isn't it just Clippy? Isn't it just organizing and regurgitating all the support and product documentation that nobody wants to organize or write? Is it telling that after MS's last big advancement, MS 365, I still open all my files on the desktop? And their last real big advancement is Game Pass? Has Google EVER REALLY moved past Search, Ads, and maybe Gmail and Photos? Be honest. I dunno. Call me a skeptic. This all just feels like more tech BS.


One thing that happens when Sam gets all that money is that the energy required alone will — even if it is all carbon-free — put enough heat into the atmosphere that the warming is equivalent to what we now get from CO2/CH4-driven greenhouse effects.


Good point. I left that out. The potential environmental damage is ALWAYS conveniently glossed over. So NOW we can try nuclear power again? To power super Clippy? Geez.


And the numbers are: direct heat into the atmosphere from our energy production is, at this point, roughly 1% of the problem that carbon greenhouse is. But increase the use a hundredfold, and even if everything added is fusion/fission/solar, the atmospheric warming from that production is equivalent to carbon effects now. Of course, that is the 'good' scenario. The bad one is that that growth will effectively come from more greenhouse-inducing energy production. Sam may want to 'solve all physics', but physics tells us his physics will be a problem, not a solution.
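(For the curious, here is the back-of-the-envelope version of that 1% figure. The inputs are round published estimates I'm assuming for the sake of the sketch — roughly 19 TW of global primary energy use, the Earth's ~5.1e14 m² surface, and ~2.7 W/m² of total anthropogenic greenhouse forcing — not numbers from the comment above.)

```python
# Back-of-the-envelope check: direct waste heat is roughly 1% of the greenhouse
# problem today, so a ~100x increase in energy use would rival it.
# Inputs are round published figures assumed for this sketch.
EARTH_SURFACE_M2 = 5.1e14        # Earth's surface area
PRIMARY_ENERGY_W = 19e12         # ~19 TW global primary energy use; nearly all ends up as heat
GREENHOUSE_FORCING_W_M2 = 2.7    # rough total anthropogenic greenhouse forcing

waste_heat = PRIMARY_ENERGY_W / EARTH_SURFACE_M2             # ~0.037 W/m^2 today
print(f"today: {waste_heat:.3f} W/m^2 = "
      f"{waste_heat / GREENHOUSE_FORCING_W_M2:.1%} of greenhouse forcing")
print(f"at 100x: {100 * waste_heat:.1f} W/m^2 vs {GREENHOUSE_FORCING_W_M2} W/m^2 greenhouse")
```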


I guess like the ouroboros I alluded to above, we just ask the AI to solve this calamity ¯\_(ツ)_/¯


Which is like what Ilya Sutskever answered when asked how LLMs would become ASI: you simply add to the prompt: "You are a superintelligence". I'm not kidding: https://ea.rna.nl/2023/12/15/what-makes-ilya-sutskever-believe-that-superhuman-ai-is-a-natural-extension-of-large-language-models/


"What happens when mostly the only content it is consuming is content the AI has created (shows, movies, books, help docs, Wikipedia summaries, whatever). Then what? Am I supposed to believe it will be creating content that is indistinguishable from human created."

It won't. Or, more specifically, it won't be able to create much of the kind of content humans are able to create, even if everything it does create appears human. And that's because GenAI requires prior human work for it to copy, and is incapable of the kind of creativity that transcends what came before (e.g. new styles of music and visual art). This is absolutely foundational to the technology; anyone who tells you otherwise doesn't understand how it works. When the AI people talk about "training" AI, they're referring to a process by which it practices mimicking what it's seen until it's able to produce output that looks like its input.

Sometimes an AI enthusiast will respond with "well, people also mimic what they've seen". And that's true, people often do this. Lord knows the world had plenty of derivative art before "Stable Diffusion" came around. But we don't always mimic, otherwise today's art would still be cave drawings. The great variety in human creativity - in the arts, science, technology, language, philosophy, sport, food, etc - is itself a demonstration of what we can do that GenAI cannot. The products of human creativity are so ever-present that we take this for granted. Everything around us: humans created it. The best GenAI can do is make realistic copies of our creations.
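(A toy way to see why "training on your own output" goes nowhere, a sketch of my own rather than a description of any real system: fit a simple model to some data, sample a new "corpus" from the fit, refit on the samples, and repeat. The sample size and generation count below are arbitrary.)

```python
# Toy picture of a model trained mostly on its own output: fit a Gaussian to
# data, sample a new "corpus" from the fit, refit, repeat. The fitted spread
# withers away over the generations -- a crude statistical analogue of
# recursive training on AI-generated content.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)    # the original "human-made" data, std ~ 1

for generation in range(1, 501):
    mu, sigma = data.mean(), data.std()           # "train" on the current corpus
    data = rng.normal(mu, sigma, size=50)         # next corpus is pure model output
    if generation % 100 == 0:
        print(f"generation {generation}: fitted std = {sigma:.4f}")
```

Run it and the fitted standard deviation falls toward zero within a few hundred generations: the "corpus" converges on increasingly bland copies of itself, with nothing new ever entering the loop.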


Very good. Really very good.

Personally, my follow-up would be: why aren't the 'realistic critics' making a dent in the hype? What does that suggest to us about *human* intelligence?


“Load-bearing predictions” - excuse me while I add that to my idiolect.


AI is a discovery, not an invention. This had information about the people who discovered AI and who we should believe, but it did not have much info on AI itself.


Great essay. I have a glib contribution: if "AI will solve all of physics" gets people to fork over money, those people don't deserve their money. The phrase sounds cool but means nothing, unless Altman's claiming that GPT-7 is gonna summon Laplace's Demon.
