22 Comments

Here's what I'm afraid of: I think some large-scale businesses are increasingly invulnerable to whether something "works" or not. Amazon doesn't really care if their algorithmized customer service doesn't work any more--they are passing beyond it. It's not just that they are now 'too big to fail' in terms of providing satisfaction to customers, but that the profits, such as they are, come from operations that aren't about whether customers can get or find what they want. Moreover, the larger financialized ownership in the current global economy doesn't depend on whether the assets they own produce annualized revenues that make them worth owning--they are rentiers who are bringing in so much capital that they have run out of places to park it. (No wonder some firms were parking tens or hundreds of millions in SVB: the world is running out of investments.)

In that environment, the security that AI *doesn't work* isn't much security, because the logic of the thing is to use it anyway, both to produce a kind of "innovation halo" and to fire more people. I think this is feeling more like Terry Gilliam's Brazil than anything else: a dystopia that doesn't have to work even in suppressing people, maximizing profit, etc.; so the cartoonishly stupid factual content of some AI will be beside the point. The people pushing it scarcely care: it is not about the problems it solves, it is about the rate of its adoption.

I was just reminded of your Amazon example above, with a less personal but no less silly experience. The diapers that we usually have shipped monthly via "subscribe and save" are out of stock. I clicked on "see backup products" expecting to see other diaper brands listed. Instead Amazon is suggesting diaper wipes, as a replacement for diapers.

A very well-reasoned critique of our era of magical thinking, whereby everything from curated education to global warming will be developed or solved by AI.

Twenty years ago, the same mythology was bestowed upon the tech bros of Silicon Valley...the visionaries who would lead us into a new Eden of innovation...enriching our lives and advancing humanity.

That worked out so well for us...

An epidemic of ignorance, wrapped in a diet of propaganda, combined with a rejection of critical thinking. A devolution of societal norms, a rapid decline in life expectancy, a plunging population, opioid addiction, poverty, climate catastrophes, I could go on and on.

Instead of engaging in critical introspection of our collective failures as a country, we're diving headlong into the next technological savior, with the exact same people at the helm.

It would be funny, if it wasn't so pathetically predictable.

A few blogs I read have commented on ChatGPT's tendency to hallucinate (one guy preferred "delusional," since it describes "belief" in an alternate reality). Does anyone know why the current AI chatbots have been programmed to make up shit out of whole cloth? Or even better, is that out of the control of the programmers? Moving these toys into doing real work requires some reliability. Programs (and let's not kid ourselves, that's all these pseudo-intelligent apps are) ought to produce reliable, consistent output. And don't tell me humans aren't reliable or consistent. Of course they're not, but the programs that replace them have to be trustworthy at least, and right now they're not.
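For what it's worth, the making-up isn't really "programmed in" so much as baked into how these models generate text: they pick each next word by how *plausible* it is given the words so far, and nothing in that step checks facts. A toy sketch (not any real chatbot's code; the candidate words and scores are invented for illustration):

```python
# Toy illustration of why language models "hallucinate": the next token is
# sampled by plausibility, not truth. All scores here are made up.
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical model scores for completing "The capital of Australia is ..."
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [2.1, 1.9, 0.5]  # "Sydney" scores highest: plausible, but wrong

probs = softmax(logits)
for token, p in zip(candidates, probs):
    print(f"{token}: {p:.2f}")

# The sampling step just rolls weighted dice -- no fact-checking anywhere.
choice = random.choices(candidates, weights=probs, k=1)[0]
```

In this (invented) example the fluent-but-false answer is the single most likely one, so even "picking the best guess" produces a confident error. That's why reliability is so hard to bolt on after the fact.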

Which brings up my second thought. These software programs will take over the world and destroy civilization only if we let them (or some Elmo of the future decides Skynet is a great idea). I get that this is the point of this post, and most of the people who would make that decision are highly motivated to focus on quarterly numbers rather than potential problems, but these things only take over if we, the users/product, let them.

"AI" is the last best hope for Silicon Valley to recapture the glory days of being golden gods astride the world. None of the other scams have a chance, and they are going to put everything they've got into selling these tulip bulbs to an investor class with more money than they know what to do with.

I got an invite (don't ask me why) and have been using Google's Bard. I'm learning a lot:

"Marx's idea that the means of production are the basis of economic power was a new and radical idea in Plato's time, and it helped to shape Plato's own thinking about economics."

I also got a pretty thorough analysis of Virgil's impact on the imagery in the Iliad.

My curiosity about LLM is now satisfied.

Interesting. So: (1) it works as advertised, but at very large scale produces unexpected results, and (2) in the end it doesn't do what is expected of it. These seem to be at different levels to me. The latter is the basic question: 'does it work at all?'. The former is 'what is its effect?'. Or we could say of the former, 'what is its *meaning*?' (and following Uncle Ludwig, that comes from the actual *use*).

The web is a mature example. When the web started out in the 1990s, the tech-optimists wrote stories about a world in which everybody would have the best information at their fingertips, and we could do away with edited media, representative democracy, etc. (I had a few public discussions with them at the time, pointing out some basic fallacies in their observations and predictions, but at the same time I totally underestimated the destructive effects of the advertisement-based attention economy.) Their predictions *were* ideas about what would happen 'at scale', though; they were just plainly wrong (mostly simplistic, like free speech absolutists are dangerously simplistic) about what would happen. Unexpected, but only unexpected because no attention was paid to it; really, the ideas were simplistic.

I've been promoting the use of 'pre-mortems' as a tool not to fly entirely blind. There the question is, "X years from now, it has become a disaster. Imagine what happened." That question actually helps in making more robust assessments, both for the results of technological success and the results of technological failure. Of course, 'crazy' people (like Holmes, Musk) will have such deep convictions that such rational approaches are not really accepted.

Here's something I wrote to another pundit, but it applies here:

However, your AI article made me feel like I was in two different worlds. AI sounds great if your job is mapping proteins. However, I was a master carpenter & contractor for 42 years, and I'd like to know how much concrete AI can pour. Maybe it could run machines that pour concrete, but there aren't machines like that. I'm constantly hearing about automation, but we are a long way from automating the building trades. Basically, we need C-3PO from Star Wars. They don't have any yet. It will be a while until they do.

Next time you see some guys pouring concrete, run over there and ask if you can lift a full wheelbarrow off its feet. You won't believe how heavy four cubic feet of concrete is. You won't believe that you can roll it, but you can. So a good laborer can roll that barrow down a line of 2x8s (7 1/4" wide) laid over muddy ground to the forms and dump it exactly where it needs to go, in a manner that does not require a lot of screeding. No machine can do that. Then do it 30 times. That's hard work. They make powered wheelbarrows, but the requirements for using them make them impractical 85% of the time. They tried to make a bricklaying machine but failed. The situation we have is a machine that can tell us what to do but can't do it. Sounds like an architect. Thinking is important, but it is a small part of the job. Sadly, we already have a bunch of thinkers; we need the doers.

This is where the "automation" deal falls apart. Automated humanoids are not available. I'm sure you could find a bunch of stuff that they tried to automate but went back to humans on because we're cheaper. Machines are expensive. Until recently, Silicon Valley had lots of money. They could do anything they wanted. Most businesses don't. They don't have unlimited funds to try to automate things. So often, it's not practical to do it. We have a lot of thinking to do, and AI can help do that. But we also have a lot of concrete to pour, and AI can't do much there. So my guess is that AI will eliminate some jobs but not most. Just like any machine invented in human history. Maybe AI needs to learn humility. Its inventors don't have much.

Rush McAllister

St Louis, MO

I am too old for this. I admit to being intimidated, confused, confounded and critically disillusioned about the future of this extremely UNNECESSARY AI development in our social and commercial world. We and everything possible will be treated as possibilities for profit in a world that is already suffering from major inequity among human populations. I, for one, am glad I will NOT see the eventual outcome. This is just plain NUTS!

There's this uneasy thought that keeps surfacing, like a bubble in a lava lamp, about the future of generative AI in our social systems. It's akin to asking a novice painter to recreate the Sistine Chapel – the ambition is there, but the skill set? Maybe not so much. The anxiety stems from a place of recognizing the chasm between aspiration and ability. It’s like we’re all aboard a high-speed train, sleek and modern, hurtling towards a future where AI is at the helm. But in the pit of our stomachs, there’s this gnawing question: Does the conductor actually know the tracks, or are we all just part of an elaborate experiment in digital bravado?

"Coded Bias" is a revealing documentary on facial recognition tech. It was on PBS in the USA. https://www.ajl.org/take-action#SPREAD-THE-WORD
