Here's what I'm afraid of: I think some large-scale businesses are increasingly invulnerable to whether something "works" or not. Amazon doesn't really care if their algorithmized customer service doesn't work any more--they are passing beyond it. It's not just that they are now 'too big to fail' in terms of providing satisfaction to customers but that the profits, such as they are, come from operations that aren't about whether customers get what they want or can find what they want. Moreover, the larger financialized ownership in the current global economy doesn't depend on whether the assets they own produce annualized revenues that make them worth owning--they are rentiers who are bringing in so much capital that they have run out of places to park it. (No wonder some firms were parking tens or hundreds of millions in SVB: the world is running out of investments.)

In that environment, the security that AI *doesn't work* isn't much security, because the logic of the thing is to use it anyway, both to produce a kind of "innovation halo" and to fire more people. I think this is feeling more like Terry Gilliam's Brazil than anything else: a dystopia that doesn't have to work even at suppressing people, maximizing profit, etc.; so the cartoonishly stupid factual content of some AI will be beside the point. The people pushing it scarcely care: it is not about the problems it solves, it is about the rate of its adoption.

Indeed. Just a couple of weeks ago Microsoft boasted about how their algorithm is "usefully wrong". https://www.cnbc.com/2023/03/16/microsoft-justifies-ais-usefully-wrong-answers.html - If that isn't Orwellian doublespeak in the wild, then nothing is.

I was just reminded of your Amazon example above, with a less personal but no less silly experience. The diapers that we usually have shipped monthly via "subscribe and save" are out of stock. I clicked on "see backup products" expecting to see other diaper brands listed. Instead, Amazon is suggesting diaper wipes as a replacement for diapers.

A very well-reasoned critique of our era of magical thinking, whereby everything from curated education to global warming will be developed or solved by AI.

20 years ago, the exact same mythology was bestowed upon the tech bros of Silicon Valley...the visionaries who would lead us into a new Eden of innovation...enriching our lives and advancing humanity.

That worked out so well for us...

An epidemic of ignorance, wrapped in a diet of propaganda, combined with a rejection of critical thinking. A devolution of societal norms, a rapid decline in life expectancy, a plunging population, opioid addiction, poverty, climate catastrophes--I could go on and on.

Instead of engaging in critical introspection of our collective failures as a country, we're diving headlong into the next technological savior, with the exact same people at the helm.

It would be funny if it weren't so pathetically predictable.

A few blogs I read have commented on ChatGPT's tendency to hallucinate (one guy preferred "delusional", since it describes "belief" in an alternate reality). Does anyone know why the current AI chatbots have been programmed to make up shit out of whole cloth? Or even better, is that out of the control of the programmers? Moving these toys into doing real work requires some reliability. Programs (and let's not kid ourselves, that's all these pseudo-intelligent apps are) ought to produce reliable, consistent output. And don't tell me humans aren't reliable or consistent. Of course they're not, but the programs that replace them have to be trustworthy at least, and right now they're not.

Which brings up my second thought. These software programs will take over the world and destroy civilization only if we let them (or some Elmo of the future decides Skynet is a great idea). I get that this is the point of this post, and most of the people who would make that decision are highly motivated to focus on quarterly numbers rather than potential problems, but these things only take over if we, the users/product, let them.

"AI" is the last best hope for Silicon Valley to recapture the glory days of being golden gods astride the world. None of the other scams have a chance, and they are going to put everything they've got into selling these tulip bulbs to an investor class with more money than they know what to do with.

If you want to understand why hallucination happens in these models, I highly recommend Robert Miles's concise video on the topic: https://www.youtube.com/watch?v=w65p_IIp6JY

Thanks, very interesting. He leans into the detail that the models have no concept of truth or accuracy, so "dunno" isn't an option. This is starting to look like a self-driving car scenario.
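
To make the "dunno isn't an option" point concrete: at every step a language model just converts scores over its vocabulary into probabilities and emits a token. Here's a toy Python sketch of that step (the vocabulary and scores are invented for illustration; no real model is this small):

```python
import math
import random

# Toy next-token step. A real model assigns a score (logit) to every token
# in its vocabulary; softmax turns the scores into probabilities; sampling
# picks one. Nothing in this step checks truth or accuracy -- some token is
# always emitted, so a fluent wrong answer is always on the table.
vocab = ["Paris", "London", "Atlantis", "blue"]  # invented mini-vocabulary
logits = [3.1, 1.2, 0.8, -2.0]                   # invented scores for a prompt

exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

token = random.choices(vocab, weights=probs, k=1)[0]
print(token)  # usually "Paris", occasionally "Atlantis" -- never "I don't know"
```

Abstaining would itself have to be a token that training made probable, which is exactly the gap Miles describes.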

Yeah, very similar - I think Gary Marcus has made that very comparison.

Here's something I wrote to another pundit that applies here:

However, your AI article made me feel like I was in two different worlds. AI sounds great if your job is mapping proteins. But I was a master carpenter & contractor for 42 years, and I want to know how much concrete AI can pour. Maybe it can run machines that pour concrete, but there aren't machines like that. I'm constantly hearing about automation, but we are a long way from automating the building trades. Basically, we need C3PO from Star Wars. They don't have any yet. It will be a while until they do.

Next time you see some guys pouring concrete, run over there and ask if you can lift a full wheelbarrow off its feet. You won't believe how heavy four cubic feet of concrete is. You won't believe that you can roll it, but you can. So a good laborer can roll that barrow down a line of 2x8s (7 1/4” wide) laid over muddy ground to the forms and dump it exactly where it needs to go, in a manner that does not require a lot of screeding. No machine can do that. Then do it 30 times. That's hard work. They make powered wheelbarrows, but the requirements for using them make them impractical 85% of the time. They tried to make a bricklaying machine but failed. The situation we have is a machine that tells us what to do but can't do it. Sounds like an architect. Thinking is important, but it is a small part of the job. Sadly, we already have a bunch of thinkers; we need the doers.

This is where the “automation” deal falls apart. Automated humanoids are not available. I'm sure you could find a bunch of stuff they tried to automate but went back to humans on, because we're cheaper. Machines are expensive. Until recently, Silicon Valley had lots of money. They could do anything they wanted. Most businesses don't. They don't have unlimited funds to try to automate things. So often it's not practical to do it. We have a lot of thinking to do, and AI can help do that. But we also have a lot of concrete to pour, and AI can't do much there. So my guess is that AI will eliminate some jobs but not most. Just like any machine invented in human history. Maybe AI needs to learn humility. Its inventors don't have much.

Rush McAllister

St Louis, MO

It will make some 'information' jobs much faster. For instance, translation work will speed up considerably: a decent translator can now start with the output of a generative AI and then correct the errors. The problem is that it will also be used for fully automated translations that are so convincing the errors will build up. Another instance will be evil people finding weaknesses in open source code, and security people finding attacks. Productivity will rise in those areas.

I got an invite (don't ask me why) and have been using Google's Bard. I'm learning a lot:

"Marx's idea that the means of production are the basis of economic power was a new and radical idea in Plato's time, and it helped to shape Plato's own thinking about economics."

I also got a pretty thorough analysis of Virgil's impact on the imagery in the Iliad.

My curiosity about LLM is now satisfied.

Interesting. So: (1) it works as advertised, but at very large scale produces unexpected results, and (2) in the end it doesn't do what is expected of it. These seem to be at different levels to me. The latter is the basic question: 'does it work at all?'. The former is 'what is its effect?'. Or we could say of the former, 'what is its *meaning*?' (and, following Uncle Ludwig, that comes from the actual *use*).

The web is a mature example. When the web started out in the 1990s, the tech-optimists wrote stories about a world in which everybody would have the best information at their fingertips, and we could do away with edited media, representative democracy, etc. (I had a few public discussions with them at the time, pointing out some basic fallacies in their observations and predictions, but at the same time I totally underestimated the destructive effects of the advertisement-based attention economy.) Their predictions *were* ideas about what would happen 'at scale', though; they were just plainly wrong about what would happen (mostly simplistic, the way free speech absolutists are dangerously simplistic). Unexpected, but only unexpected because no attention was paid to it; really, the ideas were simplistic.

I've been promoting the use of 'pre-mortems' as a tool for not flying entirely blind. There the question is: "X years from now, it has become a disaster. Imagine what happened." That question actually helps in making more robust assessments, both of the results of technological success and of the results of technological failure. Of course, 'crazy' people (like Holmes, Musk) will have such deep convictions that such rational approaches are not really accepted.

I just asked ChatGPT to imagine how we might end up in a catastrophic scenario due to chatbots - after some initial reluctance to engage in any negative commentary about AI, I got a plausible story that ended with the following:

---

Again, I want to emphasize that this is a purely hypothetical and speculative scenario, and such a catastrophic outcome is highly unlikely. It is important to recognize that the development and deployment of artificial intelligence must be carefully managed and regulated to prevent unintended consequences and ensure that AI technology benefits human society in a safe and responsible manner.

---

It's a shame that nobody at OpenAI, Microsoft et al. has asked the LLM about the risks they are unleashing on us all with reckless abandon...

Thanks, had already watched it. An interesting review (a little alarmist in parts, but certainly full of genuine insight and legitimate concerns).

The part about the 'double exponential' is unnecessary and unrealistic (29-40), but they are right to warn about what the unchecked 'commercial race' will produce. Adam Smith already warned that entrepreneurs (called 'undertakers' in his book) have only a single drive (profit), which does not necessarily align with what is best for society. His example was them calling for protectionism, but the warning was generic (which is generally overlooked).

Yes, I thought the assumption of an exponential curve was a useful rhetorical device, but not actually congruent with the tech as it stands, which may in fact be plateauing rather than accelerating.

I 100% concur on the issue of unconstrained capitalist forces. There are many examples of reckless promises made by companies during the "Big Data" revolution - such as in medicine, policing, and politics - that have in fact either gone unfulfilled or been actively harmful. We need to harness the profit motive and direct it down productive channels of human endeavour - not unleash it and hope for the best. The externalities of these technologies must be reckoned with or we will simply be repeating the worst mistakes of Web 2.0 and the attention economy it has created.
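
On the exponential-vs-plateau point above: part of why the curve question is hard to settle from the inside is that the early samples of an exponential and of a logistic (plateauing) process look almost identical. A toy comparison in Python, with all constants invented for illustration:

```python
import math

def exponential(t, rate=0.5):
    # Unbounded growth.
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    # Same early growth rate, but saturating at `ceiling`.
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

# Both curves start at 1.0 and track each other for several steps;
# only later does the bend reveal which process you were watching.
for t in range(0, 16, 3):
    print(f"t={t:2d}  exponential={exponential(t):8.1f}  logistic={logistic(t):5.1f}")
```

Extrapolating from the steep early stretch is therefore rhetoric, not evidence; the same data are consistent with a ceiling just ahead.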

There has also been really good stuff. I would say that AlphaFold would be a good candidate for the Nobel Prize in Medicine. But that is also a good point of the seminar: there are good things, but focusing only on the good things is bad.

I am too old for this. I admit to being intimidated, confused, confounded and critically disillusioned about the future of this extremely UNNECESSARY AI development in our social and commercial world. We and everything possible will be treated as opportunities for profit in a world that is already suffering from major inequity among human populations. I, for one, am glad I will NOT see the eventual outcome. This is just plain NUTS!

There's this uneasy thought that keeps surfacing, like a bubble in a lava lamp, about the future of generative AI in our social systems. It's akin to asking a novice painter to recreate the Sistine Chapel – the ambition is there, but the skill set? Maybe not so much. The anxiety stems from a place of recognizing the chasm between aspiration and ability. It’s like we’re all aboard a high-speed train, sleek and modern, hurtling towards a future where AI is at the helm. But in the pit of our stomachs, there’s this gnawing question: Does the conductor actually know the tracks, or are we all just part of an elaborate experiment in digital bravado?

"Coded Bias" is a revealing documentary on facial recognition tech. It was on PBS in the USA. https://www.ajl.org/take-action#SPREAD-THE-WORD
