42 Comments
Jun 13 · Liked by Dave Karpf

Thank you. I made all of these points inside a Fortune 5 “AI” tech company when they put me in charge of figuring out how to position all of this to enterprise customers. I kept saying, “I am struggling with this because it’s not new; it’s just delivering on what we told customers we were doing 10 years ago, but now it costs more and has even more hallucination risk.” They RIF’ed me and said it was to help me find a role that “I’d enjoy more” - the companies do NOT want to hear the truth that they are building a better Clippy: ML that is slightly more performant at a higher energy cost.


I got laid off for what I suspect are similar reasons. Upper management, even the CIO, were tech-ignorant and seemed to be buying into the hype. It amuses me to think of them trying to replace my work with an LLM and discovering that an LLM is the wrong tech for business process automation and document management. At best it can iterate through thousands of documents and give a mostly accurate synopsis of each one, but at very great expense.

There was a department-wide mandate to 'bring cost savings by use of AI', and I tried to find one, I really did, but mostly it was cheaper and more efficient, and more importantly more reliable, to use existing tools. At best I could have put an LLM chatbot on the webforms and then prayed it wouldn't 'hallucinate' false info like the Air Canada chatbot did.

Jun 13 · Liked by Dave Karpf

I’m sorry that happened to you too. It’s cold comfort knowing all these places will learn the hard way, but it doesn’t exactly help when you are filing for unemployment because you are in the middle of a bosses-strike-back-at-workers moment - aimed especially at whistleblowers/naysayers.


In about 1966 I was employed to do market research on the need for future large commercial integrated management information systems. I suggested that what was needed was, in effect, an electronic clerk interface that would work symbiotically with the human members (management and other staff) of the organisation. Top requirements were a common, easy-to-use language (less ambiguous than informal natural language), 100% transparency and self-documentation, mutual understanding, and efficient use of computer resources. I proposed an unconventional language called CODIL (Context Dependent Information Language), and my idea was backed by two pioneers of UK commercial computing, John Pinkerton and David Caminer. However, following the company merger that created ICL, I was made redundant because ICL wanted to develop a new, conventional, more powerful "black box" system.

I moved to a technological university to follow up the idea, and when I demonstrated that CODIL (a language developed to, for example, help clerks process complex commercial sales contracts) could also solve the brain-twister problems published weekly in the New Scientist, I found AI-oriented papers were rejected because they didn't parrot the overhyped AI paradigm fashionable at the time. When I demonstrated (in MicroCODIL - see reviews) that the basic algorithms were simple enough to work on a small school computer, I was made redundant again, because everyone knew that the purpose of AI was to develop computationally heavy algorithms which looked clever and attracted huge grants for the biggest, fastest computer ever.

Now in retirement I am re-examining the project archives, and it seems that CODIL accidentally reverse-engineered how the brain's short-term memory works. Further details have been (and will be) posted on https://codil-language.blogspot.com if you are interested.

author

This is fascinating.


If only we could lay off the money-grubbing grifters instead of the people who make life awkward for them by pointing out that the cheap, efficient solutions are better than the expensive, profitable ones.

Jun 13 · Liked by Dave Karpf

I like to refer to LLMs in particular as "advanced autocomplete". The transformer model was genuinely innovative! But it's still autocomplete, which we've had for decades now.

Jun 13 · Liked by Dave Karpf

One of my favorite facts I have learned during this latest hype cycle is that the coinage of the term "AI" was a branding move by John McCarthy, made primarily so that his 1956 Dartmouth workshop would not fall under the umbrella of cybernetics and the organizers wouldn't have to deal with Norbert Wiener.


I can understand that. I appreciate some of Wiener's work, but what I've read about him suggests he was eccentric to the point of exasperating.

Jun 13 · edited Jun 13 · Liked by Dave Karpf

Great points.

The only thing I'd add is that this incremental advance on existing technologies is built on IP theft at massive scale. The hucksters in charge of these machine-learning companies themselves describe the lawsuits from the NY Times and big-name authors as an 'existential threat', and if the courts in Europe or the US find in favor of the copyright holders, the whole thing goes up in a puff of smoke.


Machine learning involves the application of "neural networks" to massive data sets. And despite the name, neural networks have nothing to do with neurones. They are just a complicated and non-linear version of discriminant analysis, invented nearly 100 years ago. Fisher used it to assign plants to different species based on some measures such as sepal width. The "training set" contains plants where both the measurements and the species are known.

LLMs take strings of text and predict what text will come next. That involves massively greater training sets, and much more complex outputs, but the underlying principle is the same.
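A minimal sketch of that setup, assuming Python and scikit-learn (which bundles Fisher's 1936 iris measurements); nothing here is specific to LLMs, it just shows how little machinery the basic "training set" idea needs:

```python
# Fisher-style discriminant analysis: measurements plus known species in,
# predicted species out. scikit-learn ships Fisher's iris data.
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)               # sepal/petal measurements, known species
model = LinearDiscriminantAnalysis().fit(X, y)  # the "training set" step

# Assign a new plant to a species from its four measurements alone.
print(model.predict([[5.1, 3.5, 1.4, 0.2]]))    # e.g. -> [0] (Iris setosa)
```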

Here's an article which gives the history before making some of the observations I've offered (though with a less critical tone).

https://www.sciencedirect.com/science/article/pii/S0047259X24000484

author

Very nice, thanks!


If you take any very large quantity of non-random data and use a powerful enough statistical algorithm, you are likely to find some predictable patterns. But these patterns do not automatically provide an explanation of how they were generated.

The Greeks, over 2000 years ago, discovered that the apparent movement of the planets through the Zodiac could be modelled, to an acceptable degree of accuracy, using epicycles. We now know that with enough epicycles (given a powerful enough computer) it is possible to model almost any pattern - but the epicycle model (however far it is taken) will never tell us that the planets move round the sun.

I suggest that large language models (while undoubtedly very powerful and useful tools) succeed in modelling "intelligence" for the same reason that epicycles model the apparent movement of the planets. There are clearly patterns (including a lot of repetition) in any vast (terabyte) collection of cultural information, and with enough computer power and suitable algorithms at least some of these patterns can be identified and used to make feasible-looking predictions. However, I suspect that using the Turing Test "black box" model to assess them tells us very little about the underlying "intelligence" which generated the original patterns, as it is quite clear that the LLMs do not UNDERSTAND the information they are processing.
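One way to see the point concretely (a toy sketch, assuming only Python and NumPy): fit a pattern that was obviously not generated by circles with ever more "epicycles" (sinusoidal terms) via least squares. The fit keeps improving, but it never says anything about what actually produced the pattern.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 500)
signal = np.sign(np.sin(3 * t))   # a square-ish wave: not generated by circles

def epicycle_design(t, n_terms):
    # constant column plus cos(k t) and sin(k t) columns for k = 1..n_terms
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.column_stack(cols)

for n in (3, 30, 300):
    A = epicycle_design(t, n)
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    rms = np.sqrt(np.mean((A @ coef - signal) ** 2))
    print(f"{n:3d} epicycles -> RMS error {rms:.3f}")  # error shrinks; no explanation appears
```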


This was the main criticism made of discriminant analysis back in the 1970s, by McFadden and others, looking at choice of transport mode. They wanted an explicit model of choice, not a black box prediction.

Most of the problems of "machine learning" were discovered in the early days of statistical modelling, but have been forgotten.


Dr Emily Bender’s testimony is spot on, IMO. She’s the person who taught me that AI is a 1950s marketing effort.

Still searching for the video. Here’s written testimony.

https://democrats-science.house.gov/imo/media/doc/Dr.%20Bender%20-%20Testimony.pdf

author

Bender's work is categorically excellent, agreed.


Dr. Emily Bender is an amazing starting-off point for unwrapping the hype machine. Now knowing her name, and now knowing that MYSTERY AI HYPE THEATRE 3000 exists:

(here: https://www.dair-institute.org/maiht3k/ )

this has been my Sunday morning's unexpected moment of excellence.

Thank you for introducing me to this woman.

Kudos, Kathy E Gill, and as always thank you, Dave Karpf, for encapsulating so much in your Substack. You are my Coles Notes, my starting-off point for breaking developments in all things tech.


You’re welcome! Yes Dr Bender is awesome!


Thanks for a very useful reference. In the 1960s I was asked to look at how to interface future IMIS (Integrated Management Information Systems) with human users, and I proposed an electronic clerk interface which used a syntactically simple language, CODIL (using the humans' terminology), which would work symbiotically as a member of a human work team. The key requirements (which the latest AI developments fail to meet) were TOTAL transparency and full human-readable self-documentation. In effect, nothing of the shared task should be hidden in a "black box." I was twice made redundant because the idea did not fit the then-fashionable AI paradigm, and I am now reassessing the project archives. It seems that CODIL can be considered an accidentally reverse-engineered model of how the human brain's short-term memory processes information, and that much of our intelligence depends on our ability to share cultural knowledge (i.e. we are good copycats) - our intelligence is built (in Sir Isaac Newton's words) on the shoulders of giants.


Thanks for making the point that these heralded and partially extant tools do not actually work.


Total side note, but I noticed an eerie similarity between your header image and one I generated with Stable Diffusion for one of my own articles last year: https://allscience.substack.com/p/ai-head-to-head-gpt-35-vs-gpt-4

I have long since lost my exact prompt details, but it's interesting to see how easily it defaults to recycled archetypes/stereotypes!

author

iiiiiiinteresting.


Yep! As GenAI scales, I wonder if we’ll see a wave of “accidental plagiarism” incidents as people create what they think is original work via bespoke prompts, while the algorithm takes shortcuts or defaults to a median result per its training data. Also curious what impact we’ll see from more and more of the internet (and thus AI training data) becoming itself AI-generated, in an infinite recursion loop.

Jun 13 · Liked by Dave Karpf

As best I understand, an LLM (what we have today) is based on a massive data set that has connections assigned for predictive relationships (what cynics call autocomplete). Based on Altman's call for trillions for more processing power and data collection, the hope seems to be that all those connections will form a network that will somehow spontaneously develop "real" "AI", like a human brain. Will it work? Who can say. Odds are we wind up with a supercomputer that tells us the answer is "42".
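For what it's worth, the "autocomplete" framing can be made concrete with a toy next-word predictor (a sketch in Python; real LLMs learn billions of weights over long contexts rather than raw counts over word pairs, but the objective, predicting the next token, is the same):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Connections assigned for predictive relationships": count what follows what.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def autocomplete(word):
    # Predict the most common continuation seen in the training data.
    return following[word].most_common(1)[0][0]

print(autocomplete("the"))  # -> "cat" (ties resolved by first occurrence)
```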


It's worth observing that the change in meaning of "algorithm" from "reliable mathematical procedure to solve a given problem" to "statistical model with unknown properties" has helped the credulous acceptance of "machine learning".


I really like the idea of making a distinction between something genuinely new and something substantive but rebranded. It's a useful way to view this critically.

Perhaps one potential criticism is that something can be imminently world-altering because of the financial might of those backing it. You write (excellently) a lot about the power concentrated in Silicon Valley. In other words, even bullshit generators can be imminently world-altering in a world run on futurism by a handful of cashed-up ideologues and nutters.

Perhaps too, a broader consideration of potential and actual use cases might in turn broaden your view on the novelty and transformative potential of this tech. Even if we take "bullshit generator" as a given, a business consultant and an artist are going to see those as two profoundly different things. For example, in abstract/experimental artistic ontologies where accuracy has limited to no meaning, it's difficult to parse the concept of "bullshit". I find those use cases interesting to think about, even if they are fringe.

author

thanks, and agreed. I wrote a piece a year or so ago about generative AI as satisficing tools. I still think there's a lot to that. There are significant uses for tools like these, so long as they're constrained to what they're good for.

https://davekarpf.substack.com/p/on-generative-ai-and-satisficing


Thanks for the link and reply; that was a good read too, and a good complementary piece to this one, I thought. Satisficing strongly reminds me of an idea from a Peter Watts novel: "There's no such thing as survival of the fittest. Survival of the most adequate, maybe. It doesn't matter whether a solution's optimal. All that matters is whether it beats the alternative."


> Are LLMs a genuinely new phenomenon, like the steam engine, or are they a significant incremental advance like, say, broadband-speed internet.

I think the answer is *both*, on different timelines. There are a lot of reasons the conversation around LLMs is so confusing and contentious; one is that people are talking about different time scales. The applications we can actually put our hands on today range from snake oil to significant incremental advance. However, the core capabilities of LLMs will continue to advance. More importantly, it's still very, very early days in figuring out how to make good use of these models. How to get information in and out of them ("retrieval-augmented generation", tool use, etc.), which application domains they're best suited for, redesigning user interfaces around chat instead of mouse clicks, etc.
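Here's a sketch of what one piece of that looks like: the "retrieval" half of retrieval-augmented generation, with a toy hashed bag-of-words embedding standing in for a real embedding model and the final LLM call left out. Everything here (the names, the toy embedding, the prompt format) is illustrative, not any particular library's API.

```python
import numpy as np

VOCAB = 512  # toy bag-of-words dimensionality

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a real embedding model: hashed bag of words.
    vec = np.zeros(VOCAB)
    for token in text.lower().split():
        vec[hash(token) % VOCAB] += 1.0
    return vec

def retrieve(question: str, documents: list[str]) -> str:
    # Score each document by cosine similarity to the question, return the best match.
    q = embed(question)
    scores = [
        float(q @ embed(d) / (np.linalg.norm(q) * np.linalg.norm(embed(d)) + 1e-9))
        for d in documents
    ]
    return documents[int(np.argmax(scores))]

def rag_prompt(question: str, documents: list[str]) -> str:
    # "Augment": stuff the retrieved text into the prompt an LLM would receive.
    context = retrieve(question, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = ["Refunds are issued within 30 days.", "Shipping takes 5 business days."]
print(rag_prompt("How long do refunds take?", docs))
```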


The term 'general purpose technology' made me think: no, it is not, but it is a specific approximation technology that is more 'general data'-driven, and the results suggest it is 'general-purpose'.


I can think of lots of ways that AI can be put to use in science and engineering. Google DeepMind's protein-folding project (AlphaFold) was a good example. But an LLM is not the right AI tool for any of them. The current wave of LLMs does not impress me as anything I would use for anything that was really important.


Exactly. As a biologist and a "data scientist", I use machine-learning techniques myself (although I don't call them "AI", in view of all the nonsense associated with that term these days), but I consider LLMs just Stupid Computer Tricks (with apologies to David Letterman).

Jun 15 · edited Jun 15

"Andohbytheway, it isn't so clear that it actually works this time either.": It mostly doesn't, a fact that should be emphasized in all discussions of "AI" but typically isn't even mentioned. See, for example, "The fallacy of AI functionality":

https://dl.acm.org/doi/abs/10.1145/3531146.3533158

"Deployed AI systems often do not work. They can be constructed haphazardly, deployed indiscriminately, and promoted deceptively. However, despite this reality, scholars, the press, and policymakers pay too little attention to functionality. This leads to technical and policy solutions focused on 'ethical' or value-aligned deployments, often skipping over the prior question of whether a given system functions, or provides any benefits at all."

When I worked at the Swedish Institute of Computer Science during the 1990s, the institute was divided into three departments or "labs". One of them was called the "Knowledge-Based Systems" lab. Members of KBS lab worked on what some people called "AI" (this was the "expert system" era), but at least some members of KBS lab avoided the term "AI", which had been brought into disrepute by previous iterations of the "AI" hype cycle. Unlike clowns like Sam Altman, they were genuine, honest scientists.

In my work as a biologist and a "data scientist"*, I've used "machine learning", which is a term of convenience for certain kinds of statistical modeling (e.g., support-vector machines, gradient-boosted regression trees, and, yes, various flavors of artificial neural network). It isn't intelligent, not even close, if by "intelligent" we mean exhibiting anything like the breadth, flexibility, or reliability of reasoning by even a rather stupid human. And no, it isn't just a matter of expanding the training data. One clue to the contrary is that humans learn to recognize faces, generate grammatical sentences, etc. on the basis of far smaller data sets than state-of-the-art machine-learning systems require.

Moreover, this isn't surprising to anybody who knows even a little about evolution or how brains work. Intelligence isn't "one weird trick". It's a whole bag of tricks, which evolved over a vast span of time (e.g., the divergence time between humans and chimps is circa seven million years) and many of which aren't well understood yet, let alone replicated with computers. I know of no good reason to doubt artificial intelligence is possible but several good reasons to doubt it will happen within the next 20 years.

*Another widely used but cringe-inducing term. Which scientists aren't "data scientists"? (String theorists?)


I’m really looking forward to this AI boom popping. I’m quite sick of it.

LLMs are quite useful for helping me code. I probably save 30 minutes a week. That’s not nothing, but it’s also not a game changer. It’s not a $100 billion data center’s worth of usefulness.
