12 Comments

I worked on VisiCalc starting shortly before it was released. I was one of maybe a dozen people who attended the product presentation at the NCC earlier that year. (Two of the people there just wanted a place to sit. The rest of us were friends of Bob Frankston.) Unlike most of the software sold in that era, VisiCalc was noted for its reliability. As Dan Bricklin, one of the founders, explained, "You can keep pressing random keys and it doesn't crash or lock up." Just about everyone who saw it had an instant use case: budgets, crop planning (farmers loved it), data management, accounting, guided instructions for cardiac monitors, and even word processing. Spreadsheets are still amazing. Look at them renaming genes so they don't get misinterpreted as dates when loaded into spreadsheets.

You are right about the lack of real uses for AI. Silicon Valley is full of startups whose business model is helping companies build AI applications. They have customers, but, at this point, not a lot of deployed systems. The money is in tool building, not actually using AI. As for AI risk, the big risk seems to be AI systems giving users information to the detriment of the corporation. Just recently, Air Canada lost a court case in which a chatbot granted a user a refund contrary to policy. As far as the court was concerned, the AI was an agent of the corporation. I remember reading the specification for an early single-chip processor which included a full-page warning against using the chip in a medical application without written authorization from the CEO of the chip company, for fear of medical liability. What CEO exactly is going to expose his firm to the liability that could be caused by a hallucinating AI?

Finally, advertising has always been magic. Back in the 1960s, Forbes ran an ad, a serial cartoon with an executive at Turkle Tee Joints trying to come up with an advertising slogan. I remember one try, "Turkle Tee Joints Won't Melt in the Sun." Then his phone rings and some guy wants to order some tee joints. The executive asks, why did you choose Turkle? The answer, it was the first name that came into my head. No one knows anything. It has gotten worse with ad salesmen now having computerized tools for bamboozling advertisers.


Sat through a sales pitch for a marketing and ad management dashboard, and their example client campaign showed a return on ad spend of almost 6,000 percent. Yeah, okay sure…

I try to bring as much data as I can to the campaigns I manage. I think some marketing directors don’t know how to use that information and just throw budget at things because it might work.

Mar 30·edited Mar 30

I really appreciate the ad analogy as well because I don't think most people understand that advertisers *also* hate online advertising. It doesn't really work. All the promised benefits of microtargeting and smart algorithmic displays and whatever else the pitch of the week is never actually translate into more cost-efficient revenue generation, and the cost of revenue generation remains extremely high. The only reason people spend on Google and Meta ads is that there's nowhere else to spend your money, unless you're big enough to get into actual television.

Online advertising sucks for everyone except the platform holders who make money hand over fist from what is effectively a captured market. I think it is an *excellent* analogy for the upcoming wave of AI.


...currently seeing GPTs, especially of the chat variety, replacing the utility of the google/bing search functions, and to that end there is a strong argument that just like google search is a targeted advertising service, so too will the inevitable usage end of the chat bot be a similar thing...if nothing else the rich data they are gathering on what people want to know is a meaty bone for advertisers to purchase from them...

---

...in the search for AGI i find myself continually asking why...what is the end goal of replicable digital life with no end purpose...i think about someone having a baby just because they can...it generally doesn't work out well for the baby or the environment in those occasions...

---

...to tie a bow on the GPT = ADS idea I think we can look at the major way most A.I. is currently being used as well, GPT for mass SEO, MIDJOURNEY to replace Stock Photos for ads, deep fakes to create video advertisements for celebrities/politicians...advertising and disinfo, long since bloodbrothers and the opening inning for A.I. --> down to and including most of the job displacement A.I. has created thus far is on "creative" teams...

---

...seeing how A.I. and GPTs are starting their run in the ruins of this world, I have a hard time envisioning them ever detaching from that home...great article thanks for the thoughtful read...


ChatGPT is already as good as McKinsey (and that's not a compliment). I prompted ChatGPT with "can you write a comparison of wind and wave power mentioning advantages of both but concluding that wave power is superior" and got the following. Add some glossy pics (thanks, DALL-E) and it's indistinguishable from the consulting report you'd get with the same prompt (maybe worded more subtly).

Wind power has several advantages over traditional fossil fuel-based energy sources. It is a renewable and clean energy source that does not produce any greenhouse gas emissions during operation.[1][2] Wind turbines also have a relatively small land footprint compared to other renewable technologies like solar.[3] Additionally, wind power is a mature technology with well-established infrastructure and a declining cost of energy.[1][2]

However, wind power also has some limitations. Wind is an intermittent resource, with power output varying significantly based on wind speed.[2] This can lead to issues with grid integration and reliability. Wind farms also require large open spaces, which can limit their deployment in some regions.[3]

In contrast, wave power offers several advantages over wind power. Waves are a more consistent and reliable energy source, as they are hardly interrupted and almost always in motion.[2] This makes electricity generation from wave energy more reliable and predictable compared to wind.[2] Wave energy converters also have a smaller footprint and can be deployed offshore, reducing the need for large open spaces.[4][3]

Furthermore, wave power has a higher energy density than wind, meaning more energy can be extracted from the same area.[5][6] The theoretical global output of wave power is estimated to be around 29,500 TWh/yr, which is roughly 125% of the current global electricity demand.[5] This vast untapped potential makes wave power an attractive option for renewable energy generation.

While both wind and wave power have their advantages, the evidence suggests that wave power is the superior renewable energy source. Wave power offers greater reliability, predictability, and energy density compared to wind, making it a more promising option for large-scale renewable energy generation.[2][5] As such, the development and deployment of wave energy technologies should be a priority in the transition to a sustainable energy future.

1. Wave vs. Wind and Solar - Sintef Blog

2. Wave energy pros and cons - SolarReviews

3. Feasibility of Wave Power - Stanford University

4. Advantages and Disadvantages of Wave Energy - Greentumble

5. Wave and Wind are the New Hybrid Renewable Energy Source

6. Review of Hybrid Offshore Wind and Wave Energy Systems


One reason we could expect Microsoft Word to get better is that it wasn’t the first WYSIWYG word processing software. Bravo had been done at Xerox PARC well before.

I have yet to hear of a use for Generative AI that would come close to justifying the huge investment in it, even from people who use it and like it for summarizing meetings they’ve missed and generating pro forma emails.


I do believe there are positive uses for LLMs. For example, my experience is that programmers who already know what they’re doing become more productive when they incorporate AI into their workflow. I think the same thing can be said for many kinds of professional writing. To be fair though, I don’t even know what the economics of these use cases are.

And, that leaves a huge swath of uses that are useless at best. We’re currently experiencing an AI bubble. I really do hope that when the bubble bursts, the funding for all of these useless AI startups will go away. After the bubble bursts, LLMs will get more expensive to use and train. That will be a good thing.


Clearly the "plan" is that if we just build a large enough data set, self-awareness will spontaneously appear. Like getting enough monkeys and typewriters together, it's theoretically possible if you squint. Advertising is the great con that actually sorta works, and always has been. The famous aphorism "I know half of the money I spend on advertising works, I just don't know which half" is still true. Targeted ads know what you're interested in (online) but nothing else, such as what you bought. Add in the spice of advertisers paying for prominent position and you've got a system well and truly gamed.


It might be both — as in: 'a bit of one, and much of the other' — but I tend to agree with your outlook. There are not enough guardrails for the market to get good-for-society results out of good-for-the-shareholder incentives. The guardrails have been steadily eroded over the last half century.

Sam Altman has also said that the hallucinations aren't bugs, they're features. The long story short is: the source of *both* creativity and hallucinations in LLMs is exactly the *same*: the 'stochastically constrained randomness' of producing the next token. You can't get rid of one without getting rid of the other. Which is why LLMs more and more become minor parts in more complex architectures.
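The shared-stochastic-source point can be made concrete. Below is a minimal, illustrative sketch of temperature-scaled next-token sampling (a generic textbook mechanism, not any particular model's code; the toy logits are invented for illustration). The single `temperature` knob that makes varied, "creative" continuations likelier is the same one that makes off-target continuations likelier:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample an index from a categorical distribution over raw logits.

    Higher temperature flattens the distribution, so lower-probability
    ('creative', but also error-prone) tokens get picked more often.
    Temperature near zero collapses toward greedy argmax.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Toy vocabulary: index 0 is the "correct" continuation; 1 and 2 are
# plausible-but-wrong. At high temperature the wrong tokens get sampled
# a noticeable fraction of the time -- the same knob enabling variety.
random.seed(0)
picks = [sample_next_token([4.0, 1.0, 0.5], temperature=1.5)
         for _ in range(1000)]
```

The key observation is that there is no separate "hallucinate" code path: correct and incorrect continuations are drawn by the identical sampling step.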

The same is true for memorisation (which produces a ton of legal problems: not just copyright but also, even more difficult to solve, trademark and the like). Here the source of correct results (a perfect rendering of Shakespeare's Sonnet 18, for instance) and the source of leakage of training data (via regeneration) is, again, exactly the same: in this case, enough parameter space dedicated to that specific training data. You cannot solve this unwanted leakage(-through-regeneration) without also destroying the correct results you *do* want.

The GenAIs do not differentiate between good or bad output, they have no understanding (other than of pixel or token distributions, which is like statistically understanding ink distributions). Both good and bad output are — from the perspective of GenAI — the same. It is us, who label that result as bad or good. Regenerating Shakespeare's Sonnet 18: good. Regenerating copyrighted material: bad.

The stuff on https://ea.rna.nl/the-chatgpt-and-friends-collection/ contains a lot of material on these above points in case you're interested, including a very clear illustration of the "hallucinations aren't errors" aspect.


"The 'bugs' in online advertising will never be corrected, because the marketplace neither demands nor rewards correction.": Also because doing it well is much, much harder than the purveyors want the customers to believe, or even than they themselves believe. Or at least that's my intuition as someone who's spent many years making computers do many kinds of tricks.*

That's my intuition about "AI" more generally. I'm quite confident LLMs aren't going to turn into HAL. They aren't junk. They may well constitute a piece of the puzzle. However, many other pieces will be necessary. See, for example, Sam Anthony's essay "How LLMs are and are not like the brain":

https://buttondown.email/apperceptive/archive/how-llms-are-and-are-not-like-the-brain/

*Google me if you care. Among other places, I used to work in the UC Berkeley computer science department and the Swedish Institute of Computer Science. That work focused on so-called logic programming languages, particularly Prolog. At the time, those languages were caught up in the AI hype cycle. Remember the "Fifth Generation Project"? It depended heavily on a Prolog-ish language called KL1, and it was yet another development that was going to Change Everything. It was also a largely Japanese initiative, which is one reason why DARPA was willing to pay people like me to make sure Japan didn't steal a march on the USA.


OK, I like the initial thought of exploring hallucinations as a derivative of poor advertising targeting, but then you lost me...

Yep, sometimes targeting fails, especially for people who have ad blockers and refuse to use most social networks etc.

But, have you looked into how digital advertising works?

There are billions of dollars at stake, in a marketplace whose incentives reward the company that delivers the best results.

"The money flows toward companies that can make the most compelling pitches to corporate executives, not to the companies whose products make the fewest errors."

This is patently false and uninformed. I can spend money based on performance: if it works I spend more; if it doesn't, we don't. It is a fully automated, dollars-out, revenue-in model that can be tuned in real time. Yes, we could have a debate about "brand" and "out of home" advertising, but that's a class by itself. You are focused on direct-response digital ads, so I'll constrain my response to that arena.

"....microtargeted ads is that the current version is perpetually trash".

As someone who has overseen tens of millions of dollars of digital advertising spend, I can always say that I want better results, or I can just stop spending. The fact that I can also say that I want to run ads to 40-45 y/o Republicans who love almond milk and decaf coffee in north San Francisco between 12-2 on Wednesday afternoons is incredible. Even more incredible is that I (or the algo) can tune that spend to deliver better results in basically real time. Finally, a huge majority of my spend was governed by ROI: if it was ROI-positive, keep spending; if it went negative, stop.

The problem of selling things to customers who have already bought them can be an issue, but often that is more an issue of connecting data (legal/privacy issues) than of capability. Would you like more relevant targeting? You can make personal choices to enable that. We can also make societal choices if that's what we prioritize.

"online advertising is a massive, barely-regulated industry."

What regulations would you like to see that align with your previous complaint of poor targeting? I'd suggest those two frustrations are probably at odds, e.g. prevent data from A from working with B because the government said so vs. prevent A from working with B because the consumer said so.

I'd agree that putting these controls in the consumer's hands with more clarity is powerful; just don't complain that someone tried to sell you ice cream while you were at Eskimo Camp.

Finally, have you ever gone into Instagram and looked at the shopping stuff or the ads? Sometimes I go there when I want to discover cool new things because they are so good at targeting.

TLDR: Modern companies with mature digital advertising strategies can spend to ROI positive automatically. The AI/Algos can uncover ways to increase your spend within those parameters which is good for them and good for you the seller. If it isn't, you stop spending.
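The spend-to-ROI rule in the TLDR can be sketched in a few lines. This is a crude, hypothetical illustration of the idea, not any platform's actual bidding logic; the function name, thresholds, and step size are all invented for the example:

```python
def adjust_daily_budget(budget, revenue, spend,
                        min_roas=1.0, step=0.2, floor=0.0):
    """Toy ROAS-governed budget rule.

    ROAS (return on ad spend) = revenue / spend. While the campaign
    returns more than it costs, scale the budget up; when it goes
    ROI-negative, scale it down toward the floor (i.e. stop spending).
    All numbers are illustrative; real platforms expose richer signals
    and their own automated bidding strategies.
    """
    roas = revenue / spend if spend > 0 else 0.0
    if roas > min_roas:
        return budget * (1 + step)          # ROI-positive: keep spending
    return max(floor, budget * (1 - step))  # ROI-negative: pull back

# Hypothetical two-day tuning loop.
budget = 100.0
budget = adjust_daily_budget(budget, revenue=260.0, spend=100.0)  # scales up
budget = adjust_daily_budget(budget, revenue=50.0, spend=120.0)   # scales down
```

In practice this loop runs continuously inside the platform's auction machinery, which is what makes "spend to ROI-positive automatically" possible.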

Dave! You've done it. You've somehow started to sound like you are back on Twitter. This is tin-foil-hat-level conspiracy. This can't be serious; online advertising is measured far more than almost any other business function. It makes defending a thesis look like a casual conversation of opinions.

"That’s just how it is. The “bugs” in online advertising will never be corrected, because the marketplace neither demands nor rewards correction. There is enough money at stake that all the big actors have an interest in pretending everything works just fine already. "


First, on : "I could put together a whole syllabus on the topic"--I'd take that class.

Second, I think you are absolutely right that what we think of the current state of capitalism determines our view of what we've decided to call AI. My question is what happens to how the middle class (or knowledge workers, or the Professional Managerial Class (PMC), or whatever you want to call it) thinks about the state of capitalism when some significant portion of their labor gets automated? Do they follow the path of their 19C brethren, the Luddites, because the 21C robber barons keep the surplus generated by productivity growth? Does it "hollow out industries" while enshittifying the internet-based culture where they spend their time? Does it end up improving their lives in the form of 32-hour work weeks for the employed and UBI for those who aren't?

The answer depends not only on whatever this tech turns out to be good for, but also on the extent to which managers, engineers, and human services workers in education/healthcare/government are willing to enact the techno-optimistic dreams of the latest crop of robber barons.

Expand full comment