22 Comments

I think biologists call it punctuated equilibrium. Technologies go through periods of rapid change alternating with periods of relative stasis. There's usually something that enables the change. The Otto cycle engine enabled automobiles and airplanes, and for a long time new cars were the high tech of the day. I always likened Steve Jobs to Henry Ford: his company was expected to innovate, but it was often at the edge of bankruptcy. The Moore's Law era did the same thing for computers, moving them from the laboratory into just about everywhere and everything. Now, they're more like cars in the 1990s.

author

Political scientists also call it punctuated equilibrium! (That happened to be the subject of my undergraduate thesis, oh so many years ago...)


You make a great point that the inevitability of rapid advances in any product involving electronics or software is based more on the memory of the past than the actual truth of the present day.

When it comes to AI, however, I worry about an over-correction. The breathless hype of a year ago was so over the top that we've learned to discount expectations of progress. (At least many of us have. Some folks of course are still talking about "AGI" on timelines as short as 2-3 years.)

As a 40-year veteran of the software industry, and someone who has been following AI closely for the last year, I do believe that AI is going to be as incomprehensibly, terrifyingly transformative as the hypesters claim – it'll just take a while (10-40 years?) to get there. There's enough economic value in the current models, and enough incremental economic value to be unlocked at each further step along the way, that the industry is going to spend whatever it takes to keep pushing forward – just as it did to keep Moore's Law running for so many decades.

And so I worry that folks who correctly recognize that AI is over-hyped today may fail to notice as the reality catches up with the hype – which might happen gradually, and then suddenly.

(I explore these ideas in my substack, which I won't link to here but you can find it by clicking on my username.)

author

Just subscribed, sounds right up my alley.

I think that's a good point. What I'd add is that *if* it is indeed moving on a 10-40 year timeframe, that creates plenty of room for democratic institutions and the administrative state to play the shaping role that I frankly think it ought to play.

Moore's Law hype plays a critical role in sustaining an anti-regulatory political project. That's the subtext of Marc Andreessen's manifesto, for example. The argument is that technology moves so fast that governments cannot hope to keep up, so we ought to trust the good will and wise instincts of the VCs and entrepreneurs. If the pace is in fact slower, then we can have the benefits of technological innovation while also shaping how and to whom those benefits are distributed.

May 23 · Liked by Dave Karpf

Agreed!

If you're interested, some things I've written that most bear on this topic:

What To Expect When You’re Expecting GPT-5 (https://amistrongeryet.substack.com/p/gpt-5-predictions) – on the likely hype-vs-reality of AI on different scales; links to some deeper dives.

To Address AI Risks, Draw Lessons From Climate Change (https://amistrongeryet.substack.com/p/ai-risks-and-climate-change) – shaping AI is something we should think of as a long game.

May 25 · Liked by Dave Karpf

One problem with applying Moore's Law to generative AI is that the technology has built-in limitations that can't be overcome with computing power. Quantity of data is the obvious one. But more fundamentally, this is a form of technology that produces output that is inherently unpredictable. LLMs will always make factual errors, or say offensive things, or divulge stuff you don't want them to divulge (or perhaps pretend to be doing so, like when Bard said it was trained on people's email), or fail to answer questions sensibly, etc. The whole point of the technology is that its methods for generating output are *not* dictated by human programmers. You don't use a model with over a trillion parameters for the sake of reliability. We can never know when or how a generative AI model will go off the rails.
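To make that point concrete, here's a minimal sketch (a toy illustration of my own, not any vendor's actual code, with a made-up probability table) of why the output can't be pinned down: the model assigns probabilities to possible next tokens and one gets drawn, so no programmer-written rule dictates the result.

```python
# Toy sketch of next-token sampling: the "answer" is a weighted random draw,
# not a rule a human programmer wrote down.
import random

def sample_next_token(probabilities: dict[str, float]) -> str:
    """Pick one token at random, weighted by the model's probabilities."""
    tokens = list(probabilities)
    weights = list(probabilities.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution a model might assign after "The capital of France is"
next_token_probs = {"Paris": 0.92, "a": 0.05, "Lyon": 0.03}

# Run it a few times: most draws say "Paris", but nothing guarantees it.
print([sample_next_token(next_token_probs) for _ in range(10)])
```

Scale that up to a trillion parameters and a vocabulary of tens of thousands of tokens, and "we can never know when it will go off the rails" is baked in.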

The tech companies can use all the computing power they want, increase the parameter space by another order of magnitude, somehow manage to get even more data for training (though they're already scraping the barrel there), and none of that will fix these fundamental problems.

At some point, these companies will have to make a choice: put out products that incorporate generative AI, or put out products that are reliable. They're still comfortably in the "tell everyone we're working on the problem until they just learn to accept it" phase. But I sense it's getting a little less comfortable, and sooner or later most people will catch on.


Reading this today, in the midst of seeing various articles complaining about polls in which people vastly underestimate the health of the economy, I wonder if one of the elements contributing to people's contemporary dissatisfaction is that the 90s offered a template in which economic growth meant having new capabilities and new things to explore.

The slowing of tech that you describe can make it feel like the only times we are surprised by how quickly something is changing are when the news is bad.

That wouldn't be an accurate description of the world, but it does feel like a shift from the 90s.


The real internet time is how long it would take me to comment on this article. Pretty fast tbh.

Would TikTok being banned in the US constitute “fast” internet time? I think yes, for sure, since what we’re trying to do is accumulate scientific knowledge of these phenomena. If it happens (I put it at 25%) it’ll be the largest, fastest shock to the collective media diet ever, by a lot.

So yes, sama bad, shame on us for falling for the marketing. But I really want to at least begin to think about how we would quantify internet time.

author

Quantifying is definitely going to be hard, you're right. We've got multiple versions of temporality to struggle with, and also the thorny issue of what the hell counts as an innovation, and when it ought to be considered a success or failure.

But I want to make a plug for taking the marketing seriously, because in the broader political economy of Silicon Valley (including VC, meme stock trends, etc.), Moore's Law does a ton of real work. Resource flows follow this belief structure. If they did not so firmly espouse this belief, then we would see different products and different companies develop on different time scales.

(Also, if TikTok *is* banned, won't that just lead to a flood of additional users to Instagram Reels and YouTube? ...What I think we *really* probably need right now is someone to write a book about the YouTube Apparatus.)


In the race to be the earliest investors, the talking heads, it seems, have forgotten the most basic rule from every jobsite. The willingness to believe in techno-magical promises seems almost pathological. (FSD, anyone?)

The first day on any job, the new guy/girl will always tell you they are the hardest worker, with an attention to detail that borders on ADHD, and the bestest-ever problem solver in history. Now, because Tech is so intricate, the narrative goes, there are no receipts to be had; everything is simply too cutting edge to be verifiable, so just trust the folks with the portfolios controlling the most shares.

"He says he's a genius, he must be a genius..."

Meanwhile, in the real world your co-workers will always say in the simplest of terms

"Okay, PROVE IT".

It's time Tech and its critics got back to that.


(Writers like you have given readers like myself the tools and the insights to see and then make the comparisons between Tech Mythos and the world we all have to wake up to each day.

Thanks to you for your exceptional insights, Mr. Dave Karpf;

your keyboard lacks 'e'; mine has a screwed up 'Caps lock'.

Nobody said this was gonna be easy, the struggle is real!).


It also feels like it's less frenetic because we've developed an apathy towards tech news. I wouldn't be so quick to dismiss its ability to grow beyond faster and cheaper (I thought "stability" and monetization were part of their roadmap, not just vertical growth?)


If you chose your words better, and didn't write so many that have "e" in them, you wouldn't have to replace your laptops so often.

author

That's right. I know it's right. I'll try hard to avoid it moving forward.


See, you did it! An entire response without using that letter at all!

author

<bows>


Computers are fridges now. =)


Refrigerators have kept improving, but it's decade to decade, not year to year. Every time we buy a new refrigerator, it seems to have thinner walls and use less power. They've often reorganized the interior a bit, usually for the better. I've also been impressed with stoves, ovens and washing machines. Things like induction heating, more precise temperature control and automatic water levels do make things easier.


What curious synchronicity, Batman: Tim Harford discussed Moore's Law today too. He contrasts it with Wright's Law, which was formulated by observing the rapidly declining costs of airplane manufacture. Then we get a reminder that perhaps we could have directed this enthusiasm toward greener pastures.

Wright’s Law is a function of activity: the more you make, the cheaper it gets. What’s more, Wright’s Law applies to a huge range of technologies: what varies is the 20 per cent figure. Some technologies resist cost improvements. Others, such as solar photovoltaic modules, become much cheaper as production ramps up.

"In a new book, Making Sense of Chaos, the complexity scientist Doyne Farmer points out that both Moore’s Law and Wright’s Law provide a good basis for forecasting the costs of different technologies. Both nicely describe the patterns that we see in the data. But which one is closer to identifying the underlying causes of these patterns? Moore’s Law suggests that products get cheaper over time, and because they are cheaper they then are demanded and produced in larger quantities. Wright’s Law suggests that rather than falling costs spurring production, it’s mass production that causes costs to fall.

And therein lies the missed opportunity. We acted as though Moore’s Law governed the cost of photovoltaics."

https://timharford.com/2024/05/fossil-fuels-could-have-been-left-in-the-dust-25-years-ago/
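If it helps to see the arithmetic behind that "20 per cent figure," here's a rough sketch (my own toy numbers, not Harford's or Farmer's; the 20 per cent learning rate is just the figure quoted above) of how Wright's Law turns cumulative production into falling unit costs:

```python
# Wright's Law sketch: every doubling of cumulative production cuts unit cost
# by a fixed fraction (the "learning rate").
import math

def wright_cost(cumulative_units: float, first_unit_cost: float,
                learning_rate: float = 0.20) -> float:
    """Unit cost after producing `cumulative_units`, given a per-doubling learning rate."""
    exponent = math.log2(1 - learning_rate)  # ~ -0.322 for a 20% learning rate
    return first_unit_cost * cumulative_units ** exponent

# Hypothetical first unit costs $100; cost falls with each doubling of output.
for n in [1, 2, 4, 8, 1_000, 1_000_000]:
    print(f"{n:>9,} units produced -> ${wright_cost(n, 100):.2f} per unit")
```

The point of the contrast is the causal arrow: in this framing, it's the cumulative units produced (the loop variable), not the passage of time, that drives the cost down.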

May 23

There are genuinely useful and innovative bits of tech out there, but you have to hunt for them.

I've noticed that there are a lot more useful purpose-built products like my CAT S62. I was only looking for a durable (like mil-spec durable) phone with a long battery life, excellent wifi reception, and no lock-in to a specific provider.

Caterpillar, as in the heavy equipment manufacturer, made a phone that is purpose-built for shops and work sites. It even has a FLIR IR camera in it.

My iPhone X couldn't compete for what I need from a phone. It was too fragile, ate battery, and had crap wifi connectivity.

And the new iPad can't compete against Android devices purpose-built for specific uses, which is kind of weird because Apple certainly has the cash reserves and talent required to make a range of cheap, hard-wearing, purpose-built tablets, like ones for use in 9-12 schools or for artists and animators or whatever.


Very good - I just keep wondering how relevant AI is to the stock market - that is, if a more realistic outlook came to be, would we see a significant decline?

author

I'm no expert on stocks, but it certainly seems as though the valuation of companies like Microsoft is directly correlated with AI hype.
