Bullet Points: Internet Time ain't what it used to be.
The Apple Ad, 90s consumer technology, and an AI anniversary
Today I want to weave together three observations. They are all variations on the same theme:
“Internet Time” is far less frenetic than it used to be. (And that’s probably a good thing.)
(1) That iPad Ad
A couple weeks ago, Apple released a disastrous ad for the new iPad Pro. The ad was called “Crush!” The company quickly pulled it and issued an apology.
I didn’t write about the ad at the time. I didn’t have much to say beyond what I was reading from Brian Merchant, Liz Lopatto, and Damon Beres and Charlie Warzel.
But there’s one bit that I keep circling back to: the sole message they were trying to drive home was just, “this one is thinner than the last one!”
I’ve written about Apple’s advertising a couple of times before (see here and here). I’ve always thought the ads for the first few generations of iPhones were a masterclass. What Apple has historically done so well with its advertising is to make clear the social role that their new, pricy consumer products would play in your life.
My favorite example is a 2010 ad for the iPhone 4. They added a second, front-facing camera to that phone. What that meant was that you could, for the first time, be on a video call and show the person you were talking to what you were seeing. This was new, but it wasn’t obviously useful. The ad that introduced it sold that social role, not the spec.
It has been a long time since Apple’s consumer product line changed in a way that mattered. Every new iPhone is like the last iPhone, with a marginally improved camera and battery life. The latest iPad is like the previous iPad, but thinner and with an M4 chip.
No one was saying “dammit! This iPad is just too thick to accomplish [_____].”
The main reason to buy the new iPad is that your old one stopped working. There isn’t anything radically new about these devices anymore. There doesn’t need to be. Apple isn’t bringing products to market that feel like the future. It’s a trillion-dollar company because it makes consumer goods that feel like the high-end present.
(The exception, of course, is the Apple Vision Pro. Which feels like part of a luxury dystopian future. …But hey, maybe version 2 will be different.)
It seems to me that the main reason the advertisement was bad is that they were trying to make the new iPad Pro seem special. It isn’t special; it’s just thinner. It’s just yet another modest update to a successful product line. You use it for all the same things you used the last one for.
Apple’s product lines have leveled off.
(2) A flashback to the 90s
I’ve been thinking about a piece that I wrote last summer. I read through all the product reviews from the first five years of WIRED magazine and shared what I found (“90s tech culture was a jumbled mess”). Here’s the relevant passage:
Reading the old product reviews, what really hit home was how real Moore’s Law felt back when consumer technology was changing so rapidly. You could buy an $8,000 flat screen TV in 1996, or you could read about it in [WIRED’s product review section] FETISH and just wait a few years for the better/cheaper model. Last year’s unattainable conspicuous consumption is next year’s Christmas gift.
That consumer-level experience ended in the early ‘10s, I think. I haven’t quite nailed the date down yet. Today’s new iPhone costs basically the same as your last iPhone. It has a faster processor and better screen resolution, but not so much that you would notice. A good laptop can last a decade, and most of us won’t much notice the difference when we replace it with the latest model. [emphasis added]
I think one reason why people in my age bracket have such strong, implicit faith in Moore’s Law is that it was part of our shared reality for such a long time. Consumer tech really was getting significantly better and significantly cheaper, at a pace that you could not help but notice.
I saved up all summer in 1998 to buy a nice stereo. (It played tapes AND CDs!) Four years later, my friend Becca was showing off the clickwheel on her new iPod.
In 2021, the keyboard on my laptop started having trouble. The “e” key stopped working. I checked, and found the computer wasn’t under warranty anymore. Turns out I had bought it way back in 2012. It still worked fine, except for the damn “e” key.
When I started college in 1997, a nine-year-old computer (from 1988!) would, for all practical purposes, not be a computer at all.
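To put rough numbers on that contrast, here’s a back-of-the-envelope sketch. It assumes the textbook two-year doubling period for Moore’s Law, and the date ranges are just the ones from my anecdotes; the point is only that both nine-year gaps look identical on paper.

```python
# Back-of-the-envelope sketch: the nominal improvement Moore's Law promises
# over a gap of years, assuming the canonical two-year doubling period.
# The date ranges are the ones from the anecdotes above.

def nominal_improvement(years: float, doubling_period_years: float = 2.0) -> float:
    """Factor by which transistor counts would grow under a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

for start, end in [(1988, 1997), (2012, 2021)]:
    factor = nominal_improvement(end - start)
    print(f"{start} -> {end}: roughly {factor:.0f}x more transistors, on paper")
```

Both gaps promise roughly the same ~23x. The difference is that the first one translated into machines you could not ignore, while the second mostly didn’t.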
Silicon Valley’s aura of futurity was honed through this frenzied cycle of consumer product upgrades. The leveling-off we now see across consecutive iterations of Apple’s product lines was nowhere in sight. There was forever a next generation of consumer products coming, and that next generation was demonstrably better and cheaper than the one that preceded it.
It felt just a bit magical. No other part of the physical world was transforming at such a constant, reliable pace. But this is no longer the case, and it hasn’t been for quite a while.
And in the meantime, the mythos surrounding Moore’s Law keeps being propped up, Weekend at Bernie’s-style.
Sure, the Rabbit R1 might be utter trash. Yes, the Humane AI pin is so bad it’s unreviewable. But why be so dour about these early models? Just focus on how much better the next one will be. Surely it’s just around the corner. (Because Moore’s Law!)
And yes, virtual reality/augmented reality/extended reality headsets keep being just a couple years away from a radical breakthrough. But don’t judge the industry on its track record. Judge it on its aspirations! (The future is arriving! Because Moore’s Law!)
Apple, at least, has leveled off because its products are very good, and they just don’t need to get significantly better. Much of the rest of the industry leans on the old faith in Moore’s Law like a crutch, forever promising that the next version will finally unlock hidden potential.
Which brings me to the third item: the current state of AI.
(3) “Press Pause on the AI Hype Machine”
Julia Angwin wrote a real barnburner in the New York Times last week, titled “Press Pause on the AI Hype Machine.”
It’s worth reading the whole piece. For the purposes of this essay, I just want to dwell on a point from her introduction:
It’s a little hard to believe that just over a year ago, a group of leading researchers asked for a six-month pause in the development of larger systems of artificial intelligence, fearing that the systems would become too powerful. “Should we risk loss of control of our civilization?” they asked.
There was no pause. But now, a year later, the question isn’t really whether A.I. is too smart and will take over the world. It’s whether A.I. is too stupid and unreliable to be useful. Consider this week’s announcement from OpenAI’s chief executive, Sam Altman, who promised he would unveil “new stuff” that “feels like magic to me.” But it was just a rather routine update that makes ChatGPT cheaper and faster.
This was before the Scarlett Johansson/“Her” revelations. It was juuuust before Google declared they’d be breaking the whole damn web. There is a lot of AI churning through the news right now, and I want to make sure we don’t lose sight of this.
It has been over a year since the “six-month pause” letter. The thesis of the letter was that a transformative new AI future was imminent — so close, in fact, that we needed the entire field to declare an unprecedented half-year break just to let the rest of the world prepare.
Critics at the time asserted that this was effectively all just an elaborate form of criti-hype. It was AI-doomer marketing. Actually-existing-AI had all sorts of problems. Once the gimmicks wore off, people started noticing its limitations instead of breathlessly whispering about the onrushing future.
Weren’t those critics just, quite clearly, right?
This is not to say that the technology has plateaued. I’m not ready to make that call quite yet. I am withholding judgment until we see whether GPT-5 is, in fact, released this summer, and whether it demonstrates another step-change over the capabilities of GPT-4.
But the critics were certainly right about the timing. The cultural influence of Moore’s Law has been to assure people that technological change is ever-accelerating. Sam Altman loves to talk about how we are living through a moment of exponential growth, and that the world is about to change very fast. Moore’s Law does a lot of load-bearing work in the construction and maintenance of the AI hype machine.
What would happen if we set it aside? What if we quit treating AI expansion like an unstoppable force of nature and instead treated it like a new, untested set of tools that can be deployed faster or slower, better or worse?
We don’t have to put our faith in the techno-futurists. In fact, based on their track records, we ought to regard them with deep suspicion.
So here’s where this line of thinking leaves me:
- Let’s judge these products based on their existing capacities, not their imagined potential.
- Let’s stop buying into the myth that the future is arriving faster than we think. We have time to assess these technologies and influence the direction of their development.
- Let’s treat every new announcement from Sam Altman and his peers for what it is: marketing hype. Sometimes that marketing hype will prove to be justified. Often it won’t. We don’t need to take his word for it, though.
The pace of Internet Time has slowed down. Scientific and technological innovations are co-produced; they are not inevitable. Tech entrepreneurs aren’t living in the future. They are trying to will a vision of the future into being. We don’t have to accept what they are offering. We have time to demand better.
I think biologists call it punctuated equilibrium. Technologies go through periods of rapid change alternating with periods of relative stasis. There's usually something that enables the change. The Otto cycle engine enabled automobiles and airplanes, and for a long time new cars were the high tech of the day. I always likened Steve Jobs to Henry Ford: a company expected to innovate, but often at the edge of bankruptcy. The Moore's Law era did the same thing for computers, moving them from the laboratory into just about everywhere and everything. Now, they're more like cars in the 1990s.
You make a great point: the assumed inevitability of rapid advances in any product involving electronics or software rests more on the memory of the past than on the actual truth of the present day.
When it comes to AI, however, I worry about an over-correction. The breathless hype of a year ago was so over the top that we've learned to discount expectations of progress. (At least many of us have. Some folks of course are still talking about "AGI" on timelines as short as 2-3 years.)
As a 40-year veteran of the software industry, and someone who has been following AI closely for the last year, I do believe that AI is going to be as incomprehensibly, terrifyingly transformative as the hypesters claim – it'll just take a while (10-40 years?) to get there. There's enough economic value in the current models, and enough incremental economic value to be unlocked at each further step along the way, that the industry is going to spend whatever it takes to keep pushing forward – just as it did to keep Moore's Law running for so many decades.
And so I worry that folks who correctly recognize that AI is over-hyped today may fail to notice as the reality catches up with the hype – which might happen gradually, and then suddenly.