It feels like EVERYTHING out of TechBroLand is just trying to "pretend to be filthy rich" - car & driver on demand, laundry on demand, food on demand, fancy vacation homes on demand, etc. etc. etc. - and ALWAYS using and abusing workers to get it done. AI is part and parcel of the same trend - giant talk of "think what it's GOING to be" without ever delivering any of it.
Millions of little startup slaves giving up their entire lives in pursuit of the boss's lifestyle...
I suspect that the cruelty (towards gig workers) is part of the point: people get a thrill out of how rich people are portrayed to treat their *servants*, and they want a little taste of that power, too.
It builds on a twisted understanding of the concept "wealth." That word, for most people, did not mean luxury or huge amounts of money, but rather security and happiness.
There are billions of people who lack cheap, fair, quality basic services. And they have smartphones. There's a huge opportunity here for developers.
I think that economic model is going to be a lot like affiliate marketing today, or maybe even like the way restaurant platforms operate. The company that supplies the agent takes a cut of everything it sells. For example, if you are planning that Brooklyn dragon birthday party, you need a venue, a cake, invitations, etc. The agent appends a percentage fee onto each expense, or maybe takes a cut from each vendor, who will use the agent as a way of marketing themselves.
I don't love this, and would prefer to plan the party myself, but it's not hard to see how this could be modeled like DoorDash or NerdWallet.
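The fee-stacking model described above is easy to sketch in a few lines. A toy illustration, where the line items and the 15% platform cut are invented for the example, not taken from any real service:

```python
# Toy model of an agent platform taking a cut of each vendor expense.
# All percentages and line items are hypothetical.

PLATFORM_CUT = 0.15  # assumed 15% fee, DoorDash-style

party_expenses = {
    "venue": 300.00,
    "cake": 85.00,
    "invitations": 40.00,
}

def platform_take(expenses: dict[str, float], cut: float) -> float:
    """Total fee the agent company collects across all line items."""
    return sum(amount * cut for amount in expenses.values())

fee = platform_take(party_expenses, PLATFORM_CUT)
total = sum(party_expenses.values())
print(f"Agent platform collects ${fee:.2f} on a ${total:.2f} party")
```

Whether the cut is appended to the customer's bill or deducted from the vendor's payout changes who feels the fee, but the platform's take is the same arithmetic either way.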
Interestingly, this was a problem with human butlers, too. They'd often choose vendors based on who offered the best rebates.
This all reminded me of a decade-old blogpost (https://hapgood.us/2015/02/20/people-have-the-star-trek-computer-backwards/) noting that, rather than representing the ultimate ideal that Silicon Valley sees it as, the Enterprise computer system in Star Trek is palpably of its era - pre-text processing, pre-keyboard, pre-user input. The author explores various reasons why this is still seen as the goal for tech companies, and one is
'that they see the personal computing era as an anomaly. We edited our documents because computers weren’t smart enough to produce and edit documents for us. We edited assumptions in Excel spreadsheets because computers weren’t yet trustworthy enough to choose the right formulas. Soon computers will be smart enough, and Star Trek can commence.'
This is of course 'identical to the beliefs of the average 1960s computer scientist' - which, left unsaid, was a military and/or academically funded wonk interested in how and where this tech could be used to control the unruly masses. This doesn't undermine your argument about revenue, but I think it does give a glimpse into how Silicon Valley hasn't really changed as much as it likes to think it has.
The execution of software agents as they're being imagined here also just falls apart the minute you start thinking about the actual logistics of how it would need to work. Our digital lives are spread across a lot of services - free and paid, but many of them competitors. Early in the Web 2.0 era, interoperability between these platforms was actually pretty good, and then *they all shut it down* except for payments and Google/Facebook SSO. So either the big current platforms would need to change their minds about that again (no), or some third-party software agent company would need to get permissions from *all* of them (hahahahahaha lol no). Oh, and also this autonomous agent has your banking information, and I'm sure that will just go fine.
C'mon.
I wonder if the assumption "We could have developed software agents 10, 20, 30 years ago. Software engineers were working quite hard on it. They started companies and obtained funding. The technical hurdles were comparatively small." is correct. Working hard and getting funding says something about convictions, not about reality. And the technical hurdles were probably *reported* as 'small' (as they are now) but that doesn't mean they were. Or are. I guess we will be able to do more today than 30 years ago. But a 'digital butler' really needs to *understand* a lot, and that remains an unsolved thing as far as I'm aware.
The "technical hurdles [are] comparatively small" line has been used as an excuse since the 1950s. It was horseshit then; it's horseshit now. Absolutely NOTHING the Large Language Models are doing is in any way related to being an "agent". Remember: it is NOT that LLMs *sometimes* hallucinate - it's that they ALWAYS hallucinate, and sometimes it resembles reality.
Chiming in that, yeah, I'm convinced by folks in this thread that I either phrased that wrong or outright was wrong about it.
What I had in mind was *compared to the bigger, sparkly promises like self-driving cars*, the technical hurdles here seem much more manageable.
But compared to the status quo, as it stands today, it still might be effectively insurmountable.
I know. https://ea.rna.nl/the-chatgpt-and-friends-collection/ (specifically: https://ea.rna.nl/2023/11/01/the-hidden-meaning-of-the-errors-of-chatgpt-and-friends/)
But an agent may function without actual understanding anything. Some basic agents do exist, mostly classical ones that can e.g. work with fixed format credit card statements and perform automated things on them. Just as LLMs may in certain situations be 'satisficing' some agents may be (regardless of technology).
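A "classical" agent of that sort is just deterministic parsing plus rules - no understanding required. A minimal sketch, where the statement format and the "recurring charge" rule are both invented for illustration:

```python
import csv
import io

# Fixed-format statement: date,merchant,amount per line.
# This format is assumed for the example, not any real bank's.
STATEMENT = """2024-05-01,GROCERY MART,54.20
2024-05-03,STREAMFLIX,12.99
2024-05-07,STREAMFLIX,12.99
"""

def flag_recurring(statement_text: str) -> list[str]:
    """Flag merchants that appear more than once (a crude 'subscription' rule)."""
    counts: dict[str, int] = {}
    for date, merchant, amount in csv.reader(io.StringIO(statement_text)):
        counts[merchant] = counts.get(merchant, 0) + 1
    return [m for m, n in counts.items() if n > 1]

print(flag_recurring(STATEMENT))  # → ['STREAMFLIX']
```

Nothing here "understands" anything; it works exactly as long as the input stays in the fixed format, which is both the strength and the limit of this kind of agent.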
Yes, exactly. A simpler explanation is: robot butlers are extremely hard but product recommendation algorithms are only quite hard, so that's what we got. Also, recommending a product that you previously bought is lower stakes than messing up a birthday or travel plan.
Companies promised self-driving cars in the 60s, and it turned out harder than anticipated, to put it mildly.
For an assistant to plan and execute a birthday party or vacation without significant mistakes requires something pretty close to median human-level general intelligence, which we still don't really know how to build.
"They have (much of) the necessary technology.": I don't think they do, actually, or if they have much of it, the last 20% or 10% or 5% is going to be much harder*.
Think about the number and subtleties of the judgments Klein's birthday-party planning scenario could well entail. What if the cake or the room isn't available at quite the desired time or with quite the desired amenities - what's negotiable and what isn't? What if they're missing something or other that matters to you or your child but didn't happen to come up in your interaction with the agent? Etc. What passes for AI right now isn't nearly as capable as a typical human servant of meeting such challenges.
It isn't even just that the agent can't know you or your child anything like as well as, say, your child's nanny does (or would, if you were a rich person whose child had a nanny). It's that what passes for AI right now is breathtakingly deficient in common sense, the working knowledge of the world we gain from living in it and watching other people living in it, not by consuming a corpus of documents.
It's as though the people boosting these notions imagine being a butler, nanny, etc. is a low-skill job, so easy it can be handed off to something like ChatGPT. If so, they're wildly mistaken**.
I hasten to add, I'm otherwise in full agreement with this post. In particular, "It's ridiculous, and further evidence that our entire economy is just derivative financial products at this point." Ed Zitron calls it "the rot economy". It does indeed appear to be mostly about stock prices. At some point, I expect much of it to crash and burn spectacularly. However, I also expect the greedy fools who cause the crash to pay no real penalties, any more than Dick Fuld did for crashing Lehman Brothers. Accountability is for the little people, doncha know.
I'm looking forward to reading your book.
*In this sense, this kind of thing resembles self-driving cars, albeit probably with lower (i.e., not life-or-death) stakes: getting it right 90% of the time isn't too hard, but then you face a practically endless succession of "edge cases". Of course, humans don't handle all those cases perfectly, but we tend to do better than what passes for AI right now, because we have much richer mental models of the world.
**I suspect many rich people who employ butlers, nannies, etc. would agree those jobs are far from trivial. Stereotypically, their perennial complaint is, "Oh dahling, it's so hahd to find good Help these days." "Discretion" is high on the list of desiderata. I think it's fair to say what passes for AI right now doesn't really do "discretion".
“I don't think they do, actually, or if they have much of it, the last 20% or 10% or 5% is going to be much harder*.
“Think about the number and subtleties of the judgments Klein's birthday-party planning scenario could well entail. What if the cake or the room isn't available at quite the desired time or with quite the desired amenities - what's negotiable and what isn't?”
My impression is that the companies running these services will shove them down our throats anyway.
What is at first (sold as) an optional enhancement soon becomes a compulsory necessity.
Excellent analysis! "The trajectory of any technology bends toward money" is something we should think about, and perhaps start strategizing to make less the case for the next generation, when your kids and my grandkids are all grown up.
Though I AM looking forward to that mea culpa.
This was Shoshana Zuboff’s vision twenty years ago. Then she realized that for it to work, companies would have to vacuum up enormous amounts of personal data about you. Then she realized they were just vacuuming up the data and selling it for profit, leading her to her next book, the critical The Age of Surveillance Capitalism. https://www.penguinrandomhouse.com/books/290791/the-support-economy-by-shoshana-zuboff-and-james-maxmin/
Sam Altman also said "AI will most likely lead to the end of the world, but in the meantime there will be great companies", but I'm more willing to believe him on that one than "we'll get Stargate because we just have to, guys."
One of the things I rarely see mentioned is Apple's Knowledge Navigator (https://en.wikipedia.org/wiki/Knowledge_Navigator) speculation from the late '80s. The original video is here: https://www.youtube.com/watch?v=p1goCh3Qd7M. It's still pretty striking all these years later.
I like Doctorow's explanation. AI doesn't have to be able to do your job. It just has to convince your boss that it can do your job.
Here's a simple question all the AI boosters struggle with: would you trust running your life to an AI agent prone to hallucinations, lying, and gaslighting? If it were a human butler, you'd fire them immediately (or, hopefully, seek medical attention for them).
Kind of like heaven in the Middle Ages? Yes:
Heaven is for everybody… but wait… where’s your ticket? You thought this was going to be free?
A friend says:
"The autonomous planning use case is literally what I’ve spent the last 10 years of my career on in a place that has thrown billions at it. Nobody is being denied their birthday planning agents. It literally is that hard, even with the magic of LLMs.
"The fun thing about autonomous agents is that everyone thinks only about the happy case: the agent recognized your speech correctly because you were speaking clearly in a quiet room. You asked for a straightforward thing: a birthday party and cake for your son. The agent knows who your family are and what they like, but not in a creepy way where any information about your family or their preferences are memorized for the long term, just in a magical way where it knows where you are and what your son’s allergies are, but the information manifests when needed and then disappears. The restaurant you wanted for the party was open, and it accepted a reservation without you authorizing your credit card (a rarity nowadays!). The time the agent picked for the party worked for your entire extended family and your in-laws too, even Susan who will never agree to show up at the same place and time as Dave unless you get on the phone with her for an hour. It does all that in the HAPPY CASE.
"You know where the product people, the scientists, and the engineers spend all of our time? On gracefully handling the unhappy cases. Your preferred restaurant isn’t available for 2 months. Is the best alternative a Chuck E. Cheese or a picnic in the park? Which park? The cake vendor has peanuts in their recipe, so you need a different vendor. The date that works for you will absolutely not work for your kid’s best friend’s parents, and it will be a fiasco if they don’t show. Susan is back on her bullshit again. Wait a minute, your original request wasn’t for a cake, it was for a party by the lake, but you were talking over the TV so the agent heard you wrong?
"If the current wave of excitement over AI agents seems like a hype bubble to Karpf, that’s because it’s a fundamentally unsolved problem and LLMs do help but there are still a few step changes of capability necessary before he can stop worrying about his son’s birthday."
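The friend's happy-case/unhappy-case split shows up directly in code. In this toy sketch (venues, constraints, and rules all invented), the happy path is one early return; the fallback handling is everything else, and it still ends in "escalate to a human":

```python
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    available: bool
    peanut_free: bool  # can serve a peanut-free cake

# Hypothetical options, echoing the scenario in the quote above.
VENUES = [
    Venue("Preferred Restaurant", available=False, peanut_free=True),
    Venue("Chuck E. Cheese", available=True, peanut_free=False),
    Venue("Park Picnic", available=True, peanut_free=True),
]

def plan_party(venues: list[Venue]) -> str:
    # HAPPY CASE: first choice works. One line of real logic.
    first = venues[0]
    if first.available and first.peanut_free:
        return first.name
    # UNHAPPY CASES: where all the engineering time actually goes.
    for v in venues[1:]:
        if not v.available:
            continue  # fully booked for 2 months
        if not v.peanut_free:
            continue  # allergy constraint rules it out
        return v.name
    return "escalate to a human"  # no automated resolution found

print(plan_party(VENUES))  # → Park Picnic
```

Even this cartoon version only covers two constraints; the real unhappy-case space (scheduling conflicts, Susan, misheard requests) is what makes the problem unsolved.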
Your thoughts?
I'm convinced your friend is right and I'm underestimating how hard these will be.
I was thinking about them compared to stuff like self-driving cars. I buy Ezra's premise that this is easier than that.
But, yeah, it's probably still a lot farther off than I was giving it credit for.
It seems any technology that has been made available to the masses is paid for with advertising (and now personal data). I don't think it has ever been different. So it seems free, but it really isn't. I don't see AI as any different. I mean, influencers are just new-age commercials. AI will be the same.
Except certainly shittier and even more enabling of the panopticon.
The arc of technological innovation steeply rises as it bends steadily toward commerce.
The arc of commerce is constant even as it bends steadily toward exploitation.
(With many apologies to the memory of Martin Luther King)
And then nose-dives into enshittification.
There certainly is that.