Thinking of predictive AI as "just a rebrand of the Big Data hype bubble from 10-15 years ago" seems right, and the habit of substituting "data for strategy" nicely sums up the real risks of many forms of AI, including the kind that generates cultural artifacts.
They don't use the term in the book, but last year the Snake Oil guys put out a paper with a few others on what they call "predictive optimization," which I think helps frame this danger. Here is a link to post about it: https://www.aisnakeoil.com/p/ai-cannot-predict-the-future-but?utm_source=publication-search
Deflating AI hype requires pointing out all the ways AI doesn't work, but it also requires pointing out how truly terrible using it can be when it works as designed to make some types of management decisions. Davies is a great example of how to think about this problem in terms of organizations, as is The Ordinal Society. Analytical Activism is now on my list.
Thanks for mentioning my review essay!
The recent chemistry Nobel was for the prediction of protein structure. Describing it as being for “the study of chemical compounds” is like saying “the Nobel Prize in chemistry was given for the study of chemistry.”
Good note, thanks.
I was wondering what had become of "Big Data" as the Next Big Thing. In the 2016 time frame my old company was going through another bout of reinventing itself. Part of this was supposed to involve the retraining of large numbers of employees to save their jobs by going into Big Data, Computer Security, or VMware. Like all of these programs in my experience, a year or two later it was all forgotten except the out-sourcing part. Almost none of the folks who retrained ever found inside work in the new specialties.

Big Data just disappeared as a term along with the groups doing it. Maybe they reinvented themselves as something else. Computer Security groups existed but never hired any of the trainees that I heard of. The same with the VMware group. Within 2-3 years VMware was replaced by "The Cloud" as the Next Big Thing and is today a shell of what it was for a brief period.

It makes me wonder, after NFTs and Crypto, if AI as a thing will pass or if it will be applied to whatever follows LLMs. It's nice to learn that other folks still remember the Siren Song of Big Data!
"You don’t have to make hard strategic choices anymore. Just trust the augments. The AI might be a black box, but it has a genius inside."
The ol' implausible deniability trick.
I'll join Rob in thanking you for mentioning my review of Amodei's manifesto!
This is a big ol' subtweet of Future Forward lol -- testing ourselves to meaninglessness
You have a "know" that should be a "no"
Great post, and AI Snake Oil is my next read.
Great piece. Coming late to it, but thought I'd share some thoughts (probably best shared late and quietly) regarding agency and AI, since perhaps there's a case for expanding this kind of discussion to issues of securitization, the automation of violence, and the myriad ways Silicon Valley ties into it all. In other words, as you say, "BE the scientist and the civil servant and the journalist" ...but also, I was thinking, the military response, the judge, jury and executioner.
The whole "Just trust the augments. The AI might be a black box, but it has a genius inside" idea, but attached to a gun.
There's a paper that spoke to much of this that I think about a fair bit lately, which I'd recommend if interested: Ethics for the majority world: AI and the question of violence at scale (particularly the section "War machines and the automated extermination of the Otherness"). It explores AI's "agency" in a context slightly different to what's discussed here, but also very potently relevant insofar as we're talking about offloading cognitive work to AI, which can and does extend to military/security contexts.
I take the point and agree "we are hailing these technologies as though they possess agency, acting as though they can make choices for us". I also think/read about places in the world where this nebulous thing we call "AI" is already making the greatest of choices and exercising semi-autonomous agency in life-altering (ending) ways. See +972's coverage of Palantir's Lavender, and/or MintPress News' related piece on Thiel for example, or Palantir+ICE and the targeting of immigrants in your own country. Like, no, we shouldn't trust the Big Datas nor their rebranded AIs, and I fear us handing over control like that, but also, that exact thing is already happening in some pretty disturbing ways.
And ofc since the Valley folks creating/backing/profiting from this facet of AI just got handed the biggest of wins...it's only going in one direction for the foreseeable future.