Bullet Points: Oh-just-shut-up edition
A.I. is a bullshit generator. Conservatives aren't being barred from APSA. And pronatalism is still a pretend crisis.
Today’s newsletter is another notes-and-commentary edition. It covers three recent articles that annoyed me enough to merit a proper rant.
I’ll be back later this week with a proper essay from the history-of-the-digital-future project. And I’m halfway through the Elon Musk biography. I'll share notes when I’m done.
(1) “We gave a profession of bullshit generators access to GPT-4. You won’t believe what happened next.”
There’s something darkly funny in Ethan Mollick’s latest post about the bright business-future of generative AI. Mollick (and coauthors) has a new working paper, based on a large-scale research collaboration with Boston Consulting Group (BCG). Here’s his summary of the findings: “A lot of people have been asking if AI is really a big deal for the future of work. We have a new paper that strongly suggests the answer is YES.”
Ethan Mollick teaches entrepreneurship at Wharton. It shows. He produces serious, methodologically-sound research. And he never comes within a mile of examining the social assumptions that his research is grounded in. Anytime he engages with A.I. criticism, it comes across like “Boo-urns.”
In the working paper, Mollick and his team randomly assigned consultants at BCG to have access/no access to GPT-4. And then they compared the consultants' productivity across a range of tasks. What sort of tasks, you ask? Well, all sorts of serious-consultant-tasks, such as:
creative tasks (“Propose at least 10 ideas for a new shoe targeting an underserved market or sport.”), analytical tasks (“Segment the footwear industry market based on users.”), writing and marketing tasks (“Draft a press release marketing copy for your product.”), and persuasiveness tasks (“Pen an inspirational memo to employees detailing why your product would outshine competitors.”).
Mollick and his coauthors find that GPT-4 improves consultant productivity and work quality on all these tasks. The gains were strongest for the low-performers. But, also, he writes that AI is a “jagged frontier” — the technology excels at some tasks, is terrible at others, and it requires significant expertise to differentiate between the two. To Mollick, this means that (1) the business opportunities are phenomenal and (2) the people who get rich will be the first-movers who really develop their skills in this grand new landscape.
And, I mean… sure? One could read the findings that way.
But an alternate reading would be something like “hey! I hear you think A.I. is a bullshit generator. Well, we gave a whole profession of bullshit generators access to A.I., and you’ll never believe how much more productive they became at generating bullshit! This is such a big deal for the Future of Work!”
Back in May, Ted Chiang wrote a piece titled "Will A.I. Become the New McKinsey?" It's probably the finest piece of critical writing about the near-term trajectory of A.I. that I have seen. It's like Mollick read the piece and said "Yes, it will, isn't that fantastic?!?"
Mollick, in other words, is playing to a very particular audience. If your students are paying to get a credential from Wharton, then “becoming the new McKinsey” is a huge market opportunity. If A.I. can help you become the very best McKinsey employee you can be, then A.I. is extremely well-aligned with your narrow learning objectives.
The broader problem with his “jagged frontier” metaphor is that we know a thing or two about how that jagged frontier will be colonized. Under prevailing conditions, the A.I. future is going to be driven by three forces:
Silicon Valley pitch decks that overpromise and underdeliver. This is the gospel of startup culture. Create a “minimum viable product.” Make a ton of mistakes, constantly pivot, and try to find a market segment that will grow. Ignore existing regulations, break all the rules. We’re building the future here. All the great businesses did a little creative accounting when they were getting started.
Cost-cutting efforts among existing industries. We’re already seeing this in journalism, just like absolutely every critic predicted. Generative AI cannot, today, come close to producing replacement-rate journalism. But plenty of shoddy media organizations are deploying it anyway because (a) it saves money and (b) C-suite executives believe it will surely get better soon.
Chasing large-dollar government and legacy industrial contracts. The startup ecosystem is judged on nothing but potential. It’s vibes. The actual money for most of tech has been in SaaS (software as a service). The big financial upside for AI companies is going to come through locking in long-term service contracts with government agencies and legacy industries. There, once again, all the incentives point toward overpromising and underdelivering.
I wrote about this back in April (Two Failure Modes of Emerging Technologies). I’m not worried about an A.I. apocalypse. (Mollick and I have that in common.) I’m worried that we’re going to start relying on A.I. for a bunch of critical functions where the technology simply isn’t as good as entrepreneurs claim. And we’re going to make those mistakes because all the macro incentives for entrepreneurs are to exaggerate what the technology can do, insist that it is on the verge of a breakthrough, and bully their way toward financial rewards. This is exactly what happened during the “Big Data” hype cycle. The people who got rich from that hype cycle are the same people propping up the Generative A.I. hype cycle. It won’t go better this time. The incentives all push in the direction of privatizing the rewards and socializing the risks.
I guess you could call this a "jagged frontier" if you want. But where Ethan Mollick looks at it and says "look at all the individual opportunities to get rich," I look at it and say "look at all the societal problems that are sure to be created if we leave entrepreneurs unattended at the wheel."
It sure would be great if our future-titans-of-industry devoted a few more classroom hours to thinking about the social consequences of McKinsey, though. At least then they would have a clue what the critics are warning people about.
(2) On the Claremont Institute at APSA: good riddance.
Robert Maranto wrote an opinion piece for The Hill earlier this month, titled “I’m a conservative. Is there still a place for me in the field of political science?” The tone of the piece is as whiny as the headline. And it takes him eleven paragraphs to get to the point: he wants APSA to bring back the Claremont Institute.
In 2021, activist threats of violence led the APSA annual conference, at the last minute, to move online its 10 panels involving the right-wing Claremont Institute, because of Claremont’s association with John Eastman. (…) Rather than hold its panels online, Claremont pulled out altogether, which seemed to be what some wanted.
That’s, uh, not even a little bit true.
The APSA council was not responding to “activist threats of violence.” It was responding to an open letter, signed by over 300 members of the professional association. The letter argued that, since APSA had passed a motion “strongly condemning” the events of January 6th, and since both Eastman and the Claremont Institute continued to proudly defend his role in those events, APSA should rescind its relationship to both Eastman and Claremont.
And I would know. I wrote the letter. Read it for yourself. I think it’s pretty damn persuasive. And even if you disagree, it sure as hell doesn’t constitute a “threat of violence.”
The APSA leadership did not directly respond to the letter. But they felt pressured. And so they decided to move Claremont's panels online, insisting the move was meant to avoid having members of the discipline show up and ask pointed questions. And then Claremont declared "WE'VE BEEN SILENCED" and canceled all of its panels. Claremont is no longer a Related Group (a status, by the way, that they had been actively using for fundraising purposes).
In other words: The polite, measured, reasonable pressure tactic worked just fine.
I detest Eastman’s views on the peaceful transfer of power, and more to the point he was simply wrong in many of the assertions that underpinned his arguments. But I also believe that scholars fight bad ideas of all political varieties with debate, not banishment, and certainly not through guilt by institutional association with those who hold erroneous or even detestable viewpoints.
Further, ending its longtime association with Claremont leaves APSA without any significant organized conservative presence, which is a travesty for representation. Notably, the Claremont panels were always among the best-attended at APSA annual meetings. Two years on, APSA should bring back Claremont, or something like it.
This, again, is just unbearably sloppy. The Federalist Society is still an APSA related group. So is the McConnell Center for Political Leadership. Claremont actively defended Eastman, put him on their panels, kept him on their Board, and left him in charge of their Center for Constitutional Jurisprudence. Holding an institution responsible for the political acts of its leadership team that the institution still publicly defends is not “guilt by institutional association.”
One thing we made clear in the open letter was that this wasn't about banning conservatives from APSA. It was about enforcing a line that the APSA council had itself already drawn, against supporters of the January 6th insurrection. Really, it's right there in the plain text of the letter: "Your statement of strong condemnation must apply to the Claremont Institute if it is to apply to anyone at all."
Now, two years later, Maranto is feeling lonely. He argues that APSA needs to bring back Claremont, "or something like it." Last month, Katherine Stewart wrote an excellent feature story about Claremont for The New Republic. They're insurrectionist wishcasters, dreaming about the post-democratic authoritarian future that a permanent Trump regime could bring about.
There is plenty of room for conservative political scientists at APSA. There always has been. There is no room for insurrectionist ideologues who want to overthrow electoral democracy. The problem for Maranto is that the Republican Party has become so hostile to conservative intellectuals who still support electoral democracy. But that is not APSA's problem to solve.
The next time Maranto wants to complain about how hard it is for a conservative political scientist these days, I hope he shows enough professional self-respect to at least get the basic sequence of events right.
(3) Pronatalism is still a nothingburger, part 3.
How are we still doing this?
The New York Times published yet another article warning about the dire, looming threat of global population collapse. ("The World's Population May Peak in Your Lifetime. What Happens Next?") I keep reading these pieces, waiting for a moment of clarity about what the threat is actually supposed to be. And, once again, I've got nothing. (previous pieces are here and here)
Today’s essay comes from an economist. They always seem to come from economists, or from Thiel-adjacent techies who spend too much time on rationalist discussion boards. And, to his credit, this economist at least is clear about the data. So here’s the crisis, as he sees it:
People, en masse, are choosing to have smaller families now. If current trends continue, then global population will peak at ~10 billion people around 2085. And, after that peak, the global population will decline.
This is bad because… there is a correlation between population growth and innovation. So, uh, there will be less innovation. (Also less economic growth. Won’t someone please think of the economic growth?)
Also, it "will mean tens of billions of lives not lived over the next few centuries." Which is a statement that the author chose not to unpack.
I just want to reiterate a few things:
Economists are terrible at long-term forecasting. Truly abysmal.
This is an imagined 22nd-century catastrophe. It is a problem that my potential grandchildren will face as adults.
This imagined crisis is based entirely on modeling assumptions. If, 30-50 years from now, people start to prefer slightly-larger families, then the entire crisis evaporates. Poof. It’s gone.
Unlike climate change, there is no destructive feedback loop. We needed to move earlier on climate, because every gigaton of carbon we pump into the atmosphere makes adaptation harder. Our inaction in the 90s/00s/10s forecloses the range of our potential futures. By comparison, our collective-baby-making-inaction today just creates some near-term problems of supporting an aging population. All of those problems are solvable.
It’s a pretend crisis. It seems like every time we get close to seriously addressing the climate emergency, a bunch of economists pop up to declare “no no what if we worried about this other thing instead.” The author of today’s NYT piece goes out of his way to avoid saying that directly. He also tries his best not to dip into the eugenicist tropes that always float just below the surface of these debates.
But still, can we please declare a moratorium on economists setting the public agenda with their shoddy long-term forecasting models? "If this mass behavioral trend that has fluctuated for decades remains stable for the next 150 years, society will be ruined. Kind of." Okay, thanks for the heads-up. I can guarantee you we will allot this the precise amount of policy attention that it deserves right now.
…That’s it from me. Thanks for letting me rant a bit. I’ll be back later this week with a proper tech-history piece, examining WIRED magazine’s startup years (1993-1998).