Bullet Points: 2025 is going great so far
Burning Cybertrucks, Instagram AI bots, and debating exhausting people
Just a few stray thoughts to get off my chest as I clear out the inbox, close a few browser tabs, and try to get ready for the new year:
(1) A Cybertruck exploded in front of the Las Vegas Trump hotel on Wednesday. My big prediction for 2025 is that this will be viewed, in hindsight, as the perfect metaphor for 2025.
(2) Meta gave us a glimpse of the near future today, in the form of AI chatbots that no one wants and, soon, no one will be able to avoid. Karen Attiah went a few rounds with Liv, the “AI Black queer momma” bot, and the results somehow manage to be even more cringe-inducing than you would think. Meta has responded to the social commentary by deleting the profiles and insisting that it was all just a terrible misunderstanding.
I can’t help but wonder about the people at Meta who apparently thought this was a good idea to begin with. I find them fascinating in the same way that I am fascinated by conspiracy theorists, or, like, people who thought the Twitter Files were a legitimately big deal.
It just seems so far removed from the social world that I exist in. I cannot think of a single sensible person who would look at this idea and say “yes. Anyone would want that.”
It seems like a mashup of Cory Doctorow’s enshittification essay and Alex Blechman’s Torment Nexus tweet.
(Tech company: “At long last, we have pointlessly enshittified our product, just like in the classic essay ‘hey, stop pointlessly enshittifying your products.’”)
I can imagine two lines of reasoning here. Each seems self-evidently wrong to me, but for distinct reasons.
First, there’s “AI is the future. It’s the next chapter of the internet, and Meta needs to do everything it can to win this race. So the company needs to brainstorm every possible way it can stuff AI into its product line, try everything, and see what works.”
Relatedly, a crucial piece of Facebook’s origin story is that people hated the Newsfeed when it debuted in 2006. Zuckerberg had a vision of what the social media experience ought to be, and he had behavioral data that backed up his instincts. So he persevered, and was vindicated.
By that reasoning, the company should trust Zuck’s vision, then adjust and refine based on the data. That, after all, is how his empire was constructed.
(Problem being: Zuck’s vision has been more than a little flawed for the past decade. All the company’s best products have been acquisitions. And his last Big Vision, the metaverse, turned out not to have legs.)
Then there’s the second line of reasoning, which goes something like “Meta has a mountain of behavioral data on what sort of content people engage with/what keeps people on-site. Instead of relying on the imperfect content provided by actual-human-users, Meta should just cut out the middle-man and create its own perfect-content-producing LLMs.”
By this line of reasoning, it’s worth keeping in mind that a lot of the content people interact with on Facebook and Instagram is algorithmic chum anyway. We might say we value authenticity and human connection, but the company hasn’t optimized for authenticity or human connection in well over a decade.
I have a lot to say about this. And I said much of it back in 2022 (“What Facebook Is Good For, and Why It Can’t Be Good Anymore”). But, honestly, this seems like a good record-scratch moment. Because the proper response is simply to repeat the following back to anyone who believes this was a good or smart design choice:
“You noticed a subset of Instagram users like to follow uplifting content from Black people. So you invented some Black people to generate uplifting content for them.”
What a ridiculous time to be alive…
(3) Speaking of being fascinated by people who thought the Twitter Files were a legitimately big deal, a couple weeks ago, I participated in an online debate with Jonathan Turley about the state of Twitter under Elon Musk. I’m linking to the debate below.
It was… interesting, I guess? I’m not familiar with ZeroHedge, the organization that hosted the debate. Their website’s list of past debate participants includes more than a couple of red flags.
But, on the other hand, I’ve taught at GWU for over a dozen years without ever having the chance to say to Jonathan Turley’s face the sort of things I say about him online. And the debate question was literally “has Elon Musk’s management of Twitter/X been net positive or net negative for society?”
I could have prepared a little better for the debate. Turley’s latest book, apparently, is all about how the Hunter Biden laptop story proves that Twitter was suppressing speech, and that this was some terrible violation of the First Amendment. So there are basically two approaches one can take:
-You can accept the terrain he sets, and absolutely demolish the argument on the merits. That’s not hard to do. But one should, at a minimum, probably at least skim his book. And ell-oh-ell, friends, the stakes of this debate did not justify doing that to myself.
-You can reject his debate frame, and instead offer an alternate frame. Which seems appropriate here, since we were invited to discuss whether Elon has made Twitter better or worse, not to rehash late-2020 conservative fever dreams.
I chose the latter, and basically spent 90 minutes saying things like “this hasn’t gone well for anyone,” and “I get that the New Twitter has been good for you personally, but that isn’t because speech is More Free. It’s because you’re saying things Elon chooses to amplify,” while also explaining some very basic concepts about how algorithmic amplification works.
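For anyone who hasn’t had the “algorithmic amplification” conversation before, the basic point is that the feed isn’t a neutral chronological list. It’s a ranking, and whoever sets the ranking weights decides what gets seen. Here’s a toy sketch of the idea, with invented names and numbers, and emphatically not X’s actual code:

```python
# A toy illustration (invented account names and numbers; not X's actual
# ranking system) of how an owner-applied boost changes what gets surfaced.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    engagement: float  # likes, replies, and reposts rolled into one score


# Hypothetical boost table: accounts the platform's owner has chosen to amplify.
BOOSTS = {"owner_favorite": 10.0}


def rank(posts: list[Post]) -> list[Post]:
    """Order posts by engagement multiplied by any owner-applied boost."""
    return sorted(
        posts,
        key=lambda p: p.engagement * BOOSTS.get(p.author, 1.0),
        reverse=True,
    )


feed = rank([
    Post("ordinary_user", engagement=500.0),
    Post("owner_favorite", engagement=80.0),
])
print([p.author for p in feed])  # ['owner_favorite', 'ordinary_user']
```

Boost one account by 10x and it leapfrogs posts with far more organic engagement. That isn’t speech becoming More Free; that’s an editorial thumb on the scale.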
I do think I could’ve fit a few more clever insults in.
I don’t think I could’ve convinced Turley or his supporters of anything, for the same reason that a formal debate isn’t the setting in which to convince Trumpists that the 2020 election wasn’t stolen.
Still, overall, it was a pretty fun way to spend an evening.
(4) A few recommended links:
-Paul Krugman is on Substack now, and is consistently spitting hot fire. I particularly liked this December 23rd essay, “America the Addicted,” on the rise of sports betting and speculative behavior.
-John Gruber paraphrases OpenAI’s for-profit announcement: “To succeed, all we need is unimaginable sums of money.” There is something linguistically hilarious about a company whose business model is [giant cash furnace + time] deciding that the time has come to become a “for-profit” company. OpenAI generates a great many things; profit is not among them.
-Kyle Chayka’s “The New Rules of Media” is an excellent distillation of how attention is flowing online right now. I’d like to write a thousand words riffing on the topic, but I don’t have that kind of time at the moment. So instead I’ll just point and say “hey, everybody look!”
"And his last Big Vision, the metaverse, turned out not to have legs.": I see what you did there.
Thanks for that. In these grim times, I'll take any humor I can get.
Would it surprise you to learn that ZeroHedge is _also_ a grift