Bullet Points: On the social trajectory of AI
(AI and word processing, AI and higher education, Sam Altman wants regulation so long as he can pick the regulators.)
I’m traveling for work this week and next. In lieu of the usual essay, I’m just going to share a few shorter observations and reading recommendations:
(1) Microsoft is rolling out an AI personal assistant, “Windows Copilot.” Next-generation Clippy is coming, folks. (Clippy-but-awesome!) It’s 2023, climate change is on track to render much of the planet uninhabitable, and the best minds of my generation are focused on… breakthroughs in PowerPoint presentations!
Okay, snarky aside out of the way: One thing I’ve been pondering is how generative AI tools might change writing — not in apocalyptic terms, but on the personal level. (I expect generative AI will be bad for writers as a professional class, because capitalism. But that’s a different level of analysis; I’ll dig into it some other time.)
My hunch is that generative AI is eventually going to change how we write in much the same way that word processors changed how we write. It’s worth reading James Fallows’s 1982 essay “Living with a Computer” in this light. (Fallows also has a very good Substack, btw.)
I don’t think I would be capable of writing on a typewriter. I mean, technically I could — a keyboard is just a keyboard, after all. I could figure out the carriage return and buy a bottle of whiteout or whatever. But I would be completely flummoxed at the prospect of writing anything longer than a few paragraphs on a typewriter. My brain isn’t built for that.
I do not have an efficient writing routine. Usually these Substack essays begin as a hasty outline that I thumb-type on my phone and email to myself. Some of those outlines immediately become the writing project of the week. Others languish in my inbox for weeks/months/a year, until I have the bandwidth to revisit them. Once I sit down to write the thing, I fumble repeatedly with the introduction. There’s a lot of muttering involved. I find I am mostly incapable of writing section 2 until section 1 is at least passable. And the same goes for sections 3, 4, etc. The initial, hasty sketch of an outline rarely survives contact with actual words on the (digital) page.
For bigger pieces, I’ll spend days constantly fiddling. I’m either writing the next passage or thinking about the next passage, thumb-typing passages or key ideas and emailing them to myself. (Right now, about 20% of my inbox is notes-to-self regarding the second essay in my MoveOn Effect at 10 series. I really need to buckle down and finish the damn thing.) And then, once I finally have all the pieces assembled, I read through it to see which ideas need to be trimmed, expanded, or rearranged.
Mind you, I’m not suggesting this is a good system. No one taught me this haphazard method. I didn’t study the craft of writing in school. I’ve read a few books on the topic, but mostly this has been a multi-decade make-things-up-as-you-go process. But I find that it works, and that’s enough.
This process simply would not function with a typewriter or pen and paper. The outlining, the drafting, the editing, the constant false starts and dead ends? I wouldn’t have the slightest idea how to proceed.
Which brings me back to Fallows’s 1982 essay. Notice the joy he takes in describing his initial encounter with word processing technology:
What was so exciting? Merely the elimination of all drudgery, except for the fundamental drudgery of figuring out what to say, from the business of writing. The process works this way.
When I sit down to write a letter or start the first draft of an article, I simply type on the keyboard and the words appear on the screen. For six months, I found it awkward to compose first drafts on the computer. Now I can hardly do it any other way. It is faster to type this way than with a normal typewriter, because you don't need to stop at the end of the line for a carriage return (the computer automatically "wraps" the words onto the next line when you reach the right-hand margin), and you never come to the end of the page, because the material on the screen keeps sliding up to make room for each new line. It is also more satisfying to the soul, because each maimed and misconceived passage can be made to vanish instantly, by the word or by the paragraph, leaving a pristine green field on which to make the next attempt.
I wonder if I would be a more efficient writer if the only tools available had been a typewriter, pen, and paper. My writing tools are very forgiving of my sloppiness, and so I have stitched together a haphazard writing routine that ultimately works well enough. With older writing tools, I either would have developed tighter writing habits or would have given up altogether.
How will AI writing assistants change our process? Will they “eliminate all drudgery” and let us focus on the original ideas, or will they just make a mess? I don’t plan on using generative AI as a writing assistant anytime soon. I honestly have trouble seeing the appeal. But this feels like one of those unknown-unknowns that will be interesting to watch as it unfolds.
At the societal level, I feel quite confident that we’re heading for a terrible mess. But on the individual level, I’m far less certain, and find myself quite curious.
Anyway, I recommend checking out the Fallows piece. It’s a nice little portal back to an earlier time.
(2) Ian Bogost’s latest Atlantic essay reflects on ChatGPT’s first academic year. It’s great. You should read it. One of the themes that he explores is the nature of the status quo ante that ChatGPT is poised to “disrupt.”
There’s a genre of academics-writing-about-ChatGPT that I have quickly become annoyed by. It’s a mix of “this is the end of everything!” and “no no, this will free us to develop bespoke pedagogical tools that enrich the learning experience for our students.” And those essays are pretty much universally written by tenured professors at elite universities. They rarely grapple with just how tiny their portion of the educational experience actually is.
And look, sure, I’m also a tenured professor at an elite university. But I had the benefit/curse of spending a couple of years as Associate Director of GWU’s School of Media and Public Affairs (department chair, basically). Which meant I had to cobble together the schedule of classes for the semester. And hire adjuncts to teach many of those classes. And apologetically answer the question “how much will the university pay me for teaching this class?”
Something like 75% of undergraduate classes are taught by adjunct faculty. Some of those adjuncts are working professionals, teaching classes in their field of expertise on the side. Most of them went through the same doctoral training I did, but didn’t strike gold in the tenure track job-lottery. So now they are teaching 4-5 classes per semester for low wages, limited health benefits, and tenuous job security. It sucks, but it saves money for the university, so it has been rising inexorably for decades, like carbon in the atmosphere.
It’s all well and good to share tips on how Professors can completely rewrite their syllabi and overhaul their classroom exercises to take advantage of the opportunities offered by Generative AI. At the individual level of analysis, that’s no different than my pondering what Clippy-but-awesome will mean for writing practices. But I haven’t read nearly enough discussion of how these tools will interact with the actually-existing state of higher education today.
The United States has spent decades reducing public funding for higher education. Most classes are taught by non-tenure-track professors at varying levels of precarity. The response to ChatGPT is going to be a total mess in most places, because the existing system was already so rickety to begin with.
Bogost’s piece only addresses these trends adjacently, but in so doing he provides a much richer portrait of the problems and dilemmas we face in actually-existing higher ed today. It’s a thoughtful, thorough bit of reflection on how we handle and (mis)manage a new wave of consumer technology. Definitely one of the better pieces I’ve read this week.
(3) Sam Altman went to Congress last week, and everyone just loved him.
One of the reasons why I don’t trust Sam Altman is that he has just been a little too perfect in how he has framed his company. It has a strong whiff of Sam Bankman-Fried circa 2021. Altman is the good one, patiently explaining the opportunities and threats posed by his company’s technology, and proactively calling for responsible regulation (that just happens to support his business and constrain his competitors). And my lord how the big tech journalists and elected officials are eating it up.
One of the big questions being debated right now is whether the U.S. ought to develop a new agency charged with regulating AI or whether AI should be regulated under existing frameworks. And my (admittedly undercooked) reaction so far is “well, it kind of depends on who the regulators would be.”
Like, an AI regulatory body composed of people who work for/work with Sam Altman is going to do little to address the actual harms caused by the misapplication of these technologies (aka failure mode 2). We would be much better off just empowering FTC chair Lina Khan to do impressive Lina Khan things. But the FTC already has its hands full. Expanding regulatory capacity is a good thing, so long as the regulators aren’t captured. (Also, it sounds like Altman took one look at the EU’s proposed AI regulations and immediately declared “screw you guys, I’m going home.” Regulation is essential, says OpenAI, so long as OpenAI gets to pick the regulations and the regulators…)
There was a moment in the hearing when Altman proposed setting up a regulatory body and was basically asked if he would be interested in chairing it. I spit out my coffee when I heard that. We’re eventually going to look back on this moment as a major misstep.
It reminds me of a piece I wrote back in 2018, about “Life under responsible information monopolies.” The crux of that argument was that “a nervous monopoly is a better monopoly.” I felt, in the midst of the Trump years, that our best near-term hope lay in Google and Facebook executives being nervous enough to be (relatively) well-behaved.
OpenAI is not yet an information monopoly. But the company has been supremely effective at making its products synonymous with AI in the same way that Google is synonymous with search. We would be better off if Sam Altman were given cause to be more professionally nervous.
His charm offensive is working too well. It is not charming. It is offensive.
That’s all for the week. Next week will probably be too hectic to post anything. Don’t let anything interesting happen in the meantime, alright?
(And hey, Biden, mint the damn coin already!)