“Accursed creator! Why did you form a monster so hideous
that even you turned from me in disgust?”
―Mary Shelley, Frankenstein
“You may ask yourself, ‘Am I right, or am I wrong?’
And you may say to yourself, ‘My God! What have I done?’”
—Talking Heads
The road to writing hell, it seems, is paved with good intentions.
I used to start off each semester of ENG101 with what I thought was a comforting speech. “Don’t think of this as a writing class,” I’d tell the students. “It’s a composition course, which simply means to ‘put things together.’ Nobody expects you to write poetry.” And then I’d show them a 14-step PowerPoint, much revised, beginning with this slide:
[Slide: a birdhouse]
“The birdhouse represents an Academic Paper,” I’d say. “Once built, it can be the home of ever-so-many birds: history, psychology, sociology, on to the end of recorded time. So be of good cheer, you see. I am going to give you the tools you need to build the thing.”
And then I got a good look at “the thing” in the form of a brief, AI-generated article on higher education in prison. My sense of outrage has always been on a hair trigger, and this pulled it. HEP is a topic that I’ve been writing about for years–laboriously, I might add–and I’m still in the throes of responding to MLA edits on a draft that I first submitted in January of 2020. I felt, as Annette Gordon-Reed says in the Prologue to On Juneteenth, a “twinge of possessiveness.” I certainly wasn’t going to let some lifeless upstart muscle in on my hard-fought territory.
Even more worrisome, though, is that the article isn’t as bad as I hoped it would be. I was expecting something like Google Translate, but the article is…fine. Grammatically correct, even. There are topic sentences, examples, signal phrases, APA citations: all the things I teach in 101. The Conclusion’s not bad, either. Yes, the topic sentences become laborious, sources are not analyzed in any detail, and the transitions are annoying (far too many “other hands”), but it gets the job done. Like The Simpsons’ Chorus Line of Mediocre Presidents, it’s perfectly adequate.
So, what’s the problem? Well, the article feels empty, desolate. John Koenig, in his Dictionary of Obscure Sorrows, coined a word for just such a feeling: “Kenopsia: The Eeriness of Places Left Behind.”
Where is everybody: the situational awareness, the struggle with English, the awkward stab at academic discourse, the touching yet worrisome personal revelation, the effective fragment, the happy accident, the student writer?
Where be your gibes now? your gambols? your songs? your flashes of merriment, that were wont to set the classroom on a roar?
By pushing the birdhouse, in a good-faith effort to be comforting, I had mistaken the birds. They were not–are not–“history, psychology, sociology.” The birds are the students themselves. The AI article, though, is empty. It’s birdhouses all the way down, and after reading it I couldn’t help but wonder: who’s responsible for this? Then it hit me—
I am.
Or, rather, we are: anyone who ever told a student not to use “I” in a paper because “It’s not about you.” Anyone who ever told a student that knowing how to use a comma is the sine qua non of writing. Anyone who ever sat brooding, waiting for the opportune moment to take down a student for not citing correctly. “The writing center stinks of fear,” according to Kurt Schick, in an article at the Chronicle of Higher Education. “Professors’ overattention to flawless citation (or grammar) creates predictable results. Students expend a disproportionate amount of precious time and attention trying to avoid making mistakes. Soon, they also begin to associate ‘good’ writing with mechanically following rules rather than developing good ideas.”
In the first post in this series, I likened AI to the mechanistic nature of information theory, which is concerned only with transmitting a message, not with its meaning. Likewise, in a recent MIT Technology Review article on the short, happy life of Galactica, Meta’s “large language model,” Chirag Shah of the University of Washington says, “Language models are not really knowledgeable beyond their ability to capture patterns of strings of words and spit them out in a probabilistic manner. It gives a false sense of intelligence.”
Sounds a bit like English class, when you think about it.
*****