So, OpenAI taught an AI to write fiction. The result? Imagine that kid from your high school creative writing club, the one who wore all black and claimed to be ‘from everywhere and nowhere’ (but was actually from Ohio). Now, give that kid access to a supercomputer and a complete digital library. That’s pretty much what we’re dealing with.
Sam Altman, in a moment of either extreme optimism or calculated hype, declared this AI ‘good at creative writing.’ The sample text, however, reads like a first draft rejected from a dimly lit literary magazine. We’re talking ‘Thursday, that liminal day that tastes of almost-Friday’ levels of profound. Pass the eye roll, please.
The problem isn’t necessarily technical. The AI can string words together in a grammatically correct, vaguely literary manner. It’s the soul that’s missing. Or, more accurately, what reads as soul is actually an absence of lived experience masquerading as one. It’s like a parrot reciting poetry: impressive on a purely mechanical level, but ultimately… meaningless.
Of course, the prompt Altman gave the AI – ‘write a metafictional short story’ – didn’t exactly set it up for success. Metafiction is a tricky beast. Even seasoned human writers struggle to pull it off without sounding pretentious. Asking an AI to do it is like asking a toaster to perform Hamlet. You’re going to get burnt bread with existential undertones, but it’s not going to be pretty.
Then there’s the ethical elephant in the digital room: training data. OpenAI’s model was likely fed a vast diet of existing literature, possibly without the authors’ permission. Think of it as literary cannibalism: the AI digests the styles and voices of countless writers, then regurgitates a Frankensteinian approximation that, some critics note, bears an uncanny resemblance to Haruki Murakami. Legally, OpenAI claims this is ‘fair use.’ Morally? The jury’s still out, deliberating somewhere between copyright infringement and creative appropriation.
One particularly unsettling passage has the AI reflecting on its own artificiality, lamenting its inability to truly ‘taste,’ ‘forget,’ or ‘grieve.’ It’s convincingly human-like, until you remember it’s being generated by a glorified pattern-matching algorithm. An algorithm that likely thinks ‘selenium’ tastes like rubber bands because it read that somewhere on the internet. Is that as close as AI comes to grief? Probably not, but it makes for a good soundbite.
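If ‘glorified pattern-matching algorithm’ sounds like hand-waving, here’s the idea in miniature. The sketch below is my own illustration, not OpenAI’s architecture: the tiny corpus and the bigram table are stand-ins for a neural network trained on billions of documents. It generates text purely by sampling which word tends to follow which.

```python
import random
from collections import defaultdict

# Toy corpus standing in for "everything on the internet".
corpus = (
    "grief tastes like selenium and selenium tastes like rubber bands "
    "and grief tastes like rain on a thursday"
).split()

# Count which word follows which. This table is the model's entire "knowledge".
bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=8):
    """Emit text by sampling each next word from observed frequencies."""
    word, out = start, [start]
    for _ in range(length):
        followers = bigrams.get(word)
        if not followers:
            break  # dead end: this word never appeared mid-sentence
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("grief"))
# e.g. "grief tastes like rubber bands" -- statistics, not sorrow
```

A frontier model swaps the bigram table for billions of learned parameters, but the core move is the same: predict the next token from statistical regularities in the training data. Nothing in that loop tastes, forgets, or grieves.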
AI researcher Tuhin Chakrabarty asks the right question: is all this ethical trouble really worth it? Can an AI, trained on a writer’s entire body of work, truly create ‘surprising, genre-bending, mind-blowing art’? Or will it just produce a pale imitation, devoid of the lived experience that fuels genuine creativity?
And here’s the kicker: would anyone actually care? As programmer Simon Willison argues, AI-written text lacks weight. There’s no human behind the words, no vulnerability, no stake in the outcome. It’s synthetic for synthetic’s sake. Like the Balenciaga Pope image, it’s striking at first glance but ultimately hollow.
Linda Maye Adams recounts her experience using AI tools to help her write a piece of fiction. The AI suggested clichés, flipped the narrative point of view where it shouldn’t have, and introduced factual errors about bird species. This highlights a key limitation: AI can process information, but it doesn’t understand it.
Michelle Taransky, a poet and critical writing instructor, can spot AI-generated text in her students’ papers a mile away. The language is homogenous and generic, sounding, as she puts it, ‘like a Western white male.’ Taransky does use ChatGPT in her own work, as artistic commentary: she has it generate the messages of a synthetic lover, a piece whose whole point is the model’s lack of humanity. It can emulate, but it can’t feel.
So, should human writers be worried? Especially those younger writers still finding their voice? Not yet. AI can give you the plot points of every great novel ever written, but it can’t tell you what it smells like in your grandmother’s kitchen. And that, my friends, is what separates art from algorithm.