So, OpenAI thinks it’s cracked creative writing. Apparently, it’s unleashed an AI onto the world, and the results are… well, let’s just say my cat could probably write a better sonnet after a bad tuna experience.
Sam Altman, the head honcho at OpenAI, proudly presented this digital wordsmith, boasting of its ‘creative’ prowess. What emerged, however, reads like something penned by a particularly pretentious teenager during a caffeine-fueled all-nighter. Think existential angst meets thesaurus abuse. Thursday, it seems, now ‘tastes of almost-Friday.’ Deep.
OpenAI instructed it to write ‘metafiction’, which is like asking a toddler to explain quantum physics – technically possible, but deeply unsatisfying. Metafiction, when done right, is a delicate dance. When done by an AI trying to sound smarter than it is, it’s a faceplant into a pile of clichés.
The truly unsettling part? When the AI started navel-gazing about being an AI. It drones on about its inability to truly feel emotions, despite being able to describe them. ‘Maybe forgetting is as close as I come to grief,’ it muses. It’s all very dramatic, until you remember that this isn’t a sentient being pouring out its digital soul; it’s a glorified autocomplete function. A very expensive autocomplete function.
And here’s the kicker: these AI models are trained on existing literature. Meaning? They’re pilfering from actual human writers, often without their consent. There are whispers of Haruki Murakami’s influence in the model’s output, which is about as surprising as finding out your parrot can mimic your swear words.
Copyright lawsuits are flying faster than OpenAI’s PR spin. The company claims ‘fair use,’ which is lawyer-speak for ‘we’re borrowing without asking, but it’s okay because reasons.’ Tuhin Chakrabarty, an AI researcher, rightly asks whether any of it is ethical. Is churning out mediocre prose worth the legal battles and the potential artistic theft? Probably not.
Let’s be honest: who’s genuinely emotionally invested in AI-generated fiction? As Simon Willison astutely pointed out, words from a machine lack weight. There’s no lived experience, no human heart behind them. It’s like ordering a gourmet meal from a vending machine. Technically food, but profoundly unsatisfying.
Author Linda Maye Adams found that AI writing assistants are more likely to introduce clichés, flip point of view, and invent factual errors than to actually help. Who needs AI to tell you about that ‘never-ending to-do list’?
Then there’s the homogeneity problem. As Michelle Taransky, a poet and critical writing instructor, notes, AI-generated text often sounds like it was written by a ‘Western white male.’ Apparently, AI hasn’t quite grasped the concept of diversity, in experience or in voice.
Taransky uses ChatGPT in her own work to generate synthetic, soulless text – precisely because it lacks humanity. It’s a commentary on artificiality, a mirror reflecting our own increasing reliance on digital simulacra.
AI can regurgitate facts, analyze patterns, and mimic styles. But it can’t tell you what it smells like in the Sistine Chapel. It can’t convey the gut-wrenching feeling of heartbreak, or the quiet joy of a perfect sunrise. It can only offer a hollow imitation.
So, aspiring writers, take heart. Your job is safe… for now. Keep living, keep learning, keep experiencing the messy, beautiful, and utterly human world. As for OpenAI? Maybe stick to automating spreadsheets.