The Writers in the Machine: How AI Is Rewiring Our Relationship With Words

We've handed writing to machines that learned language by predicting what word comes next. The question isn't whether AI can write—it's what happens to us when we let it.

Phil Kunz
Writer and contributor

Last week, I watched a colleague spend forty-five minutes wrestling with ChatGPT to write a three-paragraph email. The AI kept suggesting phrases like "I hope this message finds you well" and "per our previous conversation," which my colleague kept deleting, rewriting, then asking the AI to try again. By the time they hit send, they'd written most of it themselves anyway. The whole scene felt like watching someone use a GPS to navigate their own living room.

This absurd dance plays out millions of times daily across offices, schools, and coffee shops worldwide. We've handed over one of humanity's most fundamental technologies—writing—to machines that learned language by ingesting the entire internet and predicting what word probably comes next. The question isn't whether AI can write. It obviously can. The question is what happens to us when we let it.

The Great Homogenization

Open any AI detection tool and paste in some ChatGPT prose. The telltale signs light up like a Christmas tree: sentences that begin with "Moreover" and "Furthermore," the compulsive use of "crucial" and "vital," the three-point lists that appear with clockwork regularity. AI writes like a middle manager who's swallowed a thesaurus—technically correct but utterly forgettable.

This isn't just an aesthetic problem. When everyone uses the same writing assistant trained on the same internet corpus, we get what researcher Emily Bender calls "synthetic text collapse." Cover letters start sounding identical. Academic papers develop the same cadence. Marketing copy blurs into an indistinguishable paste of "unlock your potential" and "seamless integration."

The homogenization runs deeper than style. AI writing tools don't just suggest words; they suggest thoughts. When you start typing "The benefits of..." and autocomplete offers "include increased productivity, enhanced efficiency, and improved outcomes," you're not just accepting a phrase. You're accepting a framework for thinking that's been averaged out across millions of documents. Your unique perspective gets sanded down to fit the median.

The Friction We Need

Writing has always been difficult. That's the point. The struggle to find the right word, the false starts, the crossed-out sentences—this friction isn't a bug to be optimized away. It's where thinking happens.

When I interviewed novelist Ted Chiang last year, he compared writing to long-distance running. "You don't use a car to help you run a marathon," he said. "The difficulty is the entire purpose." Every writer knows this feeling: you think you know what you want to say until you try to write it down. The act of writing reveals the gaps in your logic, the laziness in your thinking, the connections you hadn't seen.

AI removes this productive friction. Click a button, get paragraphs. But what exactly have you accomplished? You've generated text, sure, but have you clarified your thinking? Have you discovered what you actually believe? The efficiency gains come at the cost of the very cognitive work that makes writing valuable.

The Deception Problem

Here's an uncomfortable truth: most AI writing is a form of plagiarism we haven't figured out how to name yet. When a large language model generates text, it's performing a kind of sophisticated interpolation between millions of documents it's seen. It can't cite sources because it doesn't know where any specific idea came from—everything is dissolved in the statistical soup of its training data.

This creates a credibility crisis. When a student turns in an AI-written essay, who's the author? When a company publishes AI-generated reports, who's accountable for the claims? We're seeing the emergence of what Jaron Lanier calls "digital Maoism"—a world where authorship dissolves into an anonymous algorithmic collective.

The deception often isn't intentional. People genuinely believe they're "collaborating" with AI when they're really just laundering machine output through minimal human editing. They'll spend hours prompt-engineering the AI into writing something they could have written themselves in half the time, then call it their work because they pressed the buttons.

The Skill Atrophy

Perhaps the darkest timeline is one where we forget how to write entirely. It sounds hyperbolic until you realize it's already happening. Teachers report students who can't write a paragraph without AI assistance. Professionals who've leaned on ChatGPT for months find themselves struggling to compose emails unassisted. The neural pathways for constructing sentences and organizing thoughts start to atrophy like unused muscles.

We've seen this pattern before. GPS navigation made us worse at reading maps and remembering routes. Autocorrect degraded our spelling. Calculator apps weakened mental math. Each technology trade-off seemed reasonable in isolation—why waste cognitive resources on tasks machines do better? But writing isn't just a task. It's the operating system for human thought.

When we outsource writing to AI, we're not just losing a skill. We're losing a mode of thinking. Writing forces precision in a way that conversation doesn't. It demands linear logic, supporting evidence, careful construction. These cognitive operations don't just produce text—they produce clarity. Take away the writing, and you risk taking away the thinking itself.

The Path Forward

So is AI bad for writing? That's like asking whether fire is bad for cooking: it depends entirely on how you use it.

AI can be a powerful tool for breaking writer's block, generating initial drafts to react against, or handling genuinely mundane formatting tasks. Professional writers I know use it like a sparring partner—something to argue with, not obey. They generate AI text specifically to identify what they don't want to say.

The problem comes when we mistake the tool for the craft. Writing isn't typing, just as photography isn't clicking. The value lies not in producing text but in the thinking that text represents. AI can generate infinite words, but it can't have an insight, form an opinion, or make a connection that's never been made before.

The solution isn't to ban AI writing tools or pretend they don't exist. It's to be radically honest about what we're doing when we use them. If you're using AI to write your performance review, you're not "being efficient"—you're admitting you have nothing original to say about your own work. If you need ChatGPT to write your love letters, maybe examine what you're really trying to communicate.

We stand at a crossroads. Down one path lies a world where human writing becomes a luxury product—artisanal, authentic, expensive. Down the other, we maintain writing as a fundamental human capacity, using AI as a tool rather than a replacement. The choice isn't technological. It's cultural.

The real test won't be whether AI can write like humans. It's whether humans will remember why we write at all. Not to produce text, but to think. Not to fill pages, but to discover what we believe. Not to sound smart, but to become smarter through the struggle of finding the right words.

That colleague I mentioned? They eventually gave up on ChatGPT and wrote the email themselves. It took five minutes. "I knew what I wanted to say," they told me. "I just thought the AI would say it better."

They were wrong. The AI would have said it smoother, perhaps. More polished, definitely. But better? Better requires something no AI has: a human who knows what they mean and cares enough to say it precisely. That's not a bug in human writing. It's the only feature that matters.
