Why we’re all writing like shit on purpose now

So there I was, knee-deep in research for an article about how the grammar police have been beating us into submission for years, and I found myself reminiscing about the past, and how rigorous we all were in trying to write the perfect sentence. We were all taught by Victorian-era teachers with their rulers and their “proper English” and their obsession with split infinitives that would make a dominatrix blush. And once we joined ‘academia’, it turned us all into neurotic spell-checkers, terrified of dangling participles like they were contagious diseases, and the red pen became mightier than the sword at traumatizing generations of students into grammatical submission.

We have been so thoroughly programmed to write “correctly” that a misplaced comma feels like a mortal sin, mon Dieu! And yet here we are, after centuries of having perfect grammar beaten into our skulls, starting to write like we have been huffing paint thinner all morning… because of AI tools like Claude and ChatGPT.

I know that, personally, I don’t care about spelling nor grammar mistakes no more (sic).

Why?

Oh boy, get ready for a ride.

I’ve been reading up on humanity’s resistance to technology for a while, and let me tell you, we are about as consistent as a meth-addled hamster on a wheel when it comes to new tech. My recent client work has revealed some absolutely delicious ways we’re giving the middle finger to AI, specifically those fancy-pants Large Language Models that think they’re so forkin’ smart.

“But we can’t detect AI content” say the experts.

Bullshit.

I’d argue we absolutely can, at least to some degree, though I’ll admit more research needs doing on this before I completely lose my street cred. What I’m seeing are these beautiful little rebellious signals in the static of our collective cultural tantrum, like finding a perfectly preserved middle finger in archaeological ruins.

The thing is that humans are pattern-recognition machines in meat suits, and we’re all inherently storytellers – c’est la vie – it’s how we make sense of this absurd existence and manage to function as social units without completely murdering each other most days. From hundreds of conversations I’ve analyzed about LLMs recently, some consistent complaints keep popping up like particularly persistent acne.

The overuse of certain punctuation marks, for instance. The perfect spelling and immaculate grammar. Arguments that flow about as naturally as a constipated robot. There’s this weird monoculture to LLM output that just screams “I AM NOT HUMAN”.
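Just for fun, those signals are simple enough that you can sketch them as a toy scoring function. To be clear: this is my own illustrative doodle, not a real AI detector, and the signal weights are completely arbitrary – it just counts the “fancy punctuation” density and the fraction of suspiciously immaculate sentences described above.

```python
import re

def llm_ness_score(text: str) -> float:
    """Toy 0..1 score; higher = more 'machine-polished' signals.

    Purely illustrative, NOT a real detector. Signals loosely mirror
    the complaints above: heavy em-dash/semicolon use, and sentences
    that are all capitalized and cleanly punctuated.
    """
    # Split into rough sentences on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if not sentences:
        return 0.0

    # Signal 1: em-dashes and semicolons per sentence.
    fancy_punct = text.count("\u2014") + text.count(";")
    punct_signal = min(1.0, fancy_punct / len(sentences))

    # Signal 2: fraction of "immaculate" sentences:
    # capitalized start, clean terminal punctuation.
    immaculate = sum(1 for s in sentences if s[0].isupper() and s[-1] in ".!?")
    polish_signal = immaculate / len(sentences)

    # Arbitrary 50/50 blend of the two signals.
    return round(0.5 * punct_signal + 0.5 * polish_signal, 2)

human = "lol idk i just write stuff... no caps no commas who cares"
robot = ("Furthermore, this approach is elegant; it is robust. "
         "Moreover, it delivers value\u2014consistently.")

print(llm_ness_score(human))  # low
print(llm_ness_score(robot))  # high
```

Run it on a sloppy group-chat message versus a paragraph of default chatbot prose and the gap is comically obvious – which is exactly the point about the monoculture.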

It is uncanny valley territory, mes amis.

You know that creepy feeling when you see a robot or CGI character that’s almost human but something’s just… off?

Yeah, indeed, like watching Zucky try to act natural.

That’s what LLM text feels like.

As people discuss these quirks online and offline and in their group chats while high on whatever designer drugs the kids are into these days, we’re developing what the sociologist Émile Durkheim called a collective conscience – lemme mansplain this: we’re basically evolving a shared, unconscious “fark off” response to AI.

People are now intentionally butchering their writing: removing certain punctuation, leaving spelling mistakes like breadcrumbs for Hansel and Gretel, and twisting grammar into pretzels. That’s how we express our humanity now – by deliberately writing like we’re on a three-day bender.

This deliberate preservation of “flaws” becomes embodied resistance to disembodied intelligence – fancy words for “we’re being assholes on purpose”.


More rants after the commercial break:

  1. Comment, or share the article; that will really help spread the word 🙌
  2. Connect with me on LinkedIn 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. Visit the TechTonic Shifts blog, full of slop, and I know you will like it!

Why we resist new technologies like they’re veggies at Thanksgiving

I can’t tell you how our knuckle-dragging, stone-tool-wielding ancestors resisted newfangled stone tools – probably by hitting each other with the old ones – but we’ve got a pretty good sense of how we’ve been throwing tantrums about innovation since our hunter-gatherer days. New technologies are almost always seen as threats to moral order, social fabric, family structures, or whatever religious and cultural values we’re clinging to that week.

Change scares us more than a positive pregnancy test at seventeen.

We also fear physical alteration from technology, which is hilarious considering how many of us voluntarily pump ourselves full of caffeine, alcohol, and whatever else we can get our hands on at 3 AM. When telephone wires went up, people thought they’d spread disease faster than a kindergarten classroom in winter. “Bicycle face” was a thing amongst women (and the reason why cycling is still verboten in North Korea). “Nintendo thumb” plagued me, and all the other 90s kids, and now we worry LLMs will make us stupid – though honestly, have you seen Facebook, Medium, Twitter, or LinkedIn lately?

That ship has sailed, hit an iceberg, and sunk with all hands on board.

“But AI writing will destroy authentic human connection” they said. “Google will make us stupid” they said. “LLMs will turn our brains to mush” they say now.

I think that’s pretty frackin’ daft, personally.

This resistance to writing using AI is nothing more than culture’s immune system having a histamine reaction to something foreign and misunderstood – like my body’s response to vegetables or exercise or basic human decency before noon.

And naturellement, mes amis, we must mention the Luddites, those misunderstood bastards who weren’t against technology per se but against losing their jobs to machines while the government sided with rich industrialists and told workers to go fark themselves.

Now who wouldn’t rise up? I’d be throwing wooden shoes into machinery too if some steam-powered contraption threatened my ability to buy bread and ale.

We see new tech as creating “fake” versions of real human activities. Photography wasn’t art in the beginning. Recorded music wasn’t “real” performance. Now AI content is “AI Slop” – and it’s hurting brands in subtle, trust-eroding ways that make customers think “these friggin’ assholes don’t give a single solitary shit about me, they’re just trying to save a buck”.

When I see AI Slop, I know whoever posted it cares about me about as much as a mosquito cares about my personal wellbeing.


Just tell Large Language Models to Go Fork Themselves

So we’re reacting to LLMs this way for all the above reasons, and also because we’ve been fighting to express our humanity since we first looked at the stars and thought “what the hell is all that about then?” It’s why we’ve been asking “why” for millennia, usually while drunk or high or both.

I use LLMs regularly – I’m absolutely far from being some technophobic hermit living in a cave, surviving on berries and paranoia. I understand the technology, the limitations, the whole shebang. And as an analyst of human behavior related to tech, I try to step away from my biases, though that’s about as easy as staying sober at a wedding.

But the signals are there, weak but growing. Something peculiarly, beautifully, stupidly human is happening. We’re rebelling in the dumbest, most human way possible – by writing badly on purpose.

For most people, LLMs seem dazzling, intelligent, scary even. Like that smart kid in class who made me feel inadequate but probably ended up selling insurance or becoming a crypto bro. But in small, ridiculous ways, we’re already giving the machines the finger.

Watch out there, Mr. Terminator – you may not be back after all, especially if we keep writing like we’re texting while riding a mechanical bull after downing a bottle of absinthe.

The machines may be coming, but we’re meeting them with deliberately shitty grammar and enough spite to power a small city.

Vive la résistance, you beautiful, badly-writing bastards.

Signing off,

Marco

I build AI by day and warn about it by night. I call it job security. Let’s keep smashing delusions with truth. We are the chaos. We are the firewall. We are Big Tech’s PR nightmare.


Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google and LinkedIn appreciate your likes by making my articles available to more readers.

To keep you doomscrolling 👇

  1. The AI kill switch. A PR stunt or a real solution? | LinkedIn
  2. ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
  3. AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
  4. Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
  5. One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
  6. The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
  7. People are building Tarpits to trap and trick AI scrapers | LinkedIn
  8. The first written warning about AI doom dates back to 1863 | LinkedIn
  9. How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
  10. The dark visitors lurking in your digital shadows | LinkedIn
  11. Understanding AI hallucinations | LinkedIn
  12. Sam’s glow-in-the-dark ambition | LinkedIn
  13. The $95 million apology for Siri’s secret recordings | LinkedIn
  14. Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
  15. Devin the first “AI software engineer” is useless. | LinkedIn
  16. Self-replicating AI signals a dangerous new era | LinkedIn
  17. Bill says: only three jobs will survive | LinkedIn
  18. The AI forged in darkness | LinkedIn

Become an AI Expert!

Sign up to receive insider articles in your inbox, every week.

✔️ We scour 75+ sources daily

✔️ Read by CEOs, Scientists, Business Owners, and more

✔️ Join thousands of subscribers

✔️ No clickbait - 100% free

We don’t spam! Read our privacy policy for more info.
