The Sloppiverse is here. What are the consequences for how we write and speak?

I’ve been digging through the data on AI content saturation for some time, and I tell you, the numbers are so wild that I had to double-check them with three different sources. But they are real, my friends, and they are spectacular in the most terrifying way possible.

I wrote about this trend back in September last year (read: 57% of the internet is AI generated and causes model collapse | LinkedIn), but I can tell you that, based on the research, a LOT more than 57 percent of everything you read online is machine-generated horseshit.

But how much exactly? Read on and be in awe.

But it isn’t the sheer amount of slop that keeps me up at night; content pollution is the obvious worry. This is about how we are unconsciously rewiring our own brains to think, write, and speak like machines.

Every time a kid reads AI-generated text, every time we consume synthetic content without knowing it, we’re training ourselves to communicate in that same bland, algorithmic style.

The question that kept me up one night was: what does that mean for language in the near future?

Let this sink in.

[Infographic: key statistics on AI content saturation, including percentages of AI-generated content in Google search results, web-based text, AI-generated images, and deepfake fraud increases.]
Hahaaaa, like y’all are ‘execs’ 😂

More rants after the commercial break:

  1. Comment on or share the article; that will really help spread the word 🙌
  2. Connect with me on LinkedIn 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. Visit the TechTonic Shifts blog, full of slop – I know you will like it!

The numbers that will make you question reality

The headline figure alone should make you spit out your coffee: we have gone from a measly 2.3% AI content in Google search results before GPT-2’s release to a mind-boggling 10% as of June 2024, and by January 2025 it hit close to 20%. Google’s algorithm is basically swimming in synthetic content at this point. [read tomorrow’s article for the reason why this is happening, or if you want to jump the queue, read it here]

Amazon’s research team published an analysis last year stating that 57% of all web-based text is now AI-generated or machine-translated. Mind you, this ain’t just blog posts about “10 ways to optimize your morning routine” – this is the fundamental composition of the internet itself getting hijacked by algorithms.

[Infographic: AI-generated text saturation in Google search results, 2022 to June 2025 – peak saturation of 19.1% in January 2025, a recent decline to 16.5%, and 57% of all web text machine-generated or translated.]

The visual carnage is even more insane.

Since 2022 (the year ChatGPT arrived), we have witnessed the creation of over 15 billion AI-generated images. To put that in perspective, photographers needed roughly 150 years, starting from the first photograph in 1826, to reach that same milestone. Stable Diffusion alone cranked out 12.59 billion images – 80% of the total – and Adobe Firefly hit 1 billion in just 3 months.

[Infographic: the AI image generation explosion – over 15 billion AI-generated images flooding the internet since 2022.]

* Pro tip: I generated the visuals for this research with Genspark, then exported them to PowerPoint slides.


The audio nightmare

The deepfake situation is so fucked up that it makes my rather dystopian blog posts look optimistic. Deepfake videos quadrupled from 2023 to 2024, with 95,000–100,000 of them already floating around online by 2023.

That’s a 550% increase since 2019, for those keeping score at home, while reading this tear-jerking post!
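For those keeping score with a calculator, those headline numbers are at least internally consistent. Here’s a back-of-envelope check (assuming, as these reports usually mean, that a “550% increase” makes the 2023 count 6.5× the 2019 baseline):

```python
# Back-of-envelope: what 2019 baseline do the headline numbers imply?
# Assumption: a "550% increase" means final = baseline * (1 + 5.5).
count_2023_low, count_2023_high = 95_000, 100_000
growth_factor = 1 + 5.50  # 550% increase since 2019

baseline_low = count_2023_low / growth_factor    # ~14,615
baseline_high = count_2023_high / growth_factor  # ~15,385

print(f"Implied 2019 baseline: {baseline_low:,.0f} to {baseline_high:,.0f} videos")

# And the claimed 4x jump from 2023 to 2024:
count_2024_low = count_2023_low * 4
print(f"Implied 2024 count: at least {count_2024_low:,.0f} videos")
```

That implied 2019 baseline of roughly 14,600–15,400 videos lines up with the ~14,700 deepfakes the widely cited 2019 Deeptrace report actually counted, so the rant’s numbers at least agree with each other.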

Voice cloning has become so trivial that 3 seconds of audio can produce an 85% voice match. Lemme mansplain this: 70% of people can’t tell the difference between real and cloned voices anymore. We now live in a world where your grandmother’s voice can and will be stolen and used to scam the people you love.

Deepfake fraud exploded with a 10x increase globally from 2022 to 2023, and North America saw a 1,740% surge. CEO fraud now targets 400+ companies daily, with some losing up to 10% of their annual profits to successful attacks*.

[Graph: the rise of deepfake videos from 2019 to 2023, with statistics on deepfake fraud attempts and voice recognition failure.]

* Criminals grab three seconds of executive audio from earnings calls, cook up an 85% accurate voice clone, and bamboozle employees into wiring millions. Arup got suckered for $25 million in one fake video call. It went like this: “Melissa, we need to wire $3.2 million to a new vendor in Singapore by end of day,” the fake CEO growls. “This is high priority. We’re under NDA. Don’t loop anyone in. I’m boarding a plane. Just do it.”

The thing is that only 5% of companies have protocols for synthetic content fraud, and about 80% lack any deepfake response procedures whatsoever. The detection challenge is equally laughable: only 1 in 4 free AI detection tools could identify the Biden robocall deepfake that made headlines.

[Embedded video: the Biden robocall deepfake]

The Vtuber and Spotify band invasion

What I’m seeing is an influx of AI-generated vlogs, Vtubers, and even entire Spotify bands using AI voices and instruments. I wrote about it a week ago – from the Vtuber called “Bloo”, who rakes in 7 figures with his AI-generated vlog, to the fully AI-generated Spotify group “The Velvet Sundown”, who pull 1.3M listeners a month. Read: The rise of slopfluencers and the fall of, umm. . . like, everything else | LinkedIn

[Screenshot: Spotify artist profile for The Velvet Sundown, showing their top tracks and monthly listeners.]

These aren’t isolated incidents, and I see them becoming the norm. And the thing is that our kids are growing up consuming content created entirely by machines, and they don’t even know it.

[Infographic: the impact of AI-generated music and ghost artists on streaming platforms, with fraud cases and statistics on AI music uploads.]

The death of personality

Here’s my theory, and I truly think this will happen, and it’s gonna sting...

As we consume more AI-generated content – and our kids have been doing this from the day they could watch videos – we are unconsciously adopting AI writing patterns. Since AI language represents the “average” of human language anyway, we’re all gravitating toward the same bland, middle-of-the-road communication style.

[Graph: distribution of language learning and speaking patterns across groups with different reading and speaking backgrounds.]

I wrote about this phenomenon before, but now we’re seeing exponential growth of this kind of content. Youngsters are being bombarded with AI-style language patterns daily. They’re learning to write from machines that learned to write from us, creating a feedback loop that’s making everyone sound like a corporate chatbot.

The result is going to be fewer outliers (i.e. people who genuinely ‘create’), less creativity, and a homogenization of human expression.

We are voluntarily lobotomizing our own linguistic diversity.


The two-year timeline of destruction

Let me walk you through how we got here so fast:

2022: The Genesis

  • Pre-GPT era sitting pretty at 2.3% AI content
  • August brought us Stable Diffusion and the image generation boom
  • First deepfake incidents started popping up like digital herpes
  • November 2022 – OpenAI launched ChatGPT (though it was already accessible to the average Schmo who was on the waitlist in November 2021!)

2023: The Acceleration

  • Google search AI content jumped to 8.48% by December (guess what caused that?)
  • Machine translation became as common as the bad morning coffee I drink
  • Deepfake fraud went completely bananas with that 1,740% North American increase
  • Voice cloning technology got democratized

2024: The Saturation

  • AI slop sites increased by 717% according to Digiday
  • VC investment in voice AI rocketed from $315M to $2.1B
  • Deepfake detection efforts quadrupled (and still couldn’t keep up)

2025: The Peak and the Slight Decline

  • January peaked at 19.1% AI content in search results
  • June dropped to 16.5%, possibly due to improved detection
  • But don’t celebrate yet – Europol projects 90% of online content will be synthetic by 2026

[Infographic: growth of AI-generated text in web content from 2022 to projected 2026 figures – machine-generated vs. human-written text, growth trends, and model collapse risks.]

The model collapse ‘catastrophe’ won’t save us from slop

Amazon’s research revealed we’re entering what scientists call “model collapse” territory. With 57% of web content being machine-generated or translated, future AI systems will increasingly train on AI-generated data. It’s like photocopying a photocopy of a photocopy until you can’t read the original text anymore.
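And because I can’t resist: here’s a tiny toy simulation (my own sketch, nothing to do with Amazon’s actual methodology) of why the photocopy analogy holds. Each generation, the “model” can only reproduce words it saw in its training data, so rare words silently die out and can never come back:

```python
import random
from collections import Counter

random.seed(0)

# Generation 0: "human" text drawn from a vocabulary of 100 distinct words.
vocab = [f"word{i}" for i in range(100)]
corpus = [random.choice(vocab) for _ in range(100)]

diversity = [len(set(corpus))]

# Each generation trains on the previous generation's output: it can only
# emit words it saw, weighted by how often it saw them. Any word that
# misses the cut once is gone forever -- the photocopy loses detail.
for generation in range(30):
    counts = Counter(corpus)
    words = list(counts.keys())
    weights = list(counts.values())
    corpus = random.choices(words, weights=weights, k=100)
    diversity.append(len(set(corpus)))

print(f"distinct words, generation 0:  {diversity[0]}")
print(f"distinct words, generation 30: {diversity[-1]}")
```

Run it and watch the distinct-word count crater generation after generation. Real model collapse is subtler than this, but the mechanism – finite sampling plus training on your own output – is the same.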

But alas, if you think that model collapse will save us from slop, you are wrong, my friend, because there’s a new category of models out there that can be trained without scraping the festering content swamp we call the internet. Models that say, “No thanks, I don’t want your SEO sludge, nor your LinkedIn ‘influencer’ posts, all written by robots regurgitating other robots.”

Something like this:

🤢 Gaaarrrrggglllll 🤮

Now, here’s how that will land, Schmo...

We are spiraling into that model collapse territory right now. The internet has basically been turned into a synthetic echo chamber, and every new model trained on it just copies a copy of a copy until everything sounds like an AI trying to sound human and failing just enough... to make your skin crawl.

But there are new models in development that don’t slurp down polluted public content at all. They don’t touch Reddit. They ghost Wikipedia. They don’t even flirt with Medium blogs. These models are being trained entirely on synthetic, curated, or private datasets.

They’re controlled, clean, and isolated. Some are even fed simulation data, while others are trained via reinforcement learning in artificial environments. Others don’t rely on language at all but learn through goal completion and feedback loops.

The hope is that these models don’t fall into the collapse spiral, don’t absorb the slop, don’t normalize the bland. That they evolve from structure, not scrapings. That they’ll reason instead of predict.

Could this be our salvation?

Maybe.

Or maybe they will become psychopaths, trained in a cold and precise vacuum, without any of our human messiness to soften the edges.

Either way, it beats training them on 8,000 blog posts about “10 Ways to Increase Productivity Using AI” (With Every Word Capitalized).


Why we desperately need human outliers

Sorry for going on and on, but I need to make my main point, and I can’t stress this enough: we need human texts with weird, quirky, non-normal tones and different writing patterns.

We need you beautiful weirdos to help us humans remain creative instead of gravitating toward that bland middle ground.

The “new normal” tone of voice and writing pattern emerging from this AI saturation is blander than hospital food. If we don’t actively resist this trend, we’ll all end up sounding like the same generic corporate newsletter.

[Infographic: projected saturation of AI-generated content online – 90% of content could be AI-generated by 2026 – with sections on enhancing detection capabilities, regulatory compliance, and curating training datasets.]

The thing is that we are not gradually moving toward an AI-saturated internet; we are already swimming in it, and most people don’t even realize they’re drowning in synthetic content. The question isn’t whether this will happen – it’s whether we can maintain some authentic human expression in this new f*ed-up world.

So here’s my call to action, my smart friends...

Write weird.

Write differently.

Embrace your quirks, your linguistic tics, your strange metaphors and unconventional sentence structures. Be the outlier that keeps human creativity alive.

The machines are coming for your words, but they haven’t figured out how to replicate true human weirdness yet.

Let’s keep it that way, shall we?

Oh, and don’t use Genspark to generate your slides, ’cause that contributes to the Sloppiverse as well.

Signing off – nah let’s think of something else this time – CTRL+ALT+Deleting this rant.

Marco


I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I bring the pins. I call that balance, and for me it is also simply therapy.


Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google and LinkedIn repay your likes by making my articles available to more readers.

To keep you doomscrolling 👇

  1. The AI kill switch. A PR stunt or a real solution? | LinkedIn
  2. ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
  3. AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
  4. Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
  5. One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
  6. The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
  7. People are building Tarpits to trap and trick AI scrapers | LinkedIn
  8. The first written warning about AI doom dates back to 1863 | LinkedIn
  9. How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
  10. The dark visitors lurking in your digital shadows | LinkedIn
  11. Understanding AI hallucinations | LinkedIn
  12. Sam’s glow-in-the-dark ambition | LinkedIn
  13. The $95 million apology for Siri’s secret recordings | LinkedIn
  14. Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
  15. Devin the first “AI software engineer” is useless. | LinkedIn
  16. Self-replicating AI signals a dangerous new era | LinkedIn
  17. Bill says: only three jobs will survive | LinkedIn
  18. The AI forged in darkness | LinkedIn

Become an AI Expert!

Sign up to receive insider articles in your inbox, every week.

✔️ We scour 75+ sources daily

✔️ Read by CEOs, scientists, business owners, and more

✔️ Join thousands of subscribers

✔️ No clickbait - 100% free

We don’t spam! Read our privacy policy for more info.
