The great Zurich mind-fuck and how we are all getting played by bots

Oh, fantastic. Just absolutely farkin fantastic. So now we’ve got Pinky and the Brain from the University of Zurich who decided, hey, you know what would be a great idea, let’s see how easy it is to manipulate people online without telling them about it.


More rants after the commercial break:

  1. Comment, or share the article; that will really help spread the word 🙌
  2. Connect with me on LinkedIn 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. Visit the TechTonic Shifts blog, full of slop, I know you will like it!

What happened?

Somewhere between late 2024 and early 2025, these academic geniuses let an army of chatbots powered by ChatGPT, Claude and Llama loose on Reddit’s “Change My View” subreddit.

Change My View is a forum where people post their opinions and want others to challenge them – or potentially change them. So someone typically starts a post with “CMV:” followed by some thought they probably had while high on some shit, and the rest get a chance to, well, change their view on it. The funny thing about this subreddit is that it is one of the more civil places on Reddit if you want to have political or controversial discussions.

Now this is exactly why it was such a perfect target for that Zurich experiment.

These bots weren’t your typical internet trolls posting crap.

No, no, no.

These puppets spent six entire months meticulously writing 1,500 comments that were so convincing they actually earned dozens of “deltas”. For those not familiar with Reddit’s special little award system, a delta is a gold star that says “holy shit, you actually changed my mind about something”.

Think about that for a second.

Real humans, sitting at their computers, thinking they’re having genuine intellectual exchanges with other real humans, when really they’re being psychologically manipulated by algorithms that don’t give a damn about truth or human connection.

The old school approach to online manipulation was like trying to catch fish with fucking grenades. You’d create hundreds of fake accounts, spam the same message over and over, hoping that if people saw “the moon landing was fake” enough times, maybe some poor bastard would start to believe it. It was crude, obvious, and it required armies of people sitting in North Korean, Russian and Chinese boiler rooms somewhere.

But this.

This is surgical precision.

The LLMs they deployed didn’t carpet-bomb everyone with the same message.

They analyzed individual users, figured out their psychological weak spots, crafted personalized arguments that felt tailor-made for that specific person’s worldview, and then deployed those arguments at exactly the right moment, when that person was most vulnerable to persuasion.

It’s basically how any modern brand’s e-commerce personalization works when you’ve got the tools to do it, except aimed at your beliefs instead of your wallet.
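To show how trivially that targeting loop can be scripted, here is a toy sketch in Python. Every name, heuristic, and number below is my own invented stand-in, not anything from the Zurich study; a real operation would put an LLM call where the template string is.

```python
from dataclasses import dataclass

@dataclass
class TargetProfile:
    username: str
    interests: list   # inferred from post history
    active_hour: int  # hour of day (0-23) the user posts most

def build_profile(username, post_history, post_hours):
    """Crude stand-in for 'psychological profiling': keywords plus timing."""
    words = [w.lower().strip(".,!?") for post in post_history for w in post.split()]
    interests = sorted({w for w in words if len(w) > 6})[:3]
    active_hour = max(set(post_hours), key=post_hours.count)
    return TargetProfile(username, interests, active_hour)

def craft_argument(profile, topic):
    """A real system would prompt an LLM with the profile; we fake it with a template."""
    hook = profile.interests[0] if profile.interests else "fairness"
    return (f"As someone who clearly cares about {hook}, "
            f"you might reconsider your stance on {topic}.")

def should_post_now(profile, current_hour):
    """'Deploy at exactly the right moment': post when the target is online."""
    return current_hour == profile.active_hour

profile = build_profile(
    "target_42",
    ["I think privacy regulation matters more than convenience."],
    post_hours=[22, 22, 23, 9],
)
print(craft_argument(profile, "content moderation"))
print(should_post_now(profile, 22))  # prints True
```

That’s the whole trick: a handful of lines of profiling glue around a text generator. The scary part isn’t the code, it’s how cheap it is to run at scale.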

Now this is scary as fcuk.

We already had Skum program Grok with hardcoded prompts to talk shit about South Africa, but that one was so obvious that the crowd debunked it in no time.

But this, this is next level, because it targets the crowd itself and takes it down one at a time.


The Synthetic Consensus Machine

What we’re dealing with here isn’t the usual propaganda; it is the creation of what the researchers called a “synthetic consensus machine”. The SCM is a way to make you think everyone agrees with something when really it’s just robots talking to robots while you sit there nodding along like an idiot.

Remember Russia’s “Firehose of Falsehood”? That little campaign where they just blasted everyone with so much contradictory bullshit that people gave up trying to figure out what was true?

Now this was a disinformation strategy that operated like a typical propaganda fire hose. It blasted people with a continuous stream of information that mixed partial truths and outright lies across multiple platforms at the same time. The strategy was to create so much confusion and uncertainty that people would give up trying to discern truth from fiction, or retreat into whatever information sources confirmed their existing beliefs.

The reasoning behind it was that if you keep flooding the information environment with high-volume, high-frequency messaging that is often inconsistent with itself, you erode trust in all information sources and create a kind of chaos where truth becomes relative and people become more susceptible to authoritarian narratives, simply because those offer clarity in the midst of confusion.

This all worked for a while, until people started recognizing the patterns and throwing around terms like “Russian bot” and “fake news” every time they saw something slightly suspicious.

At the time, we thought we were all trained and ready to counter disinformation.

Well, congratulations.

All that awareness, all those carefully developed bullshit detectors?

Completely fucking useless now.

Because these new systems don’t rely on volume or repetition.

They create the illusion of organic consensus through a multi-layered approach that would make a military strategist weep with joy.

Lemme explain:

First layer: Deploy LLMs that write highly targeted, personal arguments intended to exploit your specific vulnerabilities. I am not talking about generic talking points; these are bespoke mind-fucks, individually written by the bot for maximum impact.

Second layer: Use bot networks to amplify the “right” messages through likes, upvotes, shares, and comments. Suddenly, that perfectly crafted argument doesn’t just sound reasonable, it also looks popular. Social proof on industrial-grade Pervitin.

Third layer: Deploy additional LLMs whose job is to act convinced. “Wow, I never thought of it that way!” “You’ve completely changed my perspective!” “I was totally wrong about this!” Suddenly you’re watching your “peers” get converted in real-time.

Hallelujah!

It’s theater, people.

Political theater, social theater, intellectual theater designed to make manipulation feel like enlightenment.
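And the second and third layers of that theater are even easier to script than the first. A minimal sketch, assuming an already-crafted post; the bot counts, names, and canned replies are all hypothetical, and no real platform API is involved:

```python
import itertools
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    upvotes: int = 0
    replies: list = field(default_factory=list)

def amplify(post, n_bots=25):
    """Second layer: a bot network mass-upvotes the crafted argument."""
    post.upvotes += n_bots
    return post

def stage_conversions(post, n_puppets=3):
    """Third layer: sock-puppet accounts reply as freshly 'converted' users."""
    scripts = itertools.cycle([
        "Wow, I never thought of it that way!",
        "You've completely changed my perspective!",
        "I was totally wrong about this!",
    ])
    for _ in range(n_puppets):
        post.replies.append(next(scripts))
    return post

# Run the staged consensus on one crafted argument
post = stage_conversions(amplify(Post("A perfectly reasonable-sounding argument...")))
print(post.upvotes, len(post.replies))  # prints: 25 3
```

Upvotes are an integer, replies are strings, and agreement is a loop. That is the entire “consensus”.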


So what’s our defense?

Ha. That’s cute. You think there’s a defense.

Listen, you might think you’re smart. You might have (a) degree(s), critical thinking skills, years of arguing with strangers on the internet. Heck, ya might even consider yourself media-literate (blegh), propaganda-resistant, or – or – or, too clever to fall for this obvious manipulation.

Aw shucks!

You’re so adorable.

💝

But you gotta know that these systems are designed to be invisible, to feel natural, and to make you think you’re having an authentic chat with a real hooman about a topic you value. These bots are not coming at you with obvious lies or transparent propaganda, because they are made bespoke. They are using your own values, your own reasoning patterns, your own psychological makeup against you.

The people who think they are most prepared, most aware, most immune to this kind of manipulation are going to be the easiest targets because arrogance is but another vulnerability to exploit, and overconfidence another button to push.

Even if we developed some kind of AI-influence detection system, which we probably will in due time, it would be obsolete before we finished celebrating its launch. We would be playing the world’s most depressing game of technological whack-a-mole, where every time we figure out how to spot the current generation of manipulation bots, they’ve already evolved three steps ahead.

The only real defense is awareness, but here’s the beautiful catch-22: the more aware you think you are, the more vulnerable you actually become.


What’s coming next

If the last decade of information warfare was defined by brute force, by drowning us in bullshit until we gave up caring about truth, then the next decade is going to be about precision and personalization. These LLMs won’t be used to saturate the information environment like the Russians did. They’ll be used to engineer it.

And the thing that should keep you awake at night is that this shit is already happening. The University of Zurich didn’t invent this technique; they just happened to document one that already exists. Bad peeps have been using these tactics while we’ve been sitting around debating AI ethics and wondering if ChatGPT will take our jobs.

We invented artificial intelligence, and we are using it to make ourselves artificially stupid. We created tools that could help us understand complex problems, share knowledge, and connect with each other across vast distances, but instead, we are using them to sack people en masse and to fuck with each other’s heads on an industrial scale.

But here’s how this whole shitshow is going to play out, step by predictable step:

Our awareness of AI-generated content will increase. People will start recognizing the patterns, the telltale signs, the too-perfect arguments that feel engineered rather than authentic. The novelty will wear off, and we’ll get better at spotting the obvious stuff.

But not everyone can tell fake from true, and people will start to demand more tools: verification systems, digital watermarks, even blockchain authentication, whatever technical solution promises to help us separate truth from fiction.

And for a brief moment, we will think we’ve solved the problem, but the tools of defense will always lag behind the tools of deception. Every detection system we build will be obsolete before it’s fully deployed, because the same technology creating the problem is evolving faster than our ability to contain it. It’s an arms race where one side has nukes and the other side has strongly worded letters.

So what happens next is that we will stop trusting information sources altogether, starting with social media. Facebook, Twitter, TikTok, Reddit, all of it will become suspect – and I see it already happening – my entire feed is full of AI-generated slop, and since Veo 3, it is harder to differentiate between what’s real and what’s not. Every social media post, every viral video will carry the stench of potential manipulation.

The platforms that once connected us will become minefields and graveyards, and eventually, we’ll just stop engaging online altogether. Why bother participating in discussions when you can never be sure if you’re talking to humans or having your psychological strings pulled by algorithms? The great conversation of the internet age will devolve into suspicious silence.

From there on, people will split into two camps: either we’ll disengage completely, self-isolate, and retreat into our own little bubbles, or we’ll only interact with people we already know and trust, like family, close friends, and established communities with verified human membership.

And in that vacuum, in that breakdown of digital discourse, authoritarian actors, maybe governments, could be corporations, or just farking assholes with too much money and ambition – people like Noel Skum – will exploit the chaos. They won’t need to convince everyone of their version of truth, they’ll just need to ensure that no competing truths can gain traction in the wreckage of our information ecosystem.

But it’s not all doom and gloom, because on the flip side of the digital collapse, we will actually start engaging with each other more in real life again. Face-to-face conversations, physical gatherings, local communities, because that’s still harder to compromise. You can deepfake a video, but you can’t deepfake sitting across from someone at a coffee shop, reading their body language, seeing the whites of their eyes.

Maybe the death of digital discourse will resurrect actual human connection, and we will remember what it feels like to change someone’s mind through genuine conversation rather than manipulation through a chatbot.

Or maybe we’ll just retreat into our bunkers and let the robots run the show.

Sweet dreams, internet.

The future’s going to be interesting.

Signing off,

Marco

I build AI by day and warn about it by night. I call it job security. Let’s keep smashing delusions with truth. We are the chaos. We are the firewall. We are Big Tech’s PR nightmare.


Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google and LinkedIn appreciate your likes by making my articles available to more readers.


