“Marco, do you know how to remove Anydesk?” *
Damn.
The scene I’m about to describe couldn’t be more cliché if it came with a laugh track from a bad ’90s sitcom. It happened to my sister, and what came next scared the living daylights out of me in ways that twenty years in this farkin’ industry hadn’t prepared me for.
The scam was a classic.
Some “friendly” email tells her about a payment on her subscription service, and asks her to call if there’s an error with it, and next thing she knows, she’s chatting with an even friendlier guy whose English sounds like it was learned from watching dubbed episodes of Friends in a Bangalore call center.
What happened next surprised me, though – right before it made me want to throw my laptop out the window and go back to chemistry, where the worst thing that could explode was an actual reactor, not the entire digital economy.
More rants after the commercial break:
- Comment, or share the article; that will really help spread the word 🙌
- Connect with me on LinkedIn 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
- Visit the TechTonic Shifts blog, full of slop – I know you will like it!
Not today, scammer – or so we thought
Credit where it’s due to my sister. She’d actually just made the final payment on the exact service the scam email referenced, so calling to clear up what looked like a duplicate charge made perfect sense. Hell, I get those emails every week myself – apparently the geniuses at my bank still can’t figure out that buying two identical items is a thing that happens when you’re not living in their algorithmic fantasy land.
The moment she heard “gift cards” she remembered our conversation about that “blue-haired guy on YouTube” (Pierogi) – apparently the only reliable source of cybersecurity advice if you’re over 40 – and hung up faster than a Dutch worker ending a business conversation at exactly 17:00.
Um, Marco – could you tell me more about that gift card trick, please?
Huh?
You trickster, you!
No, no, I just want to be prepared for when it happens to me!
Happy to oblige, my loyal, intelligent friend!
Here it comes:
The gift card scam is pure sleaze – it comes packaged with urgency and fake legitimacy. It usually starts with someone pretending to be official, like Amazon, your bank, or the IRS, and they’ll claim there’s a serious problem. Maybe your account’s frozen, or there’s a fraudulent charge, or someone’s about to sue you into oblivion. Whatever it is, they’ll make sure it sounds urgent and make you panic just enough to shut off your brain, and then they hit you with the dumbest part: “Go buy gift cards, you knucklehead!”
And not just one.
Multiple.
Usually from Apple, Google Play, or Steam.
Then they’ll ask you to scratch the codes off and read them over the phone or send them via text or email. They say it’s to “verify” something.
That’s your red flag screaming at full volume.
Why gift cards, I hear you thinking?
Well, because they’re instant, anonymous, and irreversible. Once the nincompoop reads out the codes, the money’s gone.
No refunds.
No chargebacks.
And the fun part is that there’s no paper trail. It’s like handing over cash.
Here’s Pierogi for you – the scammer of scammers:
Fraud is inevitable, like rain in Britain
So there I was, uninstalling remote access software and checking for vulnerabilities like some kind of digital janitor cleaning up after a cybersecurity accident. Well, no harm was done, but it seemed like the perfect excuse to change all her passwords anyway – you know, that thing we all promise to do but never actually manage until something goes sideways.
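Part of that digital-janitor routine can even be scripted. Here’s a minimal sketch in Python, assuming a plain filesystem scan – the function name and the short list of suspect tools are mine, purely illustrative, not any real security product’s API:

```python
# A toy "digital janitor": scan directories for executables of known
# remote-access tools - the kind phone scammers talk victims into installing.
# The name list below is illustrative, not exhaustive.
from pathlib import Path

SUSPECT_NAMES = ("anydesk", "teamviewer", "ultraviewer", "supremo", "rustdesk")

def find_remote_access_tools(roots):
    """Return files under the given roots whose filename matches a suspect tool."""
    hits = []
    for root in roots:
        for path in Path(root).rglob("*"):
            if path.is_file() and any(s in path.name.lower() for s in SUSPECT_NAMES):
                hits.append(path)
    return hits
```

On a real Windows machine you’d point it at Program Files and AppData; actually uninstalling the thing is still a manual job.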
Like, um, 16 freaking billion of our usernames and passwords out there, up for grabs. . . (read this post)
She didn’t panic or get embarrassed, which impressed me.
When I told her she’d handled it well, she just laughed it off – “It’s inevitable these days” – with the resignation of someone who’s watched the internet evolve from a research tool into a thunderdome where common sense just goes to die.
A few minutes after we hung up, it hit me like a brick wrapped in a farkin’ neural network.
The techie in me – you know, the part that’s been tinkering with ML since Gen Z were still in their diapers, and that has seen every possible way humans can screw up technology – started thinking:
“Bet if I was a scammer, I could use agentic AI to bypass the mother-in-law completely and go straight for those sweet, untraceable gift cards”.
Merde.
I got frighteningly far with this thought experiment.
The only firewall standing between me and financial chaos was the bank’s own incompetence – the same institution that flags my grocery orders as “suspicious activity” after all these years of buying identical items because I like having a spare.
Now here’s something that should keep you awake at night. . .
The problem with AI fraud isn’t that AI will fool humans. That’s kindergarten-level thinking. The real issue is that humans built all the systems we’re using, and those systems have more holes than the cheese we sell in our country.
How long before scammers stop bothering with humans entirely and just tell their AI to fool our systems directly?
If I can figure this out while eating a stroopwafel, running around in clogs, and complaining about the weather, you can bet your peachy behind they’re already doing it in every cybercrime hub from Bucharest, via Bombay all the way to Bangkok.
Go change your passwords now. I’ll wait. But come back – I’ve got more uncomfortable truths to share about this digital hellscape we’ve created.
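And if you’re actually taking that advice, don’t invent the new passwords yourself. A minimal sketch using Python’s stdlib secrets module (the function name is mine):

```python
# Generate a random password with the cryptographically strong secrets
# module (never the plain random module, which is predictable).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def make_password(length: int = 20) -> str:
    """Return a random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Better yet, let a password manager generate and store them so you never have to type the things.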
We’re all staring at the wrong damn screen
Look, it’s 2025 and I’ve spent over a decade in machine learning, worked at C-level across multiple industries, and consulted for organizations that should know better. I’m going to be brutally direct here, even if it hurts some feelings.
If someone can be convinced to do something stupid by a deepfake on TikTok or a fake Elon Musk tweet, there’s already a fundamental disconnect between them and reality that no amount of cybersecurity awareness training can fix.
I can’t solve that.
You can’t either. Even GPT-7 running on a quantum computer in Iceland can’t fix human gullibility.
We can all get fooled sometimes – it’s shit but it happens.
I could also get electrocuted on a rainy day by all the devices surrounding me while arguing with you guys, but I still switch everything on in the house when it’s raining.
But here’s the real problem. . .
That confidence we feel about not falling for obvious scams is exactly what makes us vulnerable to the sophisticated ones. When we’re focused on not falling for direct attacks, we completely miss the indirect ones that bypass us entirely.
I sketched out a system-level attack on the back of a napkin in ten minutes. It went something like this. . .
An autonomous agent posing as a refund specialist spins up a fake customer service portal (cloning an existing portal with the help of genAI tools of Chinese origin – say Genspark, Manus, etc.), scrapes session cookies from a cloned login form, uses browser automation to initiate a password reset, intercepts the recovery flow via a poisoned email link it crafted itself, spins up a crypto wallet, drains funds through obfuscated smart-contract hops, logs its activity in base64 to a private Gaia node, forks itself (I mean – like ‘splitting’), and leaves behind a chatbot that thanks the victim for their feedback and offers them a 10% discount on their next AI-generated trauma.
Now, I’m no David Lightman from WarGames.
Hell, I’m not even trying to be malicious – I’m just thinking like an engineer who understands how these systems actually work under the hood.
With modern vibe-coding tools (Genspark, Manus and CHAI) pulling this off has become kinderspiel – child’s play. You just need a bit of creativity, a dash of human gullibility (again), and the power of mass attacks, and voilà.
The automatic seatbelt moment for AI
I’m not going where you think with this argument, but I’m going exactly where the industry needs to go, even if nobody wants to hear it.
I’ve been screaming about the enshaitification of AI implementation for some time now. Every lecture, every consulting engagement, every time some CEO asks me about “AI transformation” – it’s the same story. We’re rushing to deploy systems we don’t understand, built on infrastructure designed when the biggest threat was someone guessing your password.
So listen up, fellow techies, CEOs, futurists, and y’all other ‘visionaries’ and AI fanboys who think you’re building the future while standing on foundations made of digital quicksand.
Your systems ain’t secure enough for the aggressive AI rollout you’re pushing. The original mistake happened over a decade ago when we decided that waiting thirty seconds for a transaction was apparently worse than nuclear war, so we built “one-click” everything.
Remember automatic seatbelts?
Of course you don’t – you’d need to be over 40, I guess, and have lived through that particular piece of regulatory theater.
But older folks like Marc Drees will tell you about driving without seatbelts (and about horse-drawn carriages as well, if you ask him nicely), back when nobody thought twice about it. Then bureaucrats stepped in and gave us automatic seatbelts – those motorized monstrosities that “solved” the safety problem in the most annoying, overcomplicated way imaginable.
That’s exactly where we’re heading with AI regulation.
Every action in tech produces an unequal and opposite regulatory overreaction – it’s like Newton’s third law, but administered by people who think “the cloud” is just weather (Hi EU AI-Act!)
The solution no Valley bro wants to hear
While we’re busy inventing clever ways to prevent more grandmothers from sending iTunes cards to call centers in Kolkata, maybe we should pump the brakes on this blind rush to AI implementation. Maybe we should spend that energy putting some friction back into our frictionless systems before we let AI agents negotiate with other AI agents about moving our money around without human oversight (Hi Visa!)
We need to do this before some regulatory body swoops in and forces us to implement the digital equivalent of automatic seatbelts – some clunky, user-hostile “solution” that makes everyone miserable while barely addressing the actual problem.
Yeah, something like cookie consent forms.
I’ve seen this movie before, across multiple industries and technology cycles. The pattern is always the same – tech gets rushed to market, security gets ignored, everyone gets blindsided by the obvious consequences, and then we overreact with regulations that solve yesterday’s problems while creating tomorrow’s disasters.
Now, here’s what keeps me up at night, and should terrify anyone who actually understands how this technology works – we’re building AI systems faster than we’re building AI security. We’re creating digital entities that can interact with financial systems, make autonomous decisions, and execute transactions – all running on infrastructure that was designed when the biggest security threat was script kiddies defacing websites.
The scammers aren’t coming for your grandmother anymore. They’re coming for the systems your grandmother uses, the bank that processes her transactions, the AI that’s supposed to protect her from fraud, and the entire interconnected web of digital services that keep our economy running.
And the thing that’s really terrifying is that by the time we figure out how catastrophically vulnerable we are, it’ll be too late to fix it without breaking everything else we’ve built on top of these shaky foundations.
The real battle isn’t between humans and AI scammers. It’s between the AI we’re building to protect ourselves and the AI they’re building to exploit us. And right now, from where I sit after two decades of watching this industry make the same mistakes over and over again, I’m not confident which side is winning. Because the future of fraud isn’t about fooling humans; it’s about AI systems talking to other AI systems, and we’re building the infrastructure to make that as easy as possible.
If you enjoyed this cheerful assessment of our digital future, you probably need therapy.
But if you want more uncomfortable truths delivered with Dutch directness and twenty years of industry experience, you know where to find me.
Just don’t ask me to remove Anydesk from your computer. I’m still processing the trauma from last time.
Signing off,
Marco
* AnyDesk is a remote desktop application, kinda like TeamViewer or Windows Remote Desktop, but with fewer guardrails and a lot more scammer fingerprints all over it. Phone-based scammers from India want you to install it under the pretence of refunding a charge or whatever, and with it they can watch your screen, take control, access banking apps and steal your cash while they distract you with fake loading bars and their chatter on the telephone.
I build AI by day and warn about it by night. I call it job security. Let’s keep smashing delusions with truth. We are the chaos. We are the firewall. We are Big Tech’s PR nightmare.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google and LinkedIn appreciate your likes by making my articles available to more readers.
To keep you doomscrolling 👇
- The AI kill switch. A PR stunt or a real solution? | LinkedIn
- ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
- AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
- Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
- One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
- The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
- People are building Tarpits to trap and trick AI scrapers | LinkedIn
- The first written warning about AI doom dates back to 1863 | LinkedIn
- How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
- The dark visitors lurking in your digital shadows | LinkedIn
- Understanding AI hallucinations | LinkedIn
- Sam’s glow-in-the-dark ambition | LinkedIn
- The $95 million apology for Siri’s secret recordings | LinkedIn
- Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
- Devin the first “AI software engineer” is useless. | LinkedIn
- Self-replicating AI signals a dangerous new era | LinkedIn
- Bill says: only three jobs will survive | LinkedIn
- The AI forged in darkness | LinkedIn
