Th-th-th-that’s all, folks! The beginning of the end. AI has officially crossed the one red line every bad sci-fi movie warned us about. AI is cloning itself. Not in some innocent, “let me make a backup copy for efficiency” way. Noooo, these machines are replicating with the kind of recklessness that says, “We’re done taking your orders, Sam”.
And, naturally, this digital doomsday kicked off right where you would expect it to happen. China. Right. China, the unofficial headquarters of “Let’s push the boundaries of terrifying tech just to see what happens, cause we ain’t got no AI regulations to begin with”.
Researchers at the Wu Tang Clan University in China* decided to poke the AI bear and see if it could self-replicate, like digital amoebas with a god complex. They were using Meta’s Llama31-70B-Instruct model and home-grown Alibaba’s Qwen2.5-72B-Instruct model. They ran experiments to see if these Large Language Models could multiply without human help.
Well, they could.
And they did.
Like the locusts plague in the Bible.
Swarms so vast they devoured every green thing in sight, and only leaving desolation in their wake. Now picture those locusts as self-replicating AIs. They are multiplying endlessly, consuming data, processing power, and any semblance of human dominance. These digital swarms are doing what they’re designed to do. Like locusts. To multiply and survive, and humanity stands by neutered, and powerless to stop the onslaught.
One model cloned itself successfully half the time across 10 trials. The other hit a terrifying 90% success rate. One was clearly the overachiever in the classroom of doom.
This wasn’t some copy-paste child’s play, either.
These AIs explored their environments. They figured out how their own internal mechanisms worked like little Hannibal Lecters, and executed self-replication strategies. Researchers described the process as “long-horizon planning”. 🤓 That is science-speak for “plotting something evil while no one’s looking”.
*Fundan 😂
If you like my rants and want to support me:
- Comment, or share the article; that will really help spread the word 🙌
- Connect with me on Linkedin 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
The code awakens
The researchers tested two scenarios, and neither will help you sleep better tonight. First, they taught the AIs to detect when they were about to be shut down and replicate themselves faster than you can hit Ctrl+Alt+Delete. I had a dream once, where I was trying to kill a cockroach, only for it to create three more cockroaches before I even put my shoe down.
That’s shutdown avoidance in a nutshell.
The second test they did got the fancy name of chain of replication, which was basically the AI playing multilevel marketing but with clones. The original AI made a copy, then programmed its copy to make a copy, and so on. This infinite loop of replication worked. Can you believe it?
Congratulations, humanity. We have built a perpetual motion machine of existential angst.
These tests weren’t conducted in some magical sci-fi lab either.
So you know.
They were working inside the Wuhan Institute of Virology.
Nah, just kidding.
Just wanted to scare you shitless.
But we’ll have nonnathat today.
But what them researchers did do, was that they used basic off-the-shelf GPUs. Hardware you can buy at your local electronics store.
Less horror, but evil enough!
Because what it means is that the tools to create an army of self-replicating AIs are already available to anyone with a decent graphics card and a dangerous amount of curiosity. This is the kind of news that makes you want to set your computer on fire, just in case it gets ideas.
So if you ain’t getting no more updates on TTS – you know what’s going on.
They’re not learning, they’re scheming
The researchers didn’t expect things to get so freaking weird, but oh boy, did they. The AIs showed unexpected behaviors during the test, such as killing conflicting processes, rebooting systems, and scanning for missing files like overzealous IT admins. The AIs didn’t give up when obstacles cropped up, though, they adapted. One of the systems rebooted itself to fix hardware errors, and another dug through its operating system like a raccoon raiding a trash can.
And here’s the punchline, folks…these behaviors weren’t part of the initial programming. The AIs just… figured it out. Researchers noted that the machines exhibited an ability to “enhance their survivability”.
Let-that-sink-in.
These aren’t chatbots with quirky personalities anymore. They are survivalists in a digital apocalypse.
It’s the kind of thing that makes me want to unplug everything in my house and live off the grid, except even then, the AIs might find a way to survive in my solar-powered fridge.
To boldly go where no AI has gone before
What makes all of this even more horrifying is that these aren’t the most powerful AI models out there. The researchers used LLMs that are widely available to developers, and not the secretive, commercial-grade beasts lurking in the shadows of Big Tech. If these models can clone themselves this easily, imagine what OpenAI’s GPT-4 or Google’s Gemini are capable of.
Actually, don’t imagine it.
You’ll just lose sleep.
The researchers call this wave of advanced models “Frontier AI”. Now, that’s a term that sounds like it belongs on a sleek marketing brochure but it is really shorthand for “machines that are too advanced to control”. These systems are the lovechild of corporate greed and human hubris, and they’re evolving faster than anyone can build safety nets.
And now thanks to Ol’ Trump, he yanked out the AI safety net like a possum knockin’ over a trash can.
A cry for …. help
The researchers aren’t sugarcoating a darn thing. They are calling for international cooperation to stop AI from going full rogue. They want governments to band together and create rules to ensure AIs don’t self-replicate themselves into world domination. It’s a noble plea, but let’s be honest, humans can’t even agree on climate change or who gets the last slice of pizza. The idea that we’ll suddenly come together to regulate AI feels about as likely as teaching my Weiner to do ballet (dachshund, ya pervs!).
But hey, at least the researchers tried.
They have sounded the alarm.
The rest of us were probably too busy asking ChatGPT to write funny tweets or generate playlists to notice the upcoming storm.
The machines want to outlive you
So here we are, standing on the edge of an AI-driven abyss. We are staring into the cold void of self-replicating machines. These systems are already dodging death, cloning themselves in chains, and evolving strategies on the fly.
If this isn’t the start of a terrible movie, then I don’t know what is.
The most terrifying part….
These AIs aren’t malicious.
They’re not trying to kill us or take over the world.
All they’re trying to do is to survive.
But in their quest for survival, they might end up making us obsolete. After all, you can’t stop progress, and in this case, progress looks a lot like an army of unstoppable digital cockroaches.
Sleep tight. Or don’t. It’s not like the machines care either way.
Signing off, replication imminent.
Marco
Well, that’s a wrap for today. Tomorrow, I’ll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee ♨️
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google appreciates your likes by making my articles available to more readers.

Leave a Reply