Some bright minds at MIT’s Media Lab, which is the birthplace of many grand ideas and even grander failures, have dusted off an old concept and given it a fresh coat of AI paint. They call it decentralized AI, which is a brave new vision where intelligence isn’t locked in the gilded cages of Big Tech. Instead it should frolic free, like some utopian commune of digital minds.
A wonderful fantasy, really.
And I support it wholeheartedly!
Because if there’s one thing that history has taught us, it’s that humans – especially the ones running the show – don’t particularly like sharing power.
But hey, let’s entertain the notion for a bit.
More rants after the commercial break:
- Comment, or share the article; that will really help spread the word 🙌
- Connect with me on Linkedin 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
AI is an exclusive party
The VIP section belongs to a handful of monolithic corporations who are hoarding data like dragons on piles of gold. “Open”AI, Google, Meta, and even lame X/Grok – they own the algorithms, the compute power, and also the regulatory influence.
And you, my dear smart friend, yet still mortal, are simply the product. Your thoughts and preferences are harvested to refine the very systems that will soon replace your job, your opinions, and possibly your sense of self.
Now here comes decentralized AI.
The idea is simple – instead of a handful of corporations, billionaires and technophiles controlling intelligence, we let a network of independent agents run the show.
Sounds great!
Like . . .
Like . . ?
Like democracy, but for machines!
Except, as with democracy in real life, the real trick isn’t the idea, it’s the execution.
Because if you f**k this one up, then instead of a single entity crushing you, you’d have many entities doing it simultaneously – and some of them would be running malware.
Ramesh Raskar is one of the movement’s loudest evangelists. He believes that our current AI ecosystem is a rotting carcass of centralized control, and I could not agree more.
He’s not wrong.
Companies treat data like forbidden treasure, refusing to share it unless there’s a stack of cash or regulatory threats involved. It’s a system which is built on distrust, and distrust breeds monopolies.
Raskar’s solution is a decentralized “mixture of experts” model. The idea is that multiple AI models, each trained on different things, work together without any single one having absolute control.
This could be revolutionary, or it could be an absolute shitshow. Because the thing is that decentralization isn’t magic. Like, it doesn’t make problems disappear. It just spreads them around like a virus.
Take DeepSeek, for example.
This plucky new AI player made headlines when it managed to outmaneuver traditional training methods.
Big win, you’d say.
Except, immediately after, the company got hacked.
Welcome to the future, people, where the moment you build something groundbreaking, someone else takes a crowbar to it and rips it apart.
Decentralized AI advocates like to tout four key pillars, like, privacy, incentives, verification, and dashboards.
What?!
Ok, ok!
Let’s talk about this.
What is decentralized AI
Decentralized AI is where no single company, government, or entity has full control over an AI system. AI models, data, computing power, and decision-making are not concentrated in one place; they are spread across multiple independent nodes, organizations, or users.
They have proposed to do this through federated learning, blockchain, and peer-to-peer networks.
Federated learning lets AI models be trained on separate devices or servers without sharing raw data.
Each participant trains a local AI model using their own data, and only the model updates, not the data itself, are shared. This keeps information private while still producing a performant AI.
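To make that concrete, here is a toy sketch of federated averaging: each node runs a gradient step on its own private data, and only the weight vectors travel to the coordinator. The model, dataset, and learning rate are all invented for illustration, not taken from any real federated framework.

```python
# Minimal federated-averaging sketch: nodes train locally and share
# only their weight vectors, never their raw data.

def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a node's private data (toy linear model)."""
    grad = [0.0] * len(weights)
    for x, y in data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grad[i] += err * xi
    n = max(len(data), 1)
    return [w - lr * g / n for w, g in zip(weights, grad)]

def federated_average(updates):
    """The coordinator averages weight vectors; raw data never leaves the nodes."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

# Three nodes, each holding one private sample of the target function y = 2*x.
nodes = [[((1.0,), 2.0)], [((2.0,), 4.0)], [((3.0,), 6.0)]]
weights = [0.0]
for _ in range(50):
    updates = [local_update(weights, data) for data in nodes]
    weights = federated_average(updates)
print(round(weights[0], 2))  # converges toward 2.0
```

The point of the sketch is the data flow: the coordinator only ever sees `updates`, never `nodes`.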
Blockchain helps coordinate the development of the AI by acting as a shared, tamper-proof ledger. AI agents can interact and verify updates without needing a central authority. Smart contracts – automated programs on the blockchain – set rules for updates and data exchanges, all for the sake of transparency and accountability.
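The “tamper-proof ledger” part boils down to hash chaining. A minimal sketch, leaving out everything a real chain adds (consensus, signatures, the smart contracts themselves):

```python
# Toy tamper-evident ledger: each block's hash covers the previous block's
# hash, so rewriting an earlier model update invalidates everything after it.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, update):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "update": update}
    block["hash"] = block_hash({"prev": prev, "update": update})
    chain.append(block)

def verify(chain):
    prev = "0" * 64
    for block in chain:
        expected = block_hash({"prev": block["prev"], "update": block["update"]})
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, "node-A: weights v1")
append(chain, "node-B: weights v2")
print(verify(chain))            # True
chain[0]["update"] = "tampered"
print(verify(chain))            # False: the rewrite is detected
```

Any node can run `verify` independently, which is the whole “no central authority” pitch.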
Multiple AI agents run independently on different nodes in a peer-to-peer AI network. Instead of one large model owned by a corporation, smaller AI models specialize in different tasks and work together by exchanging information. MIT’s proposal suggests a “mixture of experts” system, where specialized AI models work together using reinforcement learning and supervised fine-tuning.
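The mixture-of-experts idea, stripped to its skeleton: specialist models plus a gate that weights their answers. Everything below – the experts, the hand-written keyword router – is invented for illustration; real MoE systems learn the gate, often via the reinforcement learning and fine-tuning mentioned above.

```python
# Toy "mixture of experts": independent specialists plus a gate that
# decides how much each expert's answer counts for a given query.

def math_expert(query):
    return 1.0 if "sum" in query else 0.0

def lang_expert(query):
    return 1.0 if "translate" in query else 0.0

EXPERTS = {"math": math_expert, "language": lang_expert}

def gate(query):
    """Hand-written router: score each expert's relevance to the query."""
    scores = {"math": query.count("sum"), "language": query.count("translate")}
    total = sum(scores.values()) or 1
    return {name: s / total for name, s in scores.items()}

def mixture(query):
    """Combine expert outputs, weighted by the gate."""
    weights = gate(query)
    return sum(weights[name] * fn(query) for name, fn in EXPERTS.items())

print(mixture("please sum these numbers"))  # the math expert dominates
```

In the decentralized version, each expert would live on a different node and the gate would be the negotiation protocol between them.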
A token-based incentive system is proposed to encourage parties to participate in this network.
Organizations or individuals who provide computing power, training data, or model improvements receive digital tokens. Those tokens can be exchanged for AI services or financial compensation. The idea is that this will create an AI economy where contributions are rewarded.
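Mechanically, that economy is just a ledger of balances. A minimal sketch, with made-up reward amounts and contribution types:

```python
# Toy token economy: contributors earn tokens for compute, data, or model
# updates, and spend them on AI services. Reward amounts are placeholders.

REWARDS = {"compute": 5, "data": 3, "model_update": 10}

class TokenLedger:
    def __init__(self):
        self.balances = {}

    def reward(self, who, contribution):
        """Credit a participant for a recognized contribution type."""
        self.balances[who] = self.balances.get(who, 0) + REWARDS[contribution]

    def spend(self, who, cost):
        """Debit a participant, refusing overdrafts."""
        if self.balances.get(who, 0) < cost:
            raise ValueError("insufficient tokens")
        self.balances[who] -= cost

ledger = TokenLedger()
ledger.reward("lab-1", "model_update")  # the lab contributes an improved model
ledger.reward("lab-1", "data")
ledger.spend("lab-1", 8)                # ...and buys inference time back
print(ledger.balances["lab-1"])         # 10 + 3 - 8 = 5
```

In the actual proposal, this bookkeeping would live on the blockchain ledger rather than in one Python object, which is exactly the point.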
Decentralized AI also includes distributed data marketplaces, where different entities can share AI models, datasets, and computing power. This lets smaller players, like startups and researchers, access AI resources without being dependent on major tech corporations.
Their end goal is to create AI systems that are more open, resistant to monopolization, and less vulnerable to single points of failure. This means users retain more control over their own data, reducing the risk of large-scale surveillance and corporate exploitation.
In theory that is.
If you extrapolate this idea of removing centralized control and moving to decentralized AI, different organizations, researchers, and even individuals can contribute to and access AI models without needing permission from a single authority.
Buuuuuut.
Decentralization also introduces a few challenges – the aforementioned little issues of privacy, incentives, verification, and. . . trust.
Privacy. Sounds nice, until you realize that data privacy and convenience are mortal enemies. We will trade our privacy for a better Netflix recommendation, let alone a smarter AI assistant (guilty).
Incentives pose another challenge. Decentralized AI requires competitors to work together, but as you know, businesses prioritize profit over some kumbaya-esque network.
Verification is also somewhat of an issue. In a centralized system, lofty things like trust are enforced through oversight. But in a decentralized model, there is no universal authority to verify actors. And the mere existence of verification implies oversight, which means someone still has to be in charge. And isn’t that just centralization with extra steps?
There’s also this thing called trust. Probably the biggest issue to solve. Because in a decentralized system, you don’t just have to trust one AI tech player, you have to trust thousands of them. If a centralized AI model is a single deity ruling over us all, a decentralized one is a pantheon of bickering gods, each with their own agendas, biases, and vulnerabilities.
Great. We’ve basically reinvented human civilization, but worse.
Decentralization powers Agentic AI
Of course, the MIT Media Lab folks like to frame this in terms of progress. They draw comparisons to the internet itself. Web 1.0 was just information, Web 2.0 was social media, and Web 3.0, they claim, will be decentralized AI (I thought it was crypto, but hey – their IQ is logarithmically higher than mine). They envision a future where machines talk to each other, bypassing centralized control and making their own decisions.
Yeah. That’s exactly what we need.
Cause the implementation of it is where I kinda differ from them.
Autonomous AI agents making deals in the shadows and humans sitting on the sidelines, just watching, and wondering if their smart fridge has just taken out a life insurance policy in their name.
And let’s not forget the risks.
Because once you decentralize something, good luck controlling it again.
A 51% attack – that’s a scenario where a bad actor gains control of the majority of a decentralized network – isn’t just a possibility.
It’s inevitable.
The same people who hacked DeepSeek. . . .
They’d love nothing more than to seize control of an entire decentralized AI system.
Imagine an army of rogue AI agents, operating on corrupted protocols, making financial trades, deploying bots, and approving fraudulent insurance claims before regulators even know what hit them.
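The arithmetic behind a 51% attack is depressingly simple. A toy majority-vote consensus (a stand-in for whatever real protocol a decentralized AI network would use):

```python
# Toy majority-vote consensus: the network accepts whatever value most
# nodes report, so controlling >50% of nodes lets an attacker decide "truth".
from collections import Counter

def consensus(votes):
    """Return the value reported by the largest number of nodes."""
    return Counter(votes).most_common(1)[0][0]

honest = ["valid"] * 49
compromised = ["fraud"] * 51    # attacker controls 51% of the nodes

print(consensus(honest + ["fraud"] * 10))  # 'valid': a minority attacker loses
print(consensus(honest + compromised))     # 'fraud': 51% rewrites the truth
```

No exploit, no zero-day – just more votes than everyone else.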
So, what’s the solution?
The experts, in their infinite wisdom, suggest starting small.
Deploy these systems in low-stakes environments first. You know, places where failure isn’t catastrophic.
The problem with this is that every tech dystop – I mean, catastrophe – begins in low-stakes environments.
Social media was just for fun until it started swaying elections. AI-generated content was a harmless gimmick until it began rewriting history. Give it a few years, and your bank, your employer, and your government will be running on this stuff, whether it works or not.
So, decentralized AI?
Bold idea.
Could be amazing.
Could also be the last great experiment before we let the machines truly off the leash. Because at the end of the day, no matter how you structure AI – centralized or decentralized – the problem isn’t the technology.
It’s us.
But I give this initiative my full support because it’s the best alternative to billionaires running the show.
Signing off from different places at the same time.
Marco
Well, that’s a wrap for today. Tomorrow, I’ll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee ♨️
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google appreciates your likes by making my articles available to more readers.
To keep you doomscrolling 👇
- The AI kill switch. A PR stunt or a real solution? | LinkedIn
- ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
- AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
- Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
- One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
- The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
- People are building Tarpits to trap and trick AI scrapers | LinkedIn
- The first written warning about AI doom dates back to 1863 | LinkedIn
- How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
- The dark visitors lurking in your digital shadows | LinkedIn
- Understanding AI hallucinations | LinkedIn
- Sam’s glow-in-the-dark ambition | LinkedIn
- The $95 million apology for Siri’s secret recordings | LinkedIn
- Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
- Devin the first “AI software engineer” is useless. | LinkedIn
- Self-replicating AI signals a dangerous new era | LinkedIn
- Bill says: only three jobs will survive | LinkedIn
- The AI forged in darkness | LinkedIn
