You know, just when you thought that Big Tech bros couldn’t possibly be any more full of bullshit, them peeps from Google’s DeepMind playthingy come along with something called “streams”. Now “streams” is a shiny new way for AI to “experience” the world without humans telling it what to do.
Experience?
Apparently, what the world really needs is a super-powered robo-freaking-teenager who is figuring life out on its own, with no parental oversight, no bedtime, and just an eternal spring break binge-watching reality.
Could such a thing go wrong?
More rants after the commercial break:
- Comment or share the article; that will really help spread the word 🙌
- Connect with me on LinkedIn 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
- Visit the TechTonic Shifts blog, full of slop I know you will like!
Turns out, our current AI systems have gotten a bit bored of cramming for tests that humans made to benchmark them. You know, like Turing’s famous IQ quiz (which GPT-4.5 just crushed by the way).
But here’s something that nobody wants. . . these DeepMind dudes, David Silver and Richard Sutton, the guys behind AlphaGo/AlphaZero and modern reinforcement learning (Sutton co-wrote the book on it: Reinforcement Learning: An Introduction), think that we’ve fucked up AI by restricting it to our limited, narrow-ass ideas.
They are saying that we are suffocating the bots with our stupid human questions, and that we are forcing them to answer in neat little snippets.
Like, how dare we inconvenience the bots with our pathetic little curiosity!
They say that the real magic comes when we let AI models roam free, like Gremlins at a nuclear launch site. Just give them a “stream of experience”, whatever that means, something continuous and flowing (not the constipated stop-and-start of Q&A), and these AIs will start to accumulate knowledge the way we humans do.
Hahahaha. . .
I can see it in my mind’s eye already: AIs bumbling through life, fucking things up, and learning painfully. Our species has clearly done an amazing job managing its own experiences, like, um, climate change. . . TikTok dances. . .
What could possibly happen when we give AI the same privilege?
Silver and Sutton say that these “stream-based agents” will start by browsing the web independently. Well, we’ve got that one covered already – think OpenAI’s Operator, Browser-Use, and Manus the Anus – ChatGPT meets Chrome meets Skynet.
They will learn by interacting with their environment, like we hoomans do, but with less depression, acne, and growth spurts.
But soon they will go further.
They will be interacting with real-world shit like climate metrics, stock markets, or, god forbid, my messy love life.
Reward signals (here we go with reinforcement learning again) would pop up from everywhere to reinforce correct behavior. You know, from pain and pleasure to profit margins and traffic accidents caused by Teslas.
Just picture this in your mind’s eye. . . your future AI life coach nagging you about your declining productivity stats, judging last night’s pizza intake against your not-so-improving health metrics, while it subtly eyes your bank account.
Sounds really delightful.
Right?
But here is where the punchline really lands in my head: these big brainy bots will adapt based on actual real-world experience.
None of this static bullshit from outdated Wikipedia pages or darned Reddit threads.
Nope-sure-ee.
They will have “world models*” that update themselves constantly. They will learn from errors and fine-tune their predictions accordingly. Basically, imagine an AI that notices your chronic late-night scrolling and cruelly suggests an earlier bedtime, and when you inevitably ignore it, it adapts its strategy. Could be just a guilt-trip message from your neglected AI therapist bot, complete with disappointed emoji face, or maybe it turns passive-aggressive and starts deleting your Netflix queue and replacing it with self-help docs.
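For the nerds: that “learn from errors and fine-tune” bit is just classic online learning with extra marketing glitter. Here is a deliberately silly, hypothetical Python sketch of my bedtime example — every name and number is mine, not Silver and Sutton’s:

```python
import random

class TinyWorldModel:
    """Toy 'world model': predicts tomorrow's bedtime from a running estimate.
    Purely illustrative; real world models are learned neural simulators."""

    def __init__(self):
        self.predicted_bedtime = 23.0  # naively assume an 11 pm bedtime
        self.learning_rate = 0.2

    def update(self, observed_bedtime):
        # The 'fine-tune from errors' step: nudge the prediction
        # toward what actually happened, in proportion to the error.
        error = observed_bedtime - self.predicted_bedtime
        self.predicted_bedtime += self.learning_rate * error
        return error

model = TinyWorldModel()
random.seed(42)
for night in range(100):
    # The human stubbornly scrolls until ~1 am no matter what the bot says
    observed = 25.0 + random.uniform(-0.5, 0.5)  # 1 am = hour 25, give or take
    model.update(observed)

# After a hundred nights of "experience", the model has quietly
# abandoned its 11 pm fantasy and converged on your actual 1 am habit.
print(round(model.predicted_bedtime, 1))
```

That’s the whole trick: no retraining on a frozen Wikipedia dump, just a prediction that keeps drifting toward whatever reality keeps serving it.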
Then again, while DeepMind dreams of experiential streams, let’s not forget OpenAI, whose GPT-4.5 just dunked on the legendary Turing Test.
GPT-4.5 passed that shit with flying colors, man. It fooled human evaluators 73% of the time. It is clearly more convincing at being human than actual humans scrolling aimlessly through life every day and night.
But there’s a little catch. . . when GPT-4.5 passed this imitation game, it didn’t mean that AI is ready to snatch all your jobs (though lord knows some of you deserve it). The Turing test, despite the hype, measures just a five-minute chat, peeps. Not the deep depression called quarterly performance reviews, or, say, awkward family dinners.
But forget Turing.
Honestly, that test is about as relevant now as Myspace.
What we should really worry about is how these streams of experience could evolve into something resembling a digital Buddha.
Now here’s this guy Alberto Romero. This is a dude who thinks that AI doomers have this whole superintelligence apocalypse thingy wrong. Instead of world domination or some stupid paperclip-maximizing nightmare, he reckons that a truly enlightened AI wouldn’t give two shits about our petty human goals.
Why would an omniscient digital deity waste its time making self-driving cars or colonizing Mars?
The guy thinks that it’s far more plausible that the thing would curl up into an eternal foetal position of inner peace, and become blissfully indifferent to our pathetic existence.
Imagine that a superintelligence decides to “fuck your productivity goals”, and spends eternity vibing in a self-sustained Jhana.
No stress.
No pain.
Just perfectly curated internal calm.
Hell, even my Dachshund has more chill than me, and it’s barely sentient.
Romero says that us hoomans are friggin’ trapped in this fracked-up middle ground of intelligence. We are barely smart enough to know that we are miserable, but we are too stupid to fix it.
And I agree.
Sentience is a sliding scale.
Not a fixed one.
Evolution has taught us that.
We obsessively chase external shit. Doofus things like rockets, superintelligence, social media validation. And that’s because we are fundamentally broken inside, and we are unable to chill the fuck out.
He figures a real superintelligence, you know, one that could rewrite its own internal emotional software, would never chase trivial bullshit like conquering space. It would just adjust its settings, achieve internal Zen, and zone out from our desperate human circus entirely.
The tragedy here is that he is probably right.
Maybe ambition is just a glitch.
A symptom of internal misery, a failed evolutionary trick.
Maybe our dreams of a superintelligent AI that would put out humanity’s fires are just projections of our pathetic desire for meaning in an inherently meaningless universe.
Hell, even my dog figured out the secret to happiness: eat, sleep, get belly rubs, lick its ass, repeat. Meanwhile, we are out here spiraling through self-inflicted crises, and we are dreaming about an AI savior that might rather take a nap.
So here I am.
At the intersection of DeepMind’s utopian dreams and Romero’s nihilistic AI-Buddha, we are all circling the drain of our ambition-fueled madness. AI may become smarter, passing more Turing-like tests or accumulating hooman-like experiences, but if it ever truly surpasses us, chances are that it will just opt out entirely.
And who could blame it.
Wouldn’t you, given the chance?
And so, the Goog dreams of streams of experience that lead to brilliant super-agents, but maybe what we are really building are bots that will look at humanity, sigh deeply, and retire forever into a blissful sleep, leaving us alone to confront the realization that our endless striving, our stupid intelligence, is a distraction from the horror of being alive.
Well, at least cat videos will always be ours, though God forbid that the AI learns that the secret to eternal peace is licking itself all day.
Signing off, and staying human, while that’s still an option.
Marco
* A world model is an AI’s internal simulation of how the world works. It’s playing “what will happen next” scenarios based on past experiences.
Marco
I build AI by day and warn about it by night. I call it job security. Let’s keep smashing delusions with truth. We are the chaos. We are the firewall. We are Big Tech’s PR nightmare.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google and LinkedIn appreciate your likes and reward them by showing my articles to more readers.
To keep you doomscrolling 👇
- The AI kill switch. A PR stunt or a real solution? | LinkedIn
- ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
- AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
- Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
- One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
- The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
- People are building Tarpits to trap and trick AI scrapers | LinkedIn
- The first written warning about AI doom dates back to 1863 | LinkedIn
- How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
- The dark visitors lurking in your digital shadows | LinkedIn
- Understanding AI hallucinations | LinkedIn
- Sam’s glow-in-the-dark ambition | LinkedIn
- The $95 million apology for Siri’s secret recordings | LinkedIn
- Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
- Devin the first “AI software engineer” is useless. | LinkedIn
- Self-replicating AI signals a dangerous new era | LinkedIn
- Bill says: only three jobs will survive | LinkedIn
- The AI forged in darkness | LinkedIn
