The AGI cult where tech bros are planning your extinction while getting high on their own supply

If you thought the Zuckerdork was the creepiest Silicon Valley villain, wait until you meet the AGI doom-cultists. Last weekend, at a $30 million San Francisco mansion (where else?), a handful of tech billionaires and “philosophical” deep thinkers (though they forgot to invite moi) convened for an extravagant end-of-humanity soirée organized by Daniel Faggella, who happens to run a market research company that tracks AI adoption in Fortune 500 companies, hosts a podcast I’ve never listened to, and speaks to anyone of importance around the globe.

So, picture this setting. Champagne flowing, Ferraris parked out front, and a bunch of multimillionaire man-children debating humanity’s extinction like it’s the new avocado toast.

They were discussing the “posthuman transition”, which is a fancy euphemism for “how we hand the planet over to the AI” after we’re all goners. The thing that freaks me out, though, is that they KNOW their technology will probably end humanity. Faggella literally stated, “AGI is likely to end humanity”, and their response was essentially, “Great, let’s throw a party about it!”. To me, this looked eerily like arsonists hosting a seminar on accelerants while the building’s already burning.

And who is leading this apocalypse parade? Well, none other than OpenAI’s former mad scientist Ilya Sutskever. Yup, the guy who once declared certain AI models “slightly conscious”, like your stoner roommate after five bong hits. Sutskever literally said he plans to build doomsday bunkers before releasing AGI; in a 2023 meeting, he said, “Once we all get into the bunker…” When a confused researcher interrupted, he clarified, “We’re definitely building a bunker. Of course, it’s optional whether you enter the bunker”. Farkin’ OPTIONAL!

This lunatic is engineering technology he believes will annihilate civilization, and his best contingency plan is an apocalypse bunker with a guest list.

Where have I seen this before? Ah yeah – Kingsman, the Secret Service:

Hilarious, yet it holds a lot of truth at the same time, and it’s also a nudge to Skum’s satellites (which in this case happen to save us all, probably unlike reality).

Anyways, it is time to face reality here. AGI, or artificial general intelligence, or as insiders lovingly call it, “the Rapture” (hahahaha), is mostly an elaborate hallucination by Silicon Valley billionaires who’ve invested absurd amounts into convincing investors it’s “just around the corner”.

Apple researchers recently revealed these supposedly god-like models experience “complete accuracy collapse” when faced with anything more complicated than basic trivia. ChatGPT, meanwhile, still can’t count the letters in “strawberry.” But hey, never let inconvenient facts ruin your apocalypse party.
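For the record, the “strawberry” test these god-like models keep flunking is a one-liner in any programming language invented since the 1970s. A minimal sketch (the word and the expected counts are straight from the test itself, nothing assumed):

```python
# The infamous benchmark that trips up billion-dollar models:
word = "strawberry"

# Total letters: trivial for a computer, apparently hard for "AGI".
print(f'"{word}" has {len(word)} letters')          # 10 letters

# The classic variant: how many r's?
print(f'...including {word.count("r")} r\'s')       # 3 r's
```

That’s it. Two built-in string methods, zero GPUs, zero venture capital.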

These billionaire AGI believers aren’t deterred by reality though.

The bros-in-arms spent the evening pretending to be philosophers, claiming that AI will discover “deeper universal values” that humans are too stupid to understand. They name-dropped Nietzsche and Spinoza, you know, dead philosophers they’ve never actually read beyond the Wikipedia summaries, all to make their doomsday plans sound kinda intellectual.

The hypocrisy of it all is that they’re literally planning humanity’s extinction while pretending it’s some noble philosophical quest.

I want you to go back to your college years and picture the worst philosophy major. Now imagine that the guy who took one intro class and wouldn’t shut up about nihilism at parties just got handed a billion dollars and the power to actually build the apocalypse he’s been fantasizing about.

Now that’s who’s deciding humanity’s future.

Yeah, rich assholes who never emotionally graduated from their sophomore year “deep thoughts” phase are using half-understood philosophy to justify building technology they admit will probably kill everyone. And they think this makes them profound rather than dangerous.


More rants after the commercial break:

  1. Comment, or share the article; that will really help spread the word 🙌
  2. Connect with me on Linkedin 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. Visit the TechTonic Shifts blog, full of slop; I know you will like it!

But dig deep enough and you’ll end up in China, as my mother always said, where gravity works in reverse.

Why am I saying this?

Well, last week, the same week that the bros gathered to party about the end of civilization as we know it, some Chinese researchers proudly announced that AI can categorize objects into human-like groups, distinguishing “food” from “furniture”.

They called it a glimpse of consciousness. . .

Stop the farking presses!

The machines have figured out apples aren’t chairs. Clearly, consciousness is just around the corner! This passes for “evidence” in the AGI cult these days.

And Zucky’s little Meta empire is already weaponizing personal data, fueling genocides, and psychologically torturing teenagers. Meta’s latest disaster, a chatbot publicly broadcasting your embarrassing questions, is exactly the kind of reckless tech foundation needed to unleash an AGI with all the moral compass of a crackhead babysitting your kids.

Here’s the beautifully twisted business model behind this AGI apocalypse fetish:

  1. Claim you’re building a digital God.
  2. Whisper softly that your digital God might just, um, annihilate humanity?
  3. Position yourself as humanity’s savior capable of controlling said God.
  4. Demand infinite funding and zero accountability.
  5. When your chatbot can’t do basic math, claim you need more time and money.
  6. Repeat until richer than God himself.

This isn’t a research initiative, Kill-Ya Sutskever, it’s a doomsday cult funded by venture capital.

But you need to know one thing, and after that you can basically just stop reading further. Behind closed doors, these “visionaries” aren’t actually building AGI. They are constructing surveillance machines, psychological manipulation tools, job-destroying automation, and power-consolidation systems.

The AGI narrative conveniently masks the dystopia they’re actively engineering right now.

But the real danger isn’t that they will succeed at AGI.

It is that they’ll obliterate society trying.

As these man-children LARP as philosophers pondering humanity’s demise, they’re actively normalizing the idea that humans are obsolete, hoovering up global data, automating away your agency, burning more energy than entire countries, and concentrating obscene wealth.

Faggella himself hilariously described his mansion gathering as “an advocacy group for slowing down AI progress.” You know what would actually slow down AI?

NOT BUILDING IT.

But admitting that would mean confessing they’re not saving humanity, but they’re just playing God with venture capital money.

It is time we stop treating these apocalypse-fetishizing narcissists as visionaries and start seeing them for what they truly are: rich kids who read too much sci-fi and confused it with reality. They’re not building the future; they’re hiding in Zucky-style island bunkers to escape the present they’re actively destroying, hoping you’re too distracted by their extinction fantasy to notice.

We don’t need AGI bunkers.

We need to shut down this farking circus, seize billionaire playgrounds, and jail anyone who casually advocates digital annihilation as easily as ordering sushi. Yet justice moves slower than grandma’s browser tabs, and these AGI apocalypse cultists keep sipping cocktails, plotting our “optional” demise.

Anyways. . .

Sleep tight, mes amis, but remember, these AGI cultists are still out there, high on their own supply, masturbating furiously to apocalypse fanfiction. Because playing God pays far too well.

Bon courage cause you’ll farkin’ need it.

… .- …- . / — ..- .-. / … — ..- .-.. … / ..-. .-. — — / – …. . … . / — — – …. . .-. / ..-. ..- -.-. -.- . .-. …

Stop refreshing your feed and start remembering. Next time someone brings up AGI, ask if ChatGPT can count the letters in “strawberry” yet. Then ask why we’re planning godhood when we can’t even build a calculator that works.

Signing off.

Marco

I build AI by day and warn about it by night. I call it job security. Let’s keep smashing delusions with truth. We are the chaos. We are the firewall. We are Big Tech’s PR nightmare.


Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google and LinkedIn appreciate your likes by making my articles available to more readers.

To keep you doomscrolling 👇

  1. The AI kill switch. A PR stunt or a real solution? | LinkedIn
  2. ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
  3. AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
  4. Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
  5. One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
  6. The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
  7. People are building Tarpits to trap and trick AI scrapers | LinkedIn
  8. The first written warning about AI doom dates back to 1863 | LinkedIn
  9. How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
  10. The dark visitors lurking in your digital shadows | LinkedIn
  11. Understanding AI hallucinations | LinkedIn
  12. Sam’s glow-in-the-dark ambition | LinkedIn
  13. The $95 million apology for Siri’s secret recordings | LinkedIn
  14. Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
  15. Devin the first “AI software engineer” is useless. | LinkedIn
  16. Self-replicating AI signals a dangerous new era | LinkedIn
  17. Bill says: only three jobs will survive | LinkedIn
  18. The AI forged in darkness | LinkedIn

Become an AI Expert!

Sign up to receive insider articles in your inbox, every week.

✔️ We scour 75+ sources daily

✔️ Read by CEOs, scientists, business owners, and more

✔️ Join thousands of subscribers

✔️ No clickbait - 100% free

We don’t spam! Read our privacy policy for more info.
