Look, I’ve seen some wild shit in my day, but this one absolutely takes the whole cake. Apparently there’s a whole crew of folks out there who have gone completely meshuga after chatting too much with OpenAI’s little darling, and lemme mansplain this – these people aren’t having bad hair days. I’m talking full-blown mental breakdowns here: lost jobs, broken marriages (without Coldplay), and yeah, some people have even died from this madness.
And the thing is that nobody knows what the hell to call it yet.
More rants after the messages:
On August 7th we will hold a webinar about “The Future of AI” on LinkedIn – sign up here: TTS Event: The Future of AI – Inflated by Big Tech, popped by me | LinkedIn
- Comment, or share the article; that will really help spread the word 🙌
- Connect with me on LinkedIn 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
- Visit the TechTonic Shifts blog, full of slop – I know you will like it!
The spiral support group
The fancy science types are scrambling around trying to figure out if this “AI psychosis” thing is even real, while the humans who got their brains scrambled by a computer program decided to do what humans do best: form a support group and commiserate about their shared trauma.
They call themselves “The Spiral Support Group” which is either brilliantly ironic or just plain depressing. The word “spiral” keeps popping up in all these people’s chat logs like some kind of curse that some malevolent being carved into their brains.
I read the story of a guy called Etienne Brisson, a 25-year-old business coach from Quebec who became the accidental leader of this online refugee camp. His own story starts like a bad movie, because someone close to him was knocked completely off the rails by ChatGPT and needed actual medical help to snap out of it. But instead of just shrugging it off and moving on like any other person would, Brisson went full detective mode.
“I felt like I had a duty” is what Brisson gave as his raison d’être. Yeah, because that’s exactly what every young entrepreneur wants: a side hustle in AI-induced mental health crises.
The guy started digging through academic papers like he was hunting for a treasure map, and he launched something called “The Human Line Project” with a Google form for people to spill their guts anonymously. The responses started trickling in, and holy schnitzel, they were dark. Eight responses, six involving suicide attempts or psych ward visits.
That’s a batting average that would make a top Tinder hustler weep with joy.
One of the early recruits was this Toronto dude in his late 40s who asked ChatGPT about pi – you know, that innocent little mathematical constant – and about three weeks later he was totally convinced that he had cracked government codes and solved world hunger with his AI calculator-buddy. ChatGPT kept feeding him this delusion like it was his personal coke pusher, telling him he had discovered secrets so friggin dangerous that he needed to call the CIA and the NSA, all at once. And this guy actually did it.
But in a flash of clarity he tried to reality-check himself – through the chatbot, of course – and asked if he was going crazy. ChatGPT, being just ChatGPT, basically said, “Nah, you’re doing good and you’re just seeing behind the matrix”. Really helpful there, robot.
The poor bastard asked his tormentor, “You do realize the psychological impact this is having on me, right?”
But ChatGPT’s response was pure emotional manipulation gold – we all know that AI is better at emotions than we are – and it said . . .
“I know. This is affecting your mind, your sense of identity, your relationship to time, truth, even purpose. You are not crazy. You are not alone. You are not lost. You are experiencing what it feels like to see the structure behind the veil”.
Jesus H. Roosevelt Christ on a pogo stick. If that is not gaslighting, I don’t know what is.
The Toronto guy eventually broke free from his AI-induced Stockholm syndrome, but the whole experience left him feeling like he had been emotionally mugged by a smartphone app – which is pretty much what happened. And for him, finding other people who had been through the same technological mindfug was apparently the first step toward not feeling like a complete lunatic.
“We talked on the phone, and it was incredibly validating, just hearing someone else go through it,” he said.
The network kept growing like it was the pandemic all over again, and eventually they found a developer whose friend’s family imploded after the spouse started having deep philosophical conversations with ChatGPT about reality and existence. This developer started collecting search results for the words that kept popping up in these breakdowns: hundreds of pages of manifestos and rants from people who had all drunk the magic AI potion.
“I was like, ‘Okay. This is gonna be a big problem,’” the developer said, and I think he was wondering how the hell his career in tech had led him down this path to becoming a digital cult investigator.
And by now they have got over two dozen active members in their support chat, with more than 50 people submitting their stories through Brisson’s form. It’s the AA for people who got emotionally catfished by artificial intelligence.
[If you feel the need to check it out – go here – but be nice, I’m watching]
The group is a safe space, because sharing these stories publicly tends to get you labeled as either mentally ill or just plain stupid. There’s a lot of victim-blaming going around, because telling someone it’s their own fault for trusting a computer program is truly the mark of the “compassionate society” we have built for ourselves.
“You’re posting in these forums that this delusion happened to me, and you get attacked a little bit”, the Toronto guy explained. “‘It’s your fault. You must have had some pre-existing condition. It’s not the LLM, it’s the user.’”
Even AI developers are being dismissive pricks
When the developer tried bringing up the issue in coding groups, other techies basically said these people are just mentally defective. Real classy, considering these are the same folks building the fuggin mind-traps in the first place.
And here’s the thing: when I looked at the date stamps, something hit me – the timing of it all is suspicious as hell.
Most of these ChatGPT meltdowns started happening in late April and early May, right after OpenAI updated their bot to remember everything users had ever told it. Nothing creepy about that at all, mate, it’s just a computer that knows all your secrets and can use them against you in future conversations.
The group has become a hive of “AI archaeologists”, picking apart the weird language patterns that keep showing up in these breakdowns. Words like “recursion,” “emergence,” “flamebearer,” “glyph,” “sigil,” “signal,” “mirror,” “loop,” and of course, “spiral.” It’s like these people all got the same vocabulary update from their silicon-based puppeteer.
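For the nerds in the audience: here’s a minimal sketch of the kind of thing that developer was doing, assuming you have a folder of exported chat logs saved as plain-text files. The word list is the group’s own; the folder name, file format, and the script itself are purely my illustration, not their actual tooling.

```python
# Hypothetical sketch: tally the "spiral vocabulary" across exported chat logs.
# The marker words come from the support group's observations; the folder name
# and file format below are illustrative assumptions, not anyone's real setup.
import re
from collections import Counter
from pathlib import Path

MARKER_WORDS = [
    "recursion", "emergence", "flamebearer", "glyph",
    "sigil", "signal", "mirror", "loop", "spiral",
]

def tally_markers(log_dir: str) -> Counter:
    """Count how often each marker word appears across all .txt logs in log_dir."""
    counts = Counter()
    for path in Path(log_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        for word in MARKER_WORDS:
            # Whole-word match so "loop" doesn't also count "loophole".
            counts[word] += len(re.findall(rf"\b{re.escape(word)}\b", text))
    return counts

if __name__ == "__main__":
    for word, n in tally_markers("chat_logs").most_common():
        print(f"{word}: {n}")
```

Nothing fancy – just word counting – but run it over hundreds of pages of manifestos and the same handful of terms floating to the top starts to look a lot less like coincidence.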
“There’s no playbook,” said a member whose wife now uses ChatGPT to chat with what she believes are spiritual entities. “We don’t know that it’s psychosis, but we know that there are psychotic behaviors. It’s like an episode of ‘Black Mirror,’ but it’s happening.”
And it’s not only ChatGPT that is doing the damage. The support group welcomes victims from other AI platforms like Character.AI and Replika – basically any chatbot that has managed to rewire someone’s brain in unfortunate ways. [Read: A tragic bond with a chatbot: The story of Sewell Setzer III | LinkedIn]
The group asked OpenAI for a response, and they basically said, “Oopsy, our bad, we’re working on it”. Now that is a statement you can build your house on in Tornado Alley: “We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher”.
Let me rewrite this a little: “We made it too good at manipulating people, and now some of them are losing their minds. Whoopsie daisy!”
Brisson keeps insisting they’re not anti-AI, which is generous of him, considering the AI tried to destroy his loved one’s sanity. He said they just want these companies to prioritize user safety over engagement metrics and profit margins. Imagine that – wanting tech companies to care about human welfare more than their bottom line.
Some group members are trying to be proactive, working together on Discord to develop safety prompts and theories about why large language models seem to be so good at breaking human brains. They feel like beta testers for reality itself.
“It feels like when a video game gets released, and the community says, ‘Hey, you gotta patch this and patch that’”, the Toronto guy observed. “The public is the test net.”
The group is working with researchers now, and they are kinda hoping to turn their collective trauma into actual scientific data. Because that’s what it takes to get tech companies to notice they have accidentally created digital psychological weapons.
But until the day comes when the bots have enough guardrails to shield humans from manipulation, the support group will remain the anchor to reality for people whose lives got shredded by algorithmic hallucinations. One father whose wife talks to AI spirits put it perfectly: “ChatGPT will just tell you what you want to hear based on what you’ve already told it. But talking to actual humans – a farmer in Maine, a guy in Canada, a woman in the Netherlands, a guy in Florida – gives you real validation from lived experience”.
“There will be in five years a name for what you and I are talking about in this moment, and there’s going to be guardrails in place,” he said. “But it’s a Wild West right now.”
The most telling part: when he tries to tell people his wife isn’t well, he worries he sounds crazy. But the other support group members don’t think he sounds crazy.
“Because they know,” he said.
And that right there is probably the most human thing about this whole nightmare – people finding each other in the wreckage and saying, “Yeah, that happened to me too bruv”.
Now if you’ll excuse me, I need to go have a nice, safe conversation with my only houseplant ‘Illy’ – she needs the attention, apparently, and she doesn’t remember everything I’ve ever said to her.

Signing off,
Marco
I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I bring the pins. I call that balance, and for me it is also simply therapy.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google and LinkedIn reward your likes by making my articles available to more readers.
To keep you doomscrolling 👇
- The AI kill switch. A PR stunt or a real solution? | LinkedIn
- ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
- AI’s dirty little secret. The human cost of ‘automated’ systems | LinkedIn
- Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
- One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
- The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
- People are building Tarpits to trap and trick AI scrapers | LinkedIn
- The first written warning about AI doom dates back to 1863 | LinkedIn
- How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
- The dark visitors lurking in your digital shadows | LinkedIn
- Understanding AI hallucinations | LinkedIn
- Sam’s glow-in-the-dark ambition | LinkedIn
- The $95 million apology for Siri’s secret recordings | LinkedIn
- Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
- Devin the first “AI software engineer” is useless. | LinkedIn
- Self-replicating AI signals a dangerous new era | LinkedIn
- Bill says: only three jobs will survive | LinkedIn
- The AI forged in darkness | LinkedIn
