Thinking hurts, doesn’t it? It is like a muscle that you forgot to use. It’s like trying to run after sitting on your ass for three years straight. But it didn’t always hurt. Because there was a time when your brain actually worked. A time before you let AI hijack it.
Oh, don’t look at me like that.
You know exactly what I mean.
You used to write your own emails.
You used to proofread your own reports.
You used to think through problems instead of tossing them to some digital parrot and waiting for it to regurgitate something vaguely competent.
But now? Now you have become a glorified AI babysitter. You are sifting through machine-generated mediocrity and pretending that it is “efficiency”.
Yeah. Sure.
Efficiency. If that’s what we’re calling cognitive decline these days.
And don’t think you’re special. Microsoft already did the math. They ran the tests, and the results are in. . . AI is chewing through your brain like a virus, and if you don’t start paying attention, there won’t be much left to save.
But hey, at least your emails are grammatically correct.
Now, what am I talking about?
Microsoft and Carnegie Mellon University did a study called “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers”. It’s a mouthful, and it’s about how generative AI influences “critical” thinking among professionals like you. If you are interested in reading it, or you want some proof that this ain’t something an AI coughed up, be my guest – it’s right here.
They found out that while AI makes work faster, it also makes people think less.
People who use AI often rely on it too much, and they rarely double-check its answers. The more they depended on AI, the less they engaged in critical thinking. This weakens problem-solving skills over time and makes people more likely to accept wrong or biased AI-generated information without question.
Hahahaha. . . .
That explains a lot, doesn’t it, Marc Drees?
If you like my rants and want to support me:
- Comment, or share the article; that will really help spread the word 🙌
- Connect with me on Linkedin 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
- TechTonic Shifts also has a blog, full of rubbish you will like !
Your honor, ChatGPT made me do it
Imagine you are in a courtroom, watching a serious case being argued. A Stanford professor is standing in front of a judge, looking like a student who just realized, to his horror, that he plagiarized his entire thesis.
His excuse? “Uh, your honor, ChatGPT told me it was true”. That’s where we are now.
Jeff Hancock is a seasoned academic. Heck, he’s an expert in AI-based misinformation, no less! The guy got blindsided by the very thing that he warns people about. Because even the so-called experts are outsourcing their thinking to AI.
He is a professor specializing in AI and communication, and he submitted a legal document that was meant to defend Minnesota’s new law on AI-generated election deepfakes.
Hahahahaha. . .
Sorry for laughing. . .
It was highly cynical laughter.
His filing included a citation from a supposedly real study. Except. . . it wasn’t real. Not even close. The study was allegedly written by Huang, Zhang, and Wang (AI’s favorite randomly generated authors – I’ve come across them many times). But they simply didn’t exist. The journal volume he cited covered climate change and election results, not deepfakes. But Hancock didn’t fact-check it, like so many professionals who outsource their thinking. He trusted AI to do the thinking for him, and now he is trying to explain why he swore under oath on research that might as well have been pulled from the Onion.
And it’s not only academics who are making these mistakes.
Apparently this is a thing with lawyers. They have been making an absolute spectacle of themselves by handing AI-generated legal nonsense straight to judges. Last year, a guy called Steven Schwartz – he is (or was, I haven’t checked) a New York attorney – submitted a legal brief that was riddled with fake court cases.
ChatGPT had fabricated entire judicial opinions, and Schwartz didn’t verify them. He simply assumed that AI had done his homework for him. And when a critically thinking judge caught on, Schwartz tried to explain that he didn’t know AI could lie. He had to pay a $5,000 fine and got a permanent stain on his career, all because he thought AI was some kind of brazen head (just Google it, I ain’t your mom) instead of a friggin autocomplete.
But this isn’t just a couple of one-off cases. It is a pattern. AI is turning professionals into blind believers, tricking them into thinking that it’s smarter than it really is. Instead of using AI as a tool, they are treating it like a magic oracle. The issue isn’t that AI makes mistakes; it’s that people assume it doesn’t. Lawyers, professors, and other professionals are skipping the most basic step, and that is . . . verification.
They are trusting AI blindly, and not because it’s reliable, but because it is convenient. And convenience is killing competence.
Even major law firms are falling into this trap.
Take Morgan &amp; Morgan (no, not Thomson and Thomson, but close). M&amp;M is a massive law firm that got caught citing hallucinated legal cases in a lawsuit against Walmart. Their solution to this growing problem in their legal practice was not better training, nor stricter oversight, but just a checkbox.
Yes, lawyers now have to click a little box (call it “we take accuracy seriously”) confirming that they have used their brains, because AI might fabricate shit. Obviously, that will stop them from making the same mistake again!
This is where we’re headed, people.
Professionals in every industry are handing off their thinking to machines and they are assuming that the results are flawless. It’s embarrassing and of course it’s dangerous, because when experts stop verifying information, we stop being able to trust expertise. AI can be an incredible tool, but it’s not a replacement for human judgment. The moment that we stop questioning it, we stop thinking altogether. And if that happens, well, don’t be surprised when the next legal defense in court is: “Your Honor, ChatGPT made me do it”.
The critical thinking crisis in the AI era
Real researchers and tech buffs agree on one thing: there is one skill that stands tall above the rest in the age of AI, and that is critical thinking. They aren’t talking about coding skills, or being the best at gene editing, but the ability to think for yourself.
I didn’t make this up.
And AI didn’t either – though it did help me with the research.
Sure, AI can churn out answers faster than a squirrel with an overload of dopamine, laced with some (S)-1-phenylpropan-2-amine (didn’t have to look that one up, ’cause I’m a chemist), but without human insight, those answers aren’t very useful.
Jayesh Govindarajan from Salesforce has the guts to admit that knowing how to code is cool and all, but the real power lies in problem-solving, not just feeding an AI model and hoping for the best. It’s the humans, you know, those pesky creatures who refuse to let the machines call all the shots, who bring the context and the creative spark that AI will never replicate.
The machines make things faster, but we have to ask the right questions. And if we don’t, we’re nothing more than obedient button-pushers.
And it’s not just tech honchos that are saying this.
The World Economic Forum has also made it clear that critical thinking is what matters most in this brave new AI-powered world. People who can make decisions with AI are the ones who will thrive, because they are not doing what the machine tells them to. . . they are making it work for them. Forbes backs this up, saying that without critical thinking, we’re all just mindlessly plugging data into AI models and praying for gold.
And here’s the gut punch.
A study in the journal Societies also shows that when we lean too heavily on AI, our ability to think critically starts to wither.
Relying on AI to do our thinking for us is like getting a six-pack without ever hitting the gym. You’re not building muscle, you’re building dependency. And to continue the muscle analogy: when you don’t use your muscles, you get muscle atrophy. The same goes for AI. You will lose your critical thinking altogether.
This is the danger of AI-over-reliance.
It seems like it’s a time-saver, but the more we let AI handle the brainwork, the less we sharpen our own wits. And if we’re not careful, we’ll wake up one day unable to think for ourselves, with AI running the show like a backseat driver on a joyride.
It’s no surprise that businesses are looking for people who can blend imagination with analysis. A report from WPP and YouGov makes it clear that the future of business growth won’t be powered by AI alone. Critical thinking, imagination, and judgment are the fuel that will drive this machine.
It’s time to recognize that AI’s role isn’t to replace us, but to work alongside us.
That is, until it is able to solve all those issues it is dealing with.
We still need to stay in control, and steer the ship and let the machines help us navigate the storm.
Your brain is rotting away
Ok, here comes the hard truth. If you’re not going to use your brain to think critically, it is already halfway to being a rotting pile of mush. If you think that mindlessly scrolling through social media or binge-watching TV shows is harmless, then you’re dead wrong. It’s a slow, agonizing death march for your cognitive health.
A study from the Rush Alzheimer’s Disease Center found that people who do not engage in mentally stimulating activities – actual reading, solving real problems, learning new skills. . . a.k.a. real thinking – are basically giving Alzheimer’s the green light. In fact, the research shows that people who actively challenge their brains can delay Alzheimer’s by as much as five years.
Five years of clarity, Marc. .
Five years of remembering your own name before your brain completely forgets how to work. Yet here you are, glued to your phone, reading my shit, and thinking that “it’ll all be fine” while your brain rots away. Get your intel right here: Rush Alzheimer’s Disease Center, Journal of the American Medical Association (JAMA), 2013.
And the University of Edinburgh research backs this up with an even more unsettling claim.
Their study found that people who stop doing critical thinking or mental tasks actually experience a faster decline in brain health. Mental stagnation accelerates brain degeneration, and that leads to dementia. Your brain isn’t some passive organ that you let sit there while you mindlessly follow the latest trends. It needs constant workouts, like any other muscle in your body. If you ain’t doing that, it’s a one-way ticket to the land of forgetfulness, where your past is a fog and the present is a blur. Real research, here you go: University of Edinburgh, Neurology Journal, 2015.
Want the cherry on top of this mental disaster?
A study published on 31 January 2025 by UCL researchers in Brain Communications has collected data from over 450 people since their birth in 1946. The study reinforces what we all suspected: if you’re not regularly challenging your mind, you’re setting yourself up for a fast decline.
But the study also points out that people who engage in brain-stimulating activities – you know, actual reading, problem-solving, or just thinking – are more likely to keep Alzheimer’s disease at bay.
It’s time we learn from this and stop treating our brains like overworked machines, relying too much on AI. Treat yours like the precious, fragile thing that it is.
So congratulations on your new job
You’re a hallucination editor now
You think that AI is replacing workers?
Nah.
It’s just redefining what work is.
Used to be, writers wrote. Programmers coded. Analysts analyzed. But they don’t do any of that anymore. They just correct AI when it screws up.
Which, by the way, is a full-time job.
Because AI does screw up.
Constantly.
And when it does, you, the proud, highly skilled, “future-proof” you, get to sift through the mess. You will be patching up half-baked machine logic like a janitor cleaning up after a particularly drunk office party.
And that’s if you’re experienced enough to notice the mistakes.
The fresh hires, you know, the kids who never had to think critically before AI took over? They don’t even see the errors. They just assume the machine is right. They copy-paste, move on, and call it productivity.
They’re not engineers. They’re not writers. They’re not analysts.
They are glorified AI chaperones.
Thinking is hard. AI is easy. Guess which one you pick
Oh, but it gets worse. See, you could fight back. You could force yourself to think critically. You could make a conscious effort to resist AI’s pull.
But you won’t.
Thinking is exhausting, and it’s easier to let AI do the work on your behalf. Because, let’s be honest, the dopamine hit of instant answers is too damn tempting.
It’s the same reason why we all rely on GPS instead of learning to navigate. The same reason people use calculators instead of doing mental math. The same reason nobody remembers phone numbers anymore.
Convenience kills competence.
Every.
Single.
Time.
And AI? AI is the most convenient thing humanity has ever created.
So yeah. You could fight back. You could resist.
But you won’t.
It’s like self-driving cars. When you first start using autopilot, you stay alert. You’re careful. But over time, you start trusting it too much. You stop paying attention. And before you know it, you’re asleep at the wheel, and boom!
Now apply that to your brain.
How many times have you blindly accepted an AI-generated answer? How many times have you trusted the machine instead of verifying it yourself? How many times have you let AI do the thinking for you?
You don’t even know, do you?
That’s the problem.
So wake the hell up.
Look, I’m not against AI, although it seems like it sometimes. AI is fine. AI is useful. AI is a tool.
But you are using it wrong.
You’re letting it turn you into a mindless drone. You’re letting it replace your ability to think. You’re letting it eat away at the one thing that actually makes you valuable: your intelligence.
And if you don’t wake up? If you don’t start fighting back?
You won’t have to worry about AI taking your job.
You won’t have a job.
Now tell me. Are you still in control?
Or is AI just waiting for you to shut off completely?
Signing off from an overactive brain and wanting to do some mindless doomscrolling
Marco
Well, that’s a wrap for today. Tomorrow, I’ll have a fresh episode of TechTonic Shifts for you. If you enjoy my writing and want to support my work, feel free to buy me a coffee ♨️
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google appreciates your likes by making my articles available to more readers.
To keep you doomscrolling 👇
- The AI kill switch. A PR stunt or a real solution? | LinkedIn
- ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
- AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
- Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
- One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
- The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
- People are building Tarpits to trap and trick AI scrapers | LinkedIn
- The first written warning about AI doom dates back to 1863 | LinkedIn
- How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
- The dark visitors lurking in your digital shadows | LinkedIn
- Understanding AI hallucinations | LinkedIn
- Sam’s glow-in-the-dark ambition | LinkedIn
- The $95 million apology for Siri’s secret recordings | LinkedIn
- Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
- Devin the first “AI software engineer” is useless. | LinkedIn
- Self-replicating AI signals a dangerous new era | LinkedIn
- Bill says: only three jobs will survive | LinkedIn
- The AI forged in darkness | LinkedIn