Yup. It is about time we got a new piece of research on AI (so he wrote, with a sarcastic smirk on his face).
Yesterday I wrote about companies all over the globe shoving AI into their products, and how a quarter of their customers simply hate that. The moment I finished it, another piece of research popped up stating that companies everywhere are ALSO shoving AI tools into workplaces.
Yeeeeey. It feels like they’re handing out free candy at Walmart.
And sure, there’s decent logic behind this madness, because early studies say that AI assistants can boost performance a little on mind-numbing tasks (while others say the exact opposite – claiming they fark up about 70% of all tasks – but hey, who’s counting, certainly not me, cause it makes good content, amma right?), that they help leaders communicate better without sounding like complete morons, and that they help them expand their customer bases faster.
But a few researchers, with nothing else important to do, and wanting a bit of fame, ventured out with the following question:
“What happens when we let AI play executive wingman in high-stakes decision-making scenarios?”
Well now, this could be interesting, though the title kinda gave it away already.
They decided to run a little experiment that would make data nerds all ’round weep with joy. They hustled more than 300 managers and executives – you know, the type who think they’re geniuses because they wear expensive suits and drink $8 lattes – and asked them to predict stock prices after reviewing past trends.
Now guess what happened.
Half got to chat with their equally delusional peers, while the other half could consult ChatGPT et al. like it was some kind of fortune teller. Then these ‘brilliant’ minds could revise their predictions if they wanted to, because first impressions are for us mere peasants.
The results were holy Scheiße hot!
ChatGPT made executives significantly more optimistic in their forecasts, while peer discussions encouraged caution like a therapy group for gambling addicts. But be careful… optimistic doesn’t mean right: executives who were armed with ChatGPT made worse predictions than they had before consulting the tool, measured against actual stock figures that don’t lie (or kiss ass).
More rants after the commercial break:
- Comment, or share the article; that will really help spread the word 🙌
- Connect with me on Linkedin 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
- Visit the TechTonic Shifts blog, full of slop; I know you will like it!
The experiment that broke my faith in human intelligence (again)
This study was conducted during executive-education sessions on AI between June 2024 and March 2025, because that’s when smart people apparently have time to play with their toys. The participants were managers and executives from various companies, taking part in what they probably thought was some kind of intellectual dick-measuring contest. It started by showing everyone Nvidia’s recent stock price chart – you know Nvidia, that chipmaker whose fortunes have skyrocketed faster than a SpaceX rocket blows up, all of that thanks to powering AI technologies back in 2024.
Nvidia’s share price had been climbing like a coked-up ape, which made it perfect for testing forecasting ability. (And it certainly grabbed these executives’ attention, like a shiny object catches an ape’s eye.)
Each participant started by making an individual forecast.
What did they expect Nvidia’s stock price to be one month into the future? They submitted this initial estimate privately, probably feeling smug about their “insider knowledge” and “market intuition” – you know, the same intuition that led to the 2008 financial crisis. The research dudes then randomly split the participants into two groups for a brief consultation period because random assignment is the only fair way to distribute stupidity:
Peer discussion (control group). These executives discussed their forecasts in small groups for a few minutes. They didn’t use any AI tools, just human conversation and whatever thoughts rattled around in their heads. This was supposed to mimic traditional decision-making like getting input from colleagues who are probably just as clueless as you are.
ChatGPT consultation (’treatment’ group). These executives could ask ChatGPT anything they wanted about Nvidia’s stock – analyze recent trends, give a one-month forecast, maybe even the meaning of life – but they were told not to talk to any peers, which created a scenario where an executive consults an AI advisor instead of colleagues.
Everyone made a revised forecast for Nvidia’s price one month ahead after this séance, and the researchers collected those responses like a teacher gathering homework.
The results that made them question everything
In short: “The AI led to more optimistic forecasts.”
The groups started with comparable baseline expectations. Before any discussion or AI consultation, both sets of executives had statistically indistinguishable forecasts for Nvidia. They were equals in their ignorance, yet utterly united in their overconfidence.
However, after the consultation period, the executives who used ChatGPT became more optimistic in their forecasts. On average, the ChatGPT group raised their one-month price estimates by about $5.11.
And on the other end of the spectrum, the execs who were left with only peer discussion made their forecasts more conservative.
The peer-discussion group lowered their price estimates by about $2.20 on average. In contrast to the AI group, they apparently understood that what goes up might come crashing down someday. They were also more likely than the ChatGPT group to stick with their original estimate – the kind of stubborn consistency that would make a mule proud – but when they did make a change, they were more likely to decrease their initial forecast, because misery loves company and so does financial caution. These patterns held true even when the researchers controlled for the most extreme forecasts in the dataset.
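Nothing magical about the arithmetic here, by the way. If you like seeing things as code, here’s a toy sketch of how you’d compute those average revisions per group – the numbers and column names below are invented for illustration, not the study’s actual data:

```python
# Toy sketch: average forecast revision per consultation group.
# All figures below are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":   ["chatgpt", "chatgpt", "peer", "peer"],
    "initial": [120.0, 135.0, 128.0, 140.0],  # first private estimate ($)
    "revised": [126.5, 139.0, 125.0, 138.5],  # estimate after consultation ($)
})

# Positive revision = more optimistic after consulting; negative = more cautious.
df["revision"] = df["revised"] - df["initial"]
print(df.groupby("group")["revision"].mean())
# In the study, this worked out to roughly +$5.11 for the ChatGPT group
# and about -$2.20 for the peer-discussion group.
```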
AI consultation made predictions worse.
After conducting the study, the researchers compared the predictions against actual Nvidia data. Both groups were too optimistic on average because – as it seems – reality doesn’t care about your PowerPoint presentations or your MBA from Wherever University. However, those who used ChatGPT made even worse predictions after their consultation, and to top it all off, they ended up like students who cheat and still fail the test.
Those who discussed with their peers made significantly better predictions than they’d made before their consultation, and they proved that sometimes two heads are better than one, even if both heads are filled with hot air.
AI caused overconfidence.
About a third of all participants offered pinpoint predictions in their initial estimates – meaning they offered figures with one or more decimals, which previous research has established as an indicator of overconfidence.
What surprised me, though, was that after consulting with peers or ChatGPT, the participants’ overconfidence changed systematically. It probably felt like watching a psychological experiment unfold in real time. Conversing with ChatGPT significantly increased participants’ tendency to offer pinpoint predictions, while participants in the peer-discussion group became significantly less likely to use pinpoints in their revised estimates. In other words, their overconfidence decreased faster than the Nvidia stock price crashed (because of Chinese GPUs).
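Spotting a pinpoint forecast is trivial to automate, by the way. A minimal sketch, using my own whole-dollar criterion based on the “one or more decimals” definition above (the researchers’ exact rule may differ):

```python
# Minimal sketch: flag "pinpoint" forecasts (figures with decimals),
# the overconfidence proxy described above. Example values are invented.
def is_pinpoint(estimate: float) -> bool:
    """A forecast counts as pinpoint if it isn't a whole-dollar figure."""
    return estimate != round(estimate)

forecasts = [130.0, 127.45, 150.0, 142.87]
pinpoint_share = sum(is_pinpoint(f) for f in forecasts) / len(forecasts)
print(f"{pinpoint_share:.0%} pinpoint forecasts")  # 50% in this made-up sample
```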
Why AI created overconfidence and optimism
What might explain these different outcomes? Why would consulting an AI tool lead to inflated estimates and overconfidence, while peer discussions provoke greater humility and conservatism?
The brains behind the study isolated five reasons that will make you question everything…
Extrapolation and “trend riding”
ChatGPT may have encouraged extrapolation bias (what happens when a model is asked to predict values outside the range of the data it was trained on). The AI’s knowledge is based on historical data, so it might have simply extended Nvidia’s recent upward trend into the future, like in maths where you draw a straight line out to infinity. And Nvidia’s stock had indeed been climbing steeply in the months leading up to the sessions.
Lacking up-to-the-minute context or any sense of an upcoming turning point, ChatGPT’s analysis likely assumed that “what has been going up will keep going up” – the same logic that brought us the dot-com bubble and the housing crisis. The AI’s guidance was based purely on past data patterns and may have skewed toward optimism by default (compare it to a salesman who only tells you about the good features).
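You can reproduce this failure mode in a dozen lines. A minimal sketch of pure trend-riding, with invented prices: fit a straight line through a rising series and the “forecast” can only point up.

```python
# Minimal sketch of extrapolation bias: fit a straight line through a
# rising price series and extend it. Prices are invented for illustration.
import numpy as np

months = np.arange(6)                           # months 0..5
prices = np.array([45, 52, 61, 78, 95, 118.0])  # a steadily climbing stock

slope, intercept = np.polyfit(months, prices, 1)  # least-squares trend line
forecast = slope * 6 + intercept                  # extrapolate to month 6

print(f"trend-line forecast: ${forecast:.2f}")
# The line has no concept of a peak or a reversal, so for any upward
# series the prediction is always more of the same: up.
```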
Authority bias and detail overload
Many executives in the ChatGPT condition reported being impressed by the detail and confident tone of the AI’s answer. In post-experiment discussions, some participants noted that ChatGPT provided a wealth of data and reasoning so thorough and self-assured that it made their own initial estimates seem inadequate by comparison.
This is authority bias in its purest form.
Because the AI spoke in a confident, analytical manner, users gave its suggestions more weight than their own judgment. Essentially, the medium’s authority – an advanced AI sounding like an expert report – boosted the credibility of an optimistic forecast.
Emotion (or lack thereof)
Humans have emotions and instincts that can act as a check on extreme forecasts, as if they had an internal horseshit detector. An executive looking at a meteoric stock chart might feel a tinge of wariness and an inner voice calling out “if it’s at a peak, it could crash” – you know, the same voice that tells you not to drink that suspicious milk.
The researchers suspect that emotional caution played a role in peer discussions. People might voice doubts or fears like “This stock feels bubbly; maybe we should rein it in a bit”, you know, the kind of wisdom that comes from actually having skin in the game.
ChatGPT, on the other hand, has no such emotion or intuition – it doesn’t care if you’re about to walk off a cliff. It doesn’t feel fear of heights the way a person might when seeing a price chart soaring toward the stratosphere. The AI offers analysis uncolored by anxiety, which can be useful, but also means it might not second-guess an optimistic trend like a human would.
Users who rely on the AI’s output alone don’t get the benefit of that cautious gut-check that evolution spent millions of years developing. In the experiment, the absence of a “fear of being wrong” on the AI side may have allowed bolder forecasts to go untempered.
Peer calibration and social dynamics
The act of discussing with peers introduced a different set of biases and behaviors, which generally pushed participants toward caution and a risk-averse consensus. Individuals hear diverse viewpoints in group sessions, and often discover that others’ expectations differ from their own.
This dynamic often led to moderating extreme views and finding middle ground, and in professional settings, no one wants to be the person with an absurdly bullish forecast because looking like an idiot in front of colleagues is career suicide.
There’s a “don’t be the sucker” mentality that runs deeper than corporate culture, because executives know that unbridled optimism can look naïve. This may have created a spiral of skepticism in the group setting, whereby each person, consciously or not, tried not to appear too rosy-eyed, which resulted in collectively lower forecasts.
This is almost the opposite of classic “groupthink”: instead of cheerleading each other into euphoria, these executives reined each other in, perhaps to avoid standing out like a sore thumb. The end result was a more conservative consensus, though.
Illusion of knowledge
Real research (not the AI kind) has shown that when people have access to vast bodies of information, like the internet, or tools for information processing, like AI and computers, they’re more likely to fall for the illusion of knowing it all – kind of like how reading one article about quantum physics makes you think you understand the universe.
A significant number of participants showed this illusion when they tapped into ChatGPT, a tool that sounds supremely intelligent and has access to a vast amount of knowledge.
How to use AI wisely in executive decisions
I can nearly hear you sigh with relief… “finally, the conclusion”.
This research holds some important lessons for leaders and organizations as they integrate AI tools into decision-making, assuming they actually want to make good decisions instead of just looking cool and tech-savvy.
Let me sum them up (and yes, it’s stating the obvious, I know):
In short:
“AI doesn’t think for you. That’s still your job”
Because, you know, somehow we forgot.
But for the rest of you who want more text . . .
Be aware of AI’s limitations and biases
AI tools like ChatGPT can sound convincing, but they often present information with more confidence than accuracy. When you use these systems for forecasting or decision-making, there’s a real risk of overestimating their reliability. To counter that, you should clearly define the data you want the model to work with, ask it to include confidence intervals in its outputs, and request an explanation of how or why its prediction might be wrong. These steps help expose hidden flaws or assumptions in the response.
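To make that concrete, here’s a sketch of what such a prompt could look like, wired to the official openai Python client. The model name is my assumption, not a recommendation – swap in whatever tool your team actually uses:

```python
# Sketch of a forecasting prompt that forces uncertainty into the answer.
# Assumes the official `openai` Python client and an OPENAI_API_KEY set in
# the environment; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Using ONLY the price history I paste below, forecast the stock price "
    "one month out. Give a point estimate AND an 80% confidence interval, "
    "then list three concrete reasons your forecast could be wrong.\n\n"
    "<price history goes here>"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption - use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The interval and the “reasons I might be wrong” section won’t make the model clairvoyant, but they surface the assumptions you’d otherwise swallow whole.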
Use human input to check and improve AI-generated insights
Even though AI is fast and efficient, it doesn’t replace human judgment. Team discussions are valuable because they provide context, challenge unrealistic assumptions, and expose weak reasoning that AI might not catch. The most effective approach is to use AI to generate options or perspectives, and then discuss those outputs with others.
Establish clear rules for how teams use AI
If you’re rolling out AI tools in your organization, it’s important to train people on how to use them responsibly. AI should not be treated as a final decision-maker. Encourage teams to review predictions critically, discuss possible risks, and evaluate worst-case scenarios before acting on AI output. Otherwise, teams will fall into the trap of accepting AI recommendations without question.
And last, at the end of the journey, the TL;DR, just to piss off @DreesMarc
AI can improve decision-making, but only if it’s paired with human judgment and critical thinking. Trusting it blindly is no smarter than following anyone or anything without asking questions.
Hahahaha, sorry dude.
Signing off with a grin,
Marco
I build AI by day and warn about it by night. I call it job security. Let’s keep smashing delusions with truth. We are the chaos. We are the firewall. We are Big Tech’s PR nightmare.
Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google and LinkedIn appreciate your likes by making my articles available to more readers.
To keep you doomscrolling 👇
- The AI kill switch. A PR stunt or a real solution? | LinkedIn
- ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
- AI’s dirty little secret. The human cost of ‘automated’ systems | LinkedIn
- Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
- One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
- The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
- People are building Tarpits to trap and trick AI scrapers | LinkedIn
- The first written warning about AI doom dates back to 1863 | LinkedIn
- How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
- The dark visitors lurking in your digital shadows | LinkedIn
- Understanding AI hallucinations | LinkedIn
- Sam’s glow-in-the-dark ambition | LinkedIn
- The $95 million apology for Siri’s secret recordings | LinkedIn
- Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
- Devin, the first “AI software engineer”, is useless | LinkedIn
- Self-replicating AI signals a dangerous new era | LinkedIn
- Bill says: only three jobs will survive | LinkedIn
- The AI forged in darkness | LinkedIn
