AI search. The biggest con since snake oil, now with a subscription fee

Have you ever paid someone to confidently lie to your face?

No?

Then congratulations, because you haven’t been suckered into paying $20, $40, or even $200 a month for an AI search engine that hallucinates more than a 1960s acid trip.

A new study just confirmed what every sane person already knew.

AI search engines suck at accuracy.

Sixty percent of the time, they’re wrong every time. And if you’re using Grok-3? Oh boy. You’d get more reliable results from flipping a coin while blindfolded.

A team of researchers at the Tow Center for Digital Journalism decided to put eight AI search engines through their paces. They picked 200 real news articles, ran AI searches on them, and checked if the responses were even remotely accurate.

The results?

Absolute disaster.

AI search engines either lied outright, misattributed sources, or just made up information altogether. And they did all of this with the smug confidence of a Silicon Valley schmuck selling NFTs in 2021.

Now let’s talk about the biggest offenders.

ChatGPT Search?

57% completely wrong.

Copilot?

Couldn’t even be bothered to answer half the time, and when it did, it was wrong 70% of the time.

And then there’s Grok-3, which managed an astonishing 94% inaccuracy rate.

That’s right.

Ninety-four.

As in, “why even bother?”

You’d get more reliable answers from a fortune cookie.

Or your best friend at keg fest.

And yet, these companies have the audacity to charge people for this garbage.

Perplexity Pro and Grok-3 Search, both paid services, actually performed worse than their free versions. It’s almost like they’re running a scam.

But don’t worry, some people are still drinking the Kool-AI-d.

TechRadar’s Lance Ulanoff, for example, said ChatGPT Search was “fast, aware, and accurate”.

Fast?

Sure.

Aware?

In the same way a goldfish is aware.

Accurate?

Only if you define “accuracy” as “sounding confident while being totally wrong.”

Let’s be real now for a change.

AI search engines aren’t search engines at all. They’re bullshit generators with big gonads. They don’t retrieve knowledge. They fabricate it. They don’t “analyze”. They guess. They don’t “reason.” They autocomplete.

And yet, people are being duped into believing that this is the future of search. The only thing they’re searching for is their missing common sense.

If you like my rants and want to support me:

  1. Comment, or share the article; that will really help spread the word 🙌
  2. Connect with me on Linkedin 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. TechTonic Shifts also has a blog, full of rubbish you will like!

AI found a new way to waste your time

Now, let’s shift gears to the latest miracle cure in the AI snake oil industry: Deep Research.

Yes, AI is no longer just replacing the interns you bring on board to do the grunt work for you. Now they even have the audacity to claim it’s gonna replace PhDs!

At least, that’s what they’re claiming.

But in reality, it’s just AI doing Google searches, regurgitating a long list of results, and vomiting them onto your screen in a slightly more sophisticated way.

Remember when they said Deep Learning would change the world?

It did.

Uh-huh.

By making ad targeting slightly better and YouTube recommendations uncomfortably accurate. Then they said AI could “reason”.

Ding dong.

Newsflash!

It still can’t count the letters in “strawberry” without tripping over itself.
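For perspective, the task these models stumble over is a one-liner in ordinary code. A trivial Python sketch (my own illustration, not from the study):

```python
# Counting letters is deterministic and exact for code,
# yet famously shaky for chat models, which see words as
# tokenized chunks rather than individual characters.
word = "strawberry"
r_count = word.count("r")
print(f"'{word}' contains {r_count} r's")  # → 3, every single time
```

No probabilities, no hallucinations, no apologies. Just an answer.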

And now, we’ve reached the next stage of AI nonsense: “Deep Research,” which is supposed to “analyze thousands of sources and generate insights like a hooman expert!”

Right.

And I’m supposed to believe that because…?

What AI actually does is freaking plagiarism.

It scrapes a bunch of existing content, mashes it together, and presents it as if it discovered something new. It’s the Wikipedia copy-paste special, now in chatbot form.

And yet, we have LinkedIn AI Fanbois, the fancier kind of Fanboy, screaming that “McKinsey is dead!” because of this.

Buuuut, I hate to break it to you.

McKinsey et al. are going to outlive all of us. Corporate execs don’t pay consultants for research. They pay them for PowerPoints that justify their pre-existing bad decisions!

And let’s talk about the ghost in the shell.

AI still hallucinates.

Which means that if you trust Deep Research for something important, like, say, uhm… medical insights or legal advice? You are essentially gambling with your career – or your life. But sure, let’s trust AI with deep analysis.

What could go wrong here?


Three lil’ AI piggies and the big bad wolf of hype

The AI hype train keeps rolling, and the tech world keeps falling for it like a kiddie chasing bubbles. Google, OpenAI, and Perplexity are all pushing their own versions of “Deep Research” like it’s the second coming of Alan Turing. DeepSeek Search (remember them?) has also joined the circus. These companies are practically begging us to believe that AI is now capable of “intellectual reasoning” and “academic-level research”.

Reality check.

It’s not.

It is still the same old pattern-matching nonsense, just with a fresh coat of paint and a fancier name.

AI doesn’t “think”.

Nor does it “analyze”.

It freaking parrots.

And it’s not even a particularly smart parrot. It’s the kind that confidently tells you the Eiffel Tower is in Tokyo, apologizes when you correct it, and then makes the same mistake the second time.

But hey, the marketing teams are already preparing their next set of buzzwords to keep the hype going. Soon, we’ll be hearing about “Chain-of-Draft”, “AI Intuition” (just autocomplete with a fancier name), “Machine Cognition” (just Chain-of-Thought with extra steps), and eventually “Conscious AI Agents” (just ChatGPT with a deeper voice and more hallucinations).

The cycle will never end, because the stakes are too high.

Between 2020 and 2024, around $400 billion was invested in AI.

This means we will be hopping from hype to hype.

A never-ending hype cycle.

Generative AI isn’t delivering?

Hip! Deep Research.

Deep Research turns out to be a fluke?

Hop! Agentic AI.

And so on.

Here’s some deep research around the investments.


A final kick in the plums: Exa Websets

Before I wrap this thing up, let’s take a moment to talk about another scam that I personally fell victim to: Exa Websets. Yes, the same company that scammed me out of $200 after asking me to beta-test the darn thang, then ghosted me like an AI chatbot that suddenly forgets its training data.

Why bring it up here?

Because it’s the perfect metaphor for AI itself.

AI search and Deep Research are just like Exa Websets. They promise intelligence but deliver nonsense. They sound sophisticated, but they’re just recycling the same garbage. And worst of all, they charge you for the privilege of getting scammed. Whether it’s a useless AI search engine, a Deep Research tool that fabricates sources, or a web service that takes your money and disappears, the game is the same.

And people keep falling for it.

. . . And digging deep into their pockets to cough up money.

So here’s my advice.

Stop giving these AI con artists your money.

Stop believing the hype.

And most importantly, stop thinking that just because AI says something confidently, that means it’s true. AI, at this moment in time, isn’t the future of research. It’s just a fancy new way to spread misinformation faster.

You think AI is replacing human intelligence?

Think again.

It’s just replacing your common sense.

Signing off from the shallow research I did for this post.

Marco.

Oh? Did I ask you to take the survey?


I build AI by day and warn about it by night. I call it job security. Stick around if you like what I write. If not, don’t worry, the AI already knows you were here.


To keep you doomscrolling 👇

  1. The AI kill switch. A PR stunt or a real solution? | LinkedIn
  2. ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
  3. AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
  4. Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
  5. One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
  6. The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
  7. People are building Tarpits to trap and trick AI scrapers | LinkedIn
  8. The first written warning about AI doom dates back to 1863 | LinkedIn
  9. How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
  10. The dark visitors lurking in your digital shadows | LinkedIn
  11. Understanding AI hallucinations | LinkedIn
  12. Sam’s glow-in-the-dark ambition | LinkedIn
  13. The $95 million apology for Siri’s secret recordings | LinkedIn
  14. Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
  15. Devin the first “AI software engineer” is useless. | LinkedIn
  16. Self-replicating AI signals a dangerous new era | LinkedIn
  17. Bill says: only three jobs will survive | LinkedIn
  18. The AI forged in darkness | LinkedIn

Become an AI Expert!

Sign up to receive insider articles in your inbox, every week.

✔️ We scour 75+ sources daily

✔️ Read by CEOs, Scientists, Business Owners, and more

✔️ Join thousands of subscribers

✔️ No clickbait - 100% free

We don’t spam! Read our privacy policy for more info.
