How corporations fell in love with AI, woke up with ashes, and called it ‘innovation’

Hell yeah, folks! I always wanted to write an intimate autopsy of artificial intelligence in the enterprise, based on someone else’s failures (obviously, not my own).

So here we are.

You and I.

Snug, as always.

Me writing, you reading, about flaming hype-cycles, and – at the end – machine generated regrets.

I think it is the ultimate revenge and the absolute cure for corporate hubris.

But will they ever learn?

I ain’t got no clue, but without further ado (rhymes), let’s stretch this beast of executive delusion out until it snaps under the weight of its built-in sarcasm.

And, for once, please read until the very end, if you really want to learn something useful.

Here we go!


AI was thought to be revolutionary

It was said it would change the world.

Real Che Guevara style, including mustache, military shirt, and beret showing a big red star.

Well, it changed something alright.

It changed the PowerPoint templates.

It promised magic.

Delivered migraines.

Somewhere along the way, “AI transformation” became a synonym for “budget hemorrhage” with extra syllables. And yet (shockingly!) the board still claps when a pilot project generates a summary of last week’s meeting notes in 0.2 seconds.

Yes, Sam, Skum, Zucky, and Bozo the Benevolent.

Very disruptive.

Now please explain why we’re $18 million over budget.

The thing is that the deeper you look, the funnier it gets. Well, that is, if you’re into tragic comedy.

So here’s a stat I know you’ll be gloating over and quoting: forty-two percent of companies have abandoned most of their AI initiatives.

That’s not a mild hiccup.

That’s a full-scale corporate exorcism.

And it gets even worse.

Most “projects” never make it out of proof-of-concept purgatory.

They linger, half-built and unloved, while some VP insists that they are “very close to going live”.

Lemme tell ya.

I know for a fact that they’re not.

They’re dead.

D.O.A.

They’re weekend-at-Bernie’s-ing their way through QBRs*.

And what do they blame? Not the C-suite’s ADHD strategy. Not the fetish for shiny new tools that go bling. Nah, it’s always the usual suspects: cost, privacy, security.

Yada yada yada.

All the things they were warned about, had they spent a little time reading my shit. But heck, they ignored it in their rush to be “first to market”.

As if being first at failing counts for something.

* Go watch the cult classic film Weekend at Bernie’s. Two guys cart around the dead body of their boss. They pretend he’s still alive to avoid getting in trouble. They put sunglasses on him. They move his arms like he’s dancing. And more. It’s peak ’80s. Like when you were in your 30s, @DreesMarc.


If you like my rants and want to support me:

  1. Comment, or share the article; that will really help spread the word 🙌
  2. Connect with me on LinkedIn 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. TechTonic Shifts also has a blog, full of rubbish you will like!

It all started with a prompt and a lie

I actually wanted to say ChatGPT, but I’ve yanked this lil’ bugger’s tail so many times, the pun is getting stale.

Back then ChatGPT was a little chatbot with a big ego.

Businesses fell head over heels.

It was love at first prompt.

Here was a bot that could write emails, summarize text, and hallucinate legal advice in the voice of Oscar Wilde. Executives stared into the interface and saw the future.

Buuuut what they didn’t see was the absolute chaos it would unleash when someone asked, “Can we plug this into our CRM?”

The initial euphoria was like a sugar rush, laced with caffeine, fairy dust and a line of white. It was fast, loud, and followed by a crushing crash. And suddenly, everyone needed an AI strategy. Ding: a task force. Dong: a roadmap. Ding: Consultants crawled out of their dens to sell “prompt engineering workshops” (you remember those?) and $400/hour chatbots named Clarissa. Dong: IT teams got dragged into “innovation steering committees” that couldn’t steer a tricycle.

And then reality slapped them across the face.

Oopsy daisy. Um, guys? This thing isn’t plug-and-play.

Um, it’s um, kinda um, plug-and-pray.

But even now. You know, two years deep into failure after failure, most companies are still snorting the digital Kraken. Still convinced the next model, the next upgrade, the next prompt trick will finally make it all work.

Sigh.

Woe is me.

Cause I’m the one cleaning the mess.

Better call myself the AI janitor.

It’s not hope. It’s delusion. Duct-taped to ambition.

Some random image found on the internet

The theater of innovation is out of tickets

Pilots. Proof-of-concepts. Sandbox projects. The graveyard is full of them. Nearly 50% of AI “projects” never make it to production, and those that do often wish they hadn’t.

These aren’t experiments.

They are rituals.

Mere performances.

They are designed to give the execs the illusion of innovation while producing absolutely nothing. “We’re just testing out use cases”, they say.

Yeah right.

Like how I test gravity by throwing spaghetti at the wall.

Even the ones that “go live” end up doing menial “meh” tasks.

Things like auto-tagging documents.

Summarization.

Generating templated reports.

Rewriting FAQs for customer support bots that were already useless.

Nothing transformational.

Nothing strategic.

Just tech theater, a shallow imitation of progress performed for eager stakeholders who don’t know the difference between a confusion matrix and tic tac toe.

And let’s not forget the fact that AI isn’t failing because the models are bad.

Nah, surely not.

Here comes the secret:

It’s failing because the people are.

The problem isn’t the tech.

It is the lack of purpose.

Companies don’t know what they are solving. They throw AI at vague problems, like “optimize operations”, “enhance insights”, “disrupt the workflow”, like it’s your daughter’s glitter. But glitter doesn’t fix anything. It just makes the mess harder to clean up.


The data is always the villain

Ah yes, the data. Let’s talk about the data. Or, as I like to call it, “the festering swamp of rot where AI dreams go to die”.

Eighty-five percent of AI projects die because of bad data (just Google it yourself – don’t AI-search it, please, or haven’t you learned from my previous rants?). No, not the android from Star Trek (if that even means anything to you, youngsters and anti-tech-culturalists).

Bad data.

Not “less than ideal data”.

Not “slightly noisy data”.

I mean rotten.

Teeth-breaking, incomplete, irrelevant, inconsistent data stored in seventeen formats across twelve systems built in six different decades.

And this isn’t just a minor inconvenience.

It is existential.

Garbage data makes garbage models. It doesn’t matter how fancy your transformer is: if your training data says that all customers named Mohamed are fraud risks, guess what your AI will do? It’ll discriminate, confidently. And you’ll sit there wondering why your hiring algorithm only selects white men named Marc.
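For the skeptics: here’s how little it takes. A deliberately dumb sketch in plain Python, with hypothetical records. The “model” just memorizes per-name fraud rates, which is, statistically, roughly what a real classifier does when you hand it a leaky, biased feature:

```python
from collections import defaultdict

# Hypothetical, biased "training data": some upstream process labeled
# every record for one name as fraud. Garbage in...
training = [
    ("Mohamed", True), ("Mohamed", True), ("Mohamed", True),
    ("Marc", False), ("Marc", False), ("Marc", False),
]

# A "model" that just learns per-name fraud rates.
counts = defaultdict(lambda: [0, 0])  # name -> [fraud_count, total]
for name, is_fraud in training:
    counts[name][0] += int(is_fraud)
    counts[name][1] += 1

def predict_fraud(name: str) -> bool:
    fraud, total = counts.get(name, [0, 0])
    return total > 0 and fraud / total > 0.5

# ...garbage out: the "model" discriminates by name, confidently.
print(predict_fraud("Mohamed"))  # True
print(predict_fraud("Marc"))     # False
```

No transformer required. The bias was in the labels before the first GPU ever spun up.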

Read: One man’s fight against Dutch AI corruption | LinkedIn

Poor data is the grimy hand dragging AI back into the mud. And nobody wants to fix it. Cleaning data is boring. I know. I always leave that to others. Because it’s slow. It’s also very expensive. So instead, companies build flashy interfaces on top of statistical sewage and wonder why the model hallucinates when asked to do basic math.
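And you don’t need a seven-figure platform to find out how deep the sewage runs. A toy audit sketch (the records and checks are made up, swap in your own schema) counts the rot before anyone is allowed to say the word “model”:

```python
# Hypothetical CRM-ish rows, featuring the classics: nulls,
# duplicate ids, and two date formats in one column.
records = [
    {"id": 1, "signup": "2023-01-05", "email": "a@x.com"},
    {"id": 2, "signup": "05/01/2023", "email": None},       # format drift, null
    {"id": 2, "signup": "2023-01-05", "email": "a@x.com"},  # duplicate id
    {"id": 3, "signup": "", "email": "b@x.com"},            # empty field
]

def audit(rows):
    issues = {"nulls": 0, "dupes": 0, "bad_dates": 0}
    seen = set()
    for r in rows:
        if any(v in (None, "") for v in r.values()):
            issues["nulls"] += 1
        if r["id"] in seen:
            issues["dupes"] += 1
        seen.add(r["id"])
        s = r["signup"]
        # expect ISO yyyy-mm-dd; anything else counts as format drift
        if not (len(s) == 10 and s[4] == "-" and s[7] == "-"):
            issues["bad_dates"] += 1
    return issues

print(audit(records))  # {'nulls': 2, 'dupes': 1, 'bad_dates': 2}
```

Twenty lines, and it already knows more about your data than the steering committee does.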

What did you expect, genius?

You trained it on your SharePoint directory and a PDF from 2003.

Duhhh.


Meanwhile, CIOs are in hell

But it’s a polite, corporate kind of purgatory, full of whiteboards, visual KPIs, and overly enthusiastic vendor demos. They are supposed to be delivering value. Yet they are stuck babysitting “initiatives” that are 80% aspiration, 15% technical debt, and 5% pure lies.

And that 5%?

That’s of course what’s gonna end up in the shareholder report.

Poor CIOs

Wrangled by the rest of the C-suite, shareholders, tail-wagging vendors, and successful captains-of-industry summits.

The emotional arc of a CIO in 2025: excitement, confusion, fear, resignation.

This is not a good time to be a CIO, I can tell you that.

  • They launched another AI chatbot without a use case? Woe unto them.
  • Still training models on spreadsheet rot from 2006? Woe unto them.
  • Cut data engineering to fund another “AI Center of Excellence”? Woe unto them thrice.

They have spent the last two years trying to make something, like anything, work.

They’ve run out of synonyms for “pilot”.

They are being asked for results from tools that can’t explain how they came to their conclusions. Meanwhile, the CEO wants a dashboard that shows “how AI is improving synergy”.

Da whatnow?

Yeah, that’s the kind of bullshit they get from the uninitiated.

You might as well ask for a unicorn that does taxes.

Under the hood, most of these organizations are a bloody mess. Siloed data. Disconnected systems. Half-baked cloud migrations. And now they have slapped generative AI on top like frosting on a landfill.

Truth be told.

It looks impressive from a distance.

But smells like failure from up close.

PS.

If you have a CIO in your inner circle and you want to support them, send them this article, and fax them their eulogy while you’re at it, because they won’t be around anymore by the time they’ve read this piece.

Another random image just to touch up this otherwise boring piece of cr*p

No one has the talent and everyone is lying about it

Talent. Let’s just say it: you don’t have it. I don’t have it. We might have some smart folks running around in our teams, sure. But they are underpaid, overworked, and sick of explaining what a vector embedding is to a CMO who thinks it’s a yoga position.

True-AI-talent-is-rare.

Keeping them is harder.

You hire one PhD and chain them to Jira tickets until they quit and move to a start-up that sells cat-meme NFTs powered by LLMs.

The few who stay. . .

They are trapped in Kafkaesque org structures where approvals take months and innovation is a four-letter word. These are people trained to work with data at scale. To fine-tune models. To evaluate drift and latency. But in your org, they are stuck building chatbot scripts for HR. No wonder half your data scientists spend their time looking at job postings on their second monitor.

I am a lucky guy.

I got a chance to work with the best and the brightest.

Young peeps, straight outa uni. Fresh degrees, ambitious as f*ck, know how to clean shit, know how to handle the coding hammer, and sure as hell know how to wield the broad sword of GenAI. Here’s their link. Don’t steal ‘m from me please.

And don’t even get me started on upskilling, or its fancier version, “AI literacy”.

You think giving a six-hour Coursera course to your sales manager makes them “AI fluent”?

Hate to break it to ya.

Cause it makes them dangerous.

Now they are sending prompts to the bot asking it to write cold emails in pirate speak.

Arrr, ye scurvy dog o’ a customer! Be ye wantin’ to buy a heaping pile o’ our finest corporate bilge? This week, it be on sale, ye won’t believe the plunderin’ we call a bargain.

“Disruptive”, they say.

I say it’s a cry for help.


Infrastructure is just a fancy word for screwed

Let’s talk infrastructure.

No, let’s scream about it.

You cannot run AI workloads on a Raspberry Pi and hope for the best. Buuuuut here we are. Trying to scale large language models on machines that barely support Microsoft Teams.

No GPU support.

No data lake.

Just duct tape and dreams of grandeur.

Ok, think of it this way. . .

Building real AI infrastructure is like building a launchpad.

You need compute, storage, security, observability, deployment pipelines, and a team that knows their shit, and how to use all of it. But what most companies have is a PowerPoint slide that says “cloud-enabled”.

Um.

Uploading a CSV to OneDrive is not cloud transformation.

Worse, half of them are trying to integrate AI with legacy systems that were old when MySpace was new. And guess what, AI doesn’t play nice with systems that don’t have APIs or proper logging. These Frankenstein architectures are held together by prayers and the weekend hacks of even older sysadmins.

But sure, let’s throw machine learning into that and see what happens.

But you already know the outcome, yet you dare only whisper it to your wife (unless you work at Meta).

What happens is failure.


Ethics are a PowerPoint slide you make after the lawsuit

And then, when it all goes sideways, they call in the ethics team. Or more accurately, they create one.

Overnight.

Usually staffed by people who know nothing about AI but have strong opinions about “responsible innovation” and “legal” or “privacy” matters.

Ethical governance is always bolted on after the data leak.

Bias? Totally avoidable… if you had thought about it beforehand.

Privacy? Maybe you shouldn’t have dumped your entire CRM into OpenAI’s API or Microsoft’s Copilot without a DPA.
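If the data absolutely must leave the building, at least scrub the obvious stuff first. A minimal redaction sketch with illustrative regexes (this is nowhere near a compliance program, and no substitute for an actual DPA or a lawyer):

```python
import re

# Illustrative patterns only -- real PII detection needs far more
# than two regexes. But even this beats shipping raw CRM exports.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d \-()]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious emails and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

crm_note = "Call Jan at +31 6 1234 5678 or mail jan@example.com about renewal."
print(redact(crm_note))
# Call Jan at [PHONE] or mail [EMAIL] about renewal.
```

Run that before the API call, not in the post-mortem.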

Accountability and AI governance? Who even owns this thing anymore? IT says it is Marketing’s fault. Marketing says the model is autonomous or, worse, a black box. The model says it’s sorry, but it has no memory of that interaction (like ex-Dutch prime minister turned NATO schmuck Mark Rutte).

And let’s not pretend this is new.

The warnings were there.

Ethical AI has been a known thing for years.

But most companies just ignored it.

Why, you think? Because I know you; you are conscientious. In favor of Ethical AI – yes, with a capital “E”.

I tell ya why.

Because they wanted speed.

Speed to demo.

Speed to pilot.

Speed to headline.

And now they’re trying to outrun the consequences with PowerPoint decks and policy docs no one reads.

Apart from me.

And that’s because I write them.


ROI is the corpse you’re still parading through town

ROI is the ghost that haunts every AI project. You can’t touch it. You can’t really measure it, even when you try.

But it’s always in the room.

Most orgs have no clue how to calculate value from AI. They talk about “efficiency gains” and “streamlined workflows” but when you dig deeper, it’s all anecdotal.

Karen saved 30 minutes on her report.

Wow!

Bloody fantastic.

That totally justifies the $1.6 million in cloud spend.
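If you insist on doing the Karen math, at least write it down. A back-of-envelope sketch; every number here is made up, so swap in your own before it ends up in a shareholder report:

```python
# Hypothetical figures -- replace with your own.
minutes_saved_per_week = 30     # what Karen actually saves
loaded_cost_per_hour = 120      # fully loaded employee cost, USD
users = 200                     # people who genuinely use the tool
weeks_per_year = 46

annual_value = (minutes_saved_per_week / 60) * loaded_cost_per_hour \
    * users * weeks_per_year
annual_spend = 1_600_000        # cloud, licenses, "Center of Excellence"

print(f"value: ${annual_value:,.0f}")  # value: $552,000
print(f"spend: ${annual_spend:,.0f}")  # spend: $1,600,000
print(f"ratio: {annual_value / annual_spend:.2f}")
```

Even with two hundred Karens, you’re recouping barely a third of the spend, and that assumes the saved half hour goes into anything billable at all.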

This is the tragedy of it all.

You can’t measure what you never understood. But that won’t stop you from writing a Q1 update about how “AI enabled business acceleration through cross-functional synergy”. Whatever that means.

We built these incredibly powerful tools, and then we use them to make quarterly reports slightly more readable. Nobody’s transforming anything.

They are tweaking.

Automating busywork.

Creating dashboards nobody checks.

And the tech that could actually solve hard problems sits unused because it requires... gasp... effort.

You want ROI?

Start by solving an actual business problem.

Not a vanity metric.

Not a “digital twin” of your boardroom. A real, painful, expensive problem. And no, summarizing doesn’t count.

In the end, AI in the enterprise isn’t failing because it’s bad tech. It is failing because it’s being used by people who don’t know what it is they want, what it is all about, what is required to build it, or why they even started it in the first place.

They bought into the hype.

Built castles in the cloud. But they didn’t transform a GD thing.

They hallucinated, just like their lil’ AI buddy.

And now they are staring at their Slackbot, wondering why it can’t save their profit margin.


Final warning to y’all

If you ever build an AI tool that sends inspirational Slack messages to your team, I will personally train a model to write aggressive haikus about your poor life choices and beam them into your dreams.

🖤

Signing off, to all the corporate sheeple out there.

Marco.

Oh? Did I ask you to take the survey?


I build AI by day and warn about it by night. I call it job security. Let’s keep smashing delusions with truth. We are the chaos. We are the firewall. We are Big Tech’s worst PR nightmare. Stick around if you like what I write. And if not, don’t worry, the AI already knows you were here.


To keep you doomscrolling 👇

  1. The AI kill switch. A PR stunt or a real solution? | LinkedIn
  2. ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
  3. AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
  4. Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
  5. One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
  6. The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
  7. People are building Tarpits to trap and trick AI scrapers | LinkedIn
  8. The first written warning about AI doom dates back to 1863 | LinkedIn
  9. How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
  10. The dark visitors lurking in your digital shadows | LinkedIn
  11. Understanding AI hallucinations | LinkedIn
  12. Sam’s glow-in-the-dark ambition | LinkedIn
  13. The $95 million apology for Siri’s secret recordings | LinkedIn
  14. Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
  15. Devin the first “AI software engineer” is useless. | LinkedIn
  16. Self-replicating AI signals a dangerous new era | LinkedIn
  17. Bill says: only three jobs will survive | LinkedIn
  18. The AI forged in darkness | LinkedIn
