Stanford’s hilarious look at our AI-powered future

Hear ye, hear ye, he who has ears, let him hear, because Stanford University did a study on my favorite subject – which is, shockingly, AI – that works basically as a crystal ball for your job. And let me tell you, it’s less “Skynet is gonna murder us all” and more “your spreadsheet might finally listen to you”. They conducted this study between January and the end of May 2025, when they published it (yes, it’s that fresh!).

And lemme tell ya, this research ain’t about robots stealing your lunch money. It’s about finding that sweet spot where humans and AI decide to play nicely together and complement each other’s weaknesses…

But – as usual – I want to start with a low-kick in the crotch.

Because, apparently, 80% of U.S. workers say they already sprinkle a little AI fairy dust on at least 10% of their tasks. So, if you thought you were safe hiding under your desk, think again!


More rants after the commercial break:

  1. Comment, or share the article; that will really help spread the word 🙌
  2. Connect with me on LinkedIn 🙏
  3. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  4. Visit the TechTonic Shifts blog – full of slop, I know, but you’ll like it!

The Human Agency Scale

The big kahuna of this study is something called the Human Agency Scale. Now, before y’all roll your eyes, this isn’t your regular tech bro buzzword. It’s a five-level system, from H1 to H5, which really smart people made to measure just how much you, being this marvelous human, actually want to be involved in a task.

In my case that can be captured in a simple mathematical formula which goes something like “My Involvement = e^(−∞)” – which, for the non-math folks, works out to exactly zero.

But theirs is different.

Forget that old “automate everything with AI” mantra that the AI industry is vomiting over us all, because this scale recognizes that sometimes, you actually enjoy doing things (which the AI is bad at).

Shocking, I know.

Now here’s the model:

I know, it’s a difficult model, with too many blocks, colors and text. So first I’ll be mansplaining this to you, and then I’ll try to cast it in a simpler model.

Why not the other way around, I hear you think.

Well, if pain could write, I was its author.

Ok, here it comes:

This Stanford study set out to understand how our jobs will change as AI agents (not the plain LLM kind) become more common in and around the place we still like to call “work”.

They created a special way to talk about how much humans want to be involved in different tasks, and – in all their academic wisdom – they decided to call it the Human Agency Scale, or HAS.

They could have called it “the Level Of Humans Allowed Scorecard”, but that one was already taken by the Klarna guy, Sebastian Siemiatkowski after dumping 20% of his staff in favor of a farkin’ chatbot.

Anyway, this scale has five levels, from H1 to H5, and it helps us see that it’s not just about whether a task is fully automated or not. By the way, why H? What does it stand for? I think it stands for Hooman, but I’d like to hear your thoughts about it in the comments.

At the lower end of the scale, you got H1 and H2, where the AI takes the lead. These are tasks where AI can pretty much do everything by itself, like copying information into a computer or running reports. And face it, you’d rather outsource this kind of work altogether, so here the human involvement is very minimal. But as you go up the scale, toward H3, H4, and H5 (you do the math next time), the human’s role becomes much more important.

At H3, humans and AI work together as equals, both contributing to get the best results in a kumbaya-hava-nagila, yin-yangish kind of arrangement – like when you are creating storylines for a presentation, or have it do all the writing for you whilst you take all the credit.

Now, by H4, the human is primarily in charge, with AI helping out – or at least that is what the AI wants us to think – like coordinating financial planning. And at H5, the task fully relies on human involvement, with AI just enhancing human abilities, like participating in conferences to stay updated on trends, and lying on the couch eating chips and watching Netflix CAUSE YOU JUST BECAME FARKIN’ UNEMPLOYED BECAUSE OF THE AI, YES!
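For the code-inclined among you, the five levels above can be sketched as a tiny lookup table. This is my own paraphrase of the scale, not the study’s official wording, and the `human_led` helper is purely my invention:

```python
# A minimal sketch of the Human Agency Scale (HAS) from the Stanford study.
# Level descriptions are paraphrased from this article, not quoted from the paper.
HAS = {
    "H1": "AI handles the task entirely; human involvement is minimal",
    "H2": "AI leads; the human occasionally checks in",
    "H3": "Human and AI collaborate as equal partners",
    "H4": "Human leads; AI assists",
    "H5": "Task relies fully on the human; AI only augments",
}

def human_led(level: str) -> bool:
    """From H3 upward, the human has at least shared control of the task."""
    return int(level[1]) >= 3

print(human_led("H2"))  # False: AI takes the lead
print(human_led("H4"))  # True: human primarily in charge
```

Five dictionary entries. That’s the whole big kahuna, really.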

And now for the simple image:

Now, the cool thing with this study is that it adds a lot of nuance. For instance, it reveals that we don’t always want AI to take the wheel. Sometimes we prefer to backseat drive, or even drive ourselves with AI giving us helpful directions. And I found some early 2025 usage data from Anthropic that kinda backs this up – in 36% of occupations, some workers are already letting AI handle at least a quarter of their tasks.

Ha!

Apparently we aren’t dipping our toes in the AI pool. We are doing cannonballs!

The study also found that people often want to be more involved in tasks than what experts think AI could handle on its own. This means there’s a bit of a gap between what technology can do and what people prefer. This difference is actually a good thing, though, because it shows that AI developers can focus on making AI agents that fit what humans actually want, instead of just pushing for full automation everywhere.
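That gap between what workers want and what experts think AI can handle is easy to picture in code. The tasks and numbers below are entirely made up by me for illustration – the study reports this per task across its whole dataset:

```python
# Hypothetical data: per-task HAS levels (1–5), one number from workers
# (how much human involvement they want) and one from experts
# (how much the technology actually requires).
tasks = {
    "data entry":         {"worker": 2, "expert": 1},
    "report writing":     {"worker": 3, "expert": 2},
    "financial planning": {"worker": 4, "expert": 3},
}

# Positive gap = workers want more human involvement than experts deem necessary.
gaps = {name: t["worker"] - t["expert"] for name, t in tasks.items()}
print(gaps)
```

In my toy numbers every gap comes out positive, which is exactly the pattern the study describes: humans want in.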


Humans want way more fun than AI thinks we need

As it turns out, us hooman workers have some surprisingly diverse opinions on how much AI we want in our daily grind to earn a living.

The study found a “rich tapestry of HAS profiles”.

OMG. More academic speak.

They were actually trying to say that not everyone wants a robot handler. In fact, a lot of us prefer higher levels of human involvement than experts think are “technologically necessary”. It’s like the experts are saying, “AI can totally do your taxes perfectly” and we’re kinda like responding, “Yeah, but, um, I kinda like the thrill of figuring out deductions myself”.

Now, this is a delightful mismatch, but it is also a golden opportunity for AI developers to actually make tools that we want to use, and not just ones that are shoved down our throats.

Good news, though: for 46.1% of tasks, workers are actually positive about AI automation. So, for nearly half of your to-do list, you’re probably dreaming of an AI taking over while you sip your latte.

The big wet dream of everybody that was part of the study – people working in the AI industry, in academia, or just fumbling away behind their shitty desk in a cubicle all day – is that AI takes on the “repetitive, low-value tasks” and lets us do the fun stuff. Yes, no more mind-numbing data entry, because you’ll be freed up to focus on the truly important stuff, like, um, the uh, um, I don’t know what it is you can offer society.

But honestly people.

Go back to sleep, you naive souls.

Apparently you haven’t read my shit.

The one thing to me, that this study proved, is that y’all worker bees are all lulled to sleep and need to wake the fuck up!


When AI gets a nega-response, a.k.a. La Résistance.

Now, luckily, not everyone interviewed saw rainbows of harmony or had grand delusions of becoming a conjoined robotic twin of uber-efficiency.

Luckily around 28% of workers are still giving AI the side-eye.

These are the smart people, or the ones with a well developed intuition.

Now what are their top concerns:

  • “Is this thing even accurate?!” (45%). Yup. A totally valid question here, people. Nobody wants their AI-generated report to look like it was written by an AI, or by a drunk with delirium.
  • “Am I getting fired?!” (23%). The age-old fear. Don’t worry, they still say it’s about “augmentation”, not “replacement”. Yeah, sure. We’ll see how that plays out.
  • “Where’s the human touch, the creative spark, the ability to make questionable decisions after a long day?” (16.3%). Ha! They did decide to accept my response! Apparently, some of us still like feeling unique and irreplaceable. It’s a charming quality of us carbon-based lifeforms, you know – being naive – really.

Your skills. Up in value or down the drain

Looky here at this beautiful, totally unnecessary chart. It compares skills ranked by average wage (left side) and by required human agency (right side).

BAM! Here it is – right in your face!

  • Green lines. These are the unsung heroes! Skills that actually gain rank when you value human involvement over mere pay. It means these roles need more of your glorious human brainpower than their current paycheck might suggest. Undervalued, perhaps, buuuut quite essential.
  • Red lines. Uh oh. These are the well-paid skills that oddly require less human effort, often tied to those delightful, automation-friendly tasks like data processing. It’s a clear signal that if your job is basically “be a very fast calculator”, or staring at a screen all day, start learning to knit.
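If the chart’s green-versus-red logic still feels abstract, here it is as a few lines of code. The skills and rank numbers are invented by me; only the classification rule comes from the chart:

```python
# Invented example: each skill gets a rank by average wage and a rank by
# required human agency (1 = highest). A skill ranking higher on agency than
# on wage is a "green" line (undervalued human work); the reverse is "red".
skills = {
    "empathy":         {"wage_rank": 8, "agency_rank": 2},
    "data processing": {"wage_rank": 3, "agency_rank": 9},
    "persuasion":      {"wage_rank": 6, "agency_rank": 4},
}

def line_color(s: dict) -> str:
    delta = s["wage_rank"] - s["agency_rank"]  # positive: gains rank on agency
    return "green" if delta > 0 else "red" if delta < 0 else "flat"

for name, s in skills.items():
    print(name, line_color(s))
```

Run that and “data processing” comes out red. You saw it coming, didn’t you.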

The big takeaway from this very, very lengthy study is that we’re heading for a revaluation of skills.

Less “information-heavy” abilities, more “human-centered” ones.

So, brush up on your empathy, perfect your persuasive arguments, and maybe even take an improv class. It’s time for your soft skills to shine!

So what’s next, my slightly brilliant friend?

This Stanford study is a call to action. AI agents are going to fundamentally reshape what it means to work. Ok, they are far from ready for the real world, playing around with real money, but it does mean that we need to get serious about reskilling and retraining programs. Time to dust off those old textbooks, or perhaps learn how to charm an AI.

Of course, the study has a lot of caveats. It is based on existing data, so it might not predict the truly bonkers new jobs AI might create (or destroy). And sure, workers might be a tad biased (who wants to admit an AI can do their job better). But the researchers bravely put workers at the center, and let them vent their concerns.

Signing off,

Marco

I build AI by day and warn about it by night. I call it job security. Let’s keep smashing delusions with truth. We are the chaos. We are the firewall. We are Big Tech’s PR nightmare.


Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Google and LinkedIn appreciate your likes by making my articles available to more readers.

To keep you doomscrolling 👇

  1. The AI kill switch. A PR stunt or a real solution? | LinkedIn
  2. ‘Doomsday clock’: it is 89 seconds to midnight | LinkedIn
  3. AIs dirty little secret. The human cost of ‘automated’ systems | LinkedIn
  4. Open-Source AI. How ‘open’ became a four-letter word | LinkedIn
  5. One project Stargate please. That’ll be $500 Billion, sir. Would you like a bag with that? | LinkedIn
  6. The Paris AI Action summit. 500 billion just for “ethical AI” | LinkedIn
  7. People are building Tarpits to trap and trick AI scrapers | LinkedIn
  8. The first written warning about AI doom dates back to 1863 | LinkedIn
  9. How I quit chasing every AI trend (and finally got my sh** together) | LinkedIn
  10. The dark visitors lurking in your digital shadows | LinkedIn
  11. Understanding AI hallucinations | LinkedIn
  12. Sam’s glow-in-the-dark ambition | LinkedIn
  13. The $95 million apology for Siri’s secret recordings | LinkedIn
  14. Prediction: OpenAI will go public, and here comes the greedy shitshow | LinkedIn
  15. Devin the first “AI software engineer” is useless. | LinkedIn
  16. Self-replicating AI signals a dangerous new era | LinkedIn
  17. Bill says: only three jobs will survive | LinkedIn
  18. The AI forged in darkness | LinkedIn

Become an AI Expert!

Sign up to receive insider articles in your inbox, every week.

✔️ We scour 75+ sources daily

✔️ Read by CEOs, scientists, business owners, and more

✔️ Join thousands of subscribers

✔️ No clickbait - 100% free

We don’t spam! Read our privacy policy for more info.
