I’m about to say something deeply inconvenient, coming from someone who builds AI-first companies for a living, rants about them, but keeps building them anyway.
Here it is:
“AI is going to make a lot of companies less competitive”.
Right now, yes, AI still gives you a competitive advantage you can taste. It strengthens customer relationships without replacing them, when you use it to listen better, respond faster, and personalize messages without turning your customers into KPI livestock. It also makes your employees more effective, and I can’t even pretend I’m immune to that. My own output went up roughly 2 to 3 times, mainly in research and communication. And lately I focus more on automating processes, especially in supporting services, where the “paperwork economy” finally gets what it deserves. Together that means revenue lift, higher productivity, and serious efficiency gains in 2026, and the business world will ride that wave into the second half of this decade.
So far this looks like every other automation wave. We got servers in the sixties and seventies. You’ve probably seen pictures of those huge room-sized monoliths. Then client-server computing entered our world, with its thick desktops and even thicker IT departments. Back offices got mechanized step by step. And then the internet showed up and retail got hit first, then everybody learned to exchange information with customers at scale, then marketing, sales, and operations moved online and squeezed even more efficiency out of everything.
That was our world up to 2023.
Now we live in the age of intelligence, where software can reason through messy work, do research, plan things, pick routes around obstacles, and ultimately land on workable decisions. This will lead to a whole new level of automation, which I’ve talked about extensively in prior blogs.
Now let’s run a thought experiment, Einstein style.
We teleport to 2030.
The workforce is fully augmented or replaced, whichever comes first, because the back office is heavily automated through AI. Many routine decisions are machine-assisted, workflows are executed by agents, and the remaining chatbots run under some human supervision. Even physical steps start to get absorbed where robotics makes sense.
So where do we end up? In a highly efficient AI native company.
AI native does not mean “we bought Copilot licenses”, but it means that the operating model is designed around intelligence as infrastructure. Work is decomposed into machine suitable tasks, human oversight points, and feedback loops, and evaluation and audit logging are part of the workflow, not an afterthought. Teams treat models and automation the way they treat core IT, a foundational capability that needs governance, resilience, and measurable quality.
AI first is different. AI first is adoption behaviour. You add assistants to existing workflows, bolt tools on top of old processes, and get immediate productivity gains. It is often chaotic, because the organisation is still built for human workflows, and AI gets shoved into the cracks.
AI native is redesign.
By 2030, most firms in competitive industries will have moved toward some version of AI native, because the efficiency gains are too large to ignore.
And that is where the uncomfortable question of this blog shows up.
This story is not about humanity, universal basic income, or how we pay for the future. Those are bigger problems and they deserve their own nightmares. This story is smaller.
It’s about competitive advantage.
Because once the baseline becomes common, differentiation collapses.
If everyone has access to the same frontier models, trained on largely the same public corpora, supplied by the same vendors, delivered through the same assistant interfaces, then everyone gets a similar intelligence baseline. Your competitor buys the same brain, they get the same research speed, planning patterns, basically the same “strategic frameworks” that all suspiciously resemble each other.
Even creativity begins to converge.
Generative models are great at producing plausible variations. They are less reliable at producing ruptures. I’m talking about the kind of ideas that break a mental model instead of decorating it. The status quo that gets disrupted by things like relativity (vs Newtonian gravity), quantum mechanics, nuclear fission, the contraceptive pill, the computer, the internet, the iPhone, e-commerce, and generative AI.
Is this something you can infer from existing text corpora by simply sliding the temperature a tad towards red?
I tried giving it a kick in the butt in my paper called The Wanderer’s Algorithm, where I looked at creativity, neurodivergence, and ADHD in particular, and came up with a model and a lot of math. You can read the blog here and download the paper. Some extra traffic is always welcome: Attention isn’t all you need: The wanderer’s algorithm | LinkedIn
The thing with our current (generative) AI systems, and even the new kid on the block, World Models, is that they all pull people toward the centre, because the centre is where the training data is thickest and the probability mass is safest. If you accept the first good looking output, you drift toward the average with frightening efficiency.
So yes, AI gives you an advantage today because adoption is uneven, but you can bet your sweet dollar that AI will make you average once it becomes ubiquitous.
That is the core risk.
And the question I ask in this here piece is quite mundane but painfully practical: “How do you stay different when everyone has the same intelligence?”
How do you avoid becoming a perfectly optimized commodity with a logo, and how do you keep producing ideas, decisions, and strategies that do not converge into the same shade of polished beige?

More rants after the messages:
- Connect with me on Linkedin 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
- Please comment, like or clap the article. Whatever you fancy.
2026 is the year of AI automation breakthroughs
In 2026, we will see lots of companies going through AI-based process automation. In these programs, three technologies are fusing into one machine. We already have agentic AI, the systems that can plan and act across tools; it may not be the most stable approach we have, but it will mature in the coming years. There’s also RPA, the industrial era of automation with its rules, screens, and bot farms. Yes, RPA isn’t the sharpest pencil in the box, but it gets interesting when you start mixing RPA with UI-based automation driven by an algorithm. I’m talking about the things you see in ChatGPT Atlas, where an AI uses interfaces the way a human would: clicking, typing, navigating, extracting. The browser is quickly becoming a universal adapter for systems nobody wants to integrate properly.
In practice you’ll see multiple types of systems being used to power various types of processes. A process is usually chopped up into five layers, because humans love pretending reality fits into neat boxes.
The image below gives you an idea of which technologies are going to be positioned at certain levels of a business process, later on in this piece:

L1. The value stream level
This layer is the reason the process exists. The promise to the business and the customer. The outcome definition, success criteria, guardrails, and risk tolerance.
Let me illustrate this with an example. Take the new employee onboarding process for a regulated company, because it includes policy, decisions, orchestration, real systems, and enough compliance to keep auditors emotionally nourished. The company wants a new hire productive fast, secure by default, compliant, and not accidentally locked out of everything on day one.
What does this layer look like in the real world?
A hiring manager submits a request with a target date. HR and IT commit to a defined outcome:
By 09:00 on the employee’s first day, the person hopefully has access to email, Teams, core apps, a laptop, mandatory training assigned, and the minimum permissions needed for their role. The process must also meet identity and security requirements.
What does automation support here, on this top-level process? For instance, an AI copilot that helps the HR manager draft the outcome statement, the risk framing, and the success metrics, or propose a checklist based on role type, yet humans own the accountability. It’s basically a human working in a gray chat-coffin, with AI making thinking and communication easier†.
† Read: Working in a chatbox was a mistake and Generative UI is the antidote | LinkedIn
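To make L1 concrete: the outcome contract is essentially structured data, not prose, so the layers below can be checked against it. A minimal sketch in Python, with hypothetical field names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class OnboardingOutcome:
    """L1 contract: what 'done' means, not how to get there (sketch)."""
    target: str                                      # the promise to the business
    deadline: str                                    # success criterion with a clock on it
    must_haves: list = field(default_factory=list)   # minimum viable day-one kit
    guardrails: list = field(default_factory=list)   # risk tolerance and compliance

# The onboarding example from above, encoded as a declarative contract.
outcome = OnboardingOutcome(
    target="new hire productive, secure by default, compliant",
    deadline="09:00 on day one",
    must_haves=["email", "Teams", "core apps", "laptop",
                "mandatory training assigned", "least-privilege role access"],
    guardrails=["identity verified", "security baseline met"],
)
```

The point is that L1 stays declarative: it says what success looks like, and every layer below can be audited against that statement.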

L2. The level of process chains and domains
This layer defines how choices get made. It is about rules and exceptions. Who gets what, who approves, when you should escalate, what is forbidden. It’s the layer where the organization’s paranoia becomes tangible in the form of documentation.
Take the example of the onboarding process again.
The system decides access and controls based on role, location, contract type, and risk score.
If the person is, say, an external contractor (like me, usually), you grant them time boxed access, you block sensitive repositories, always require sponsor approval, and you schedule automatic offboarding already at the start. And if the role is high risk, like finance or engineering with IP access or sensitive information, you trigger a background check, require manager plus security approval, apply stricter monitoring, and restrict admin rights. When the employee is remote or based in a restricted jurisdiction, you enforce compliant device posture, you restrict data sharing, and block specific services. You get the drift.
What does automation support here? An AI copilot helps a human translate policy into clear decision tables and plain-language rules, while agentic AI can test edge cases and flag missing rules. Humans still approve the final policy, because audits run on signatures.
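A decision table like the one described above can be sketched as a plain function. In reality the rules would live in a reviewed, versioned policy artifact, and every attribute name and threshold here is invented for illustration:

```python
def access_decision(role: str, contract: str, location: str, risk: int) -> dict:
    """Hypothetical L2 rule: map hire attributes to access controls."""
    d = {"time_boxed": False, "sponsor_approval": False,
         "background_check": False, "auto_offboarding": False,
         "device_posture_check": False}
    if contract == "external":               # contractors: least trust by default
        d.update(time_boxed=True, sponsor_approval=True, auto_offboarding=True)
    if role in ("finance", "engineering") or risk >= 7:   # IP or money access
        d.update(background_check=True, sponsor_approval=True)
    if location == "restricted":             # compliant device posture required
        d["device_posture_check"] = True
    return d

# The kind of edge case agentic AI is good at probing: an external,
# high-risk finance contractor in a restricted jurisdiction.
edge = access_decision(role="finance", contract="external",
                       location="restricted", risk=9)
```

Writing the rules this explicitly is exactly what makes them testable, and what lets an agent enumerate combinations a human policy author never thought of.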

L3. Subprocesses and scenarios
This layer is about decision making, sequencing, and coordination of activities. It contains lots of steps and handoffs across HR, IT, security, facilities, and procurement: the bit where nothing is difficult until you involve three departments and an onboarding deadline, with timers, queues, and dependencies between teams and systems. This is choreography, and the dancers are systems, humans, and deadlines.
In our example, once HR confirms the contract, the orchestration runs an end to end onboarding flow.
→ Create employee record in HR system → Create identity in IAM → Create mailbox and Teams account → Assign baseline groups based on L2 rule outcome → Trigger laptop request and shipping → Register device in MDM and apply compliance policies → Create ITSM tickets for app access where manual approval is required → Request building badge and physical access → Assign mandatory trainings and schedule first week sessions → Send welcome package email to employee and manager → Monitor progress and escalate if any step misses SLA
The goal of this part of the process is the workflow that coordinates multiple systems and teams to deliver a day one ready employee with traceability.
Automation, at this level, really starts to kick in. Agentic AI runs orchestration, it can call upon tools to do its job (like polling if a badge has already been issued), it monitors deadlines, writes the updates, and maintains an audit log. You can also use classic RPA here which can be triggered for legacy steps and browser based RPA can handle web portals that lack APIs.
It all sounds like a bunch of work and technical dependencies, but trust me, when you’re implementing this at scale and you’ve got the guardrails in place, this is where you start seeing lots of efficiencies.
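The orchestration flow above can be sketched as a tiny runner: a fixed step order, an audit log entry per step, and an escalation hook on SLA breach. This is a toy under stated assumptions (step names taken from the flow above, a flat 60-minute SLA), not a real orchestration engine:

```python
import datetime

AUDIT_LOG = []   # agentic orchestration keeps an audit trail as it goes

def run_step(name, action, sla_minutes, escalate):
    """Run one onboarding step, log it, escalate on failure or SLA breach."""
    started = datetime.datetime.now()
    ok = action()
    elapsed = (datetime.datetime.now() - started).total_seconds() / 60
    AUDIT_LOG.append({"step": name, "ok": ok, "minutes": round(elapsed, 2)})
    if not ok or elapsed > sla_minutes:
        escalate(name)
    return ok

def onboard(actions, escalate):
    """L3 choreography: fixed order, each step audited and SLA-guarded."""
    order = ["create_hr_record", "create_identity", "create_mailbox",
             "assign_baseline_groups", "order_laptop", "register_mdm",
             "request_badge", "assign_trainings", "send_welcome"]
    for name in order:
        if not run_step(name, actions[name], sla_minutes=60, escalate=escalate):
            return False   # stop the line; a real agent might retry or reroute
    return True
```

Each `action` would, in practice, be an API call, an RPA bot, or a human ticket; the orchestrator only cares that it reports back and stays inside its SLA.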

L4. Task execution in applications
This layer is the actual doing of tasks. People are clicking through apps, filling out forms, copying data, generating documents, updating records, sending emails, creating tickets. Traditionally it is also the place where your efficiency dreams go to die, because this is where your operational debt becomes visible.
In our example I’m referring to tasks that are executed across the common enterprise systems you get to work with when you’re in HR, facilities, or IT and need to onboard a person.
In (usually) ServiceNow, you:
→ Create onboarding ticket → Attach contract reference → Assign to IT queue → Create sub tasks for mailbox, access rights, laptop, badge
In Azure AD or IAM portal, someone:
→ Creates user account → Assigns license bundle → Adds user to baseline groups → Applies conditional access policies
In the Mobile Device Management (MDM) platform, someone:
→ Registers a device → Applies encryption and compliance rules → Pushes baseline apps → Enables remote wipe
And in the facilities portal, someone:
→ Requests badge creation → Assigns building zone access → Schedules badge pickup
This entire process is called “provisioning”, and the work happens across HR, IAM, ITSM, MDM, and facilities systems. AI-based automation is going to have a field day here. ‘Classic’ RPA executes stable, repeatable tasks inside fixed UIs, browser-based RPA navigates web portals and can adapt intelligently to UI variability, and agentic AI can call APIs where available and decide which task path applies based on L2 outcomes.
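The routing logic, which technology executes which task, can be sketched as a small dispatcher. The preference order (API first, then browser RPA, then classic RPA, then a human ticket) mirrors the paragraph above; the task names and capability labels are made up for the example:

```python
def provision(task: str, capabilities: dict) -> str:
    """Pick an execution channel per task (sketch, names are invented).
    Preference: API > browser RPA > classic desktop RPA > human ticket."""
    caps = capabilities.get(task, set())
    if "api" in caps:
        return "agent_api_call"      # agentic AI calls the API directly
    if "web_ui" in caps:
        return "browser_rpa"         # Atlas-style UI automation on web portals
    if "desktop_ui" in caps:
        return "classic_rpa"         # stable legacy UIs with fixed selectors
    return "manual_ticket"           # no automation path: route to a human

# What each hypothetical system in the onboarding flow actually exposes.
capabilities = {
    "create_user": {"api"},              # IAM exposes an API
    "request_badge": {"web_ui"},         # facilities portal, web only
    "update_legacy_hr": {"desktop_ui"},  # thick client from 2004
}
```

The interesting business fact hides in that dictionary: the shape of your systems landscape, not the cleverness of the model, decides how much of L4 you can actually automate.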

L5. Interface level work instructions
This layer is all about the pixel-level work. The location of buttons on a screen, the fields you have to fill in, selectors, screens, paths. It is brittle, boring, and everywhere, and it’s the job of the keyboard warrior to know how these things work. For now.
In our onboarding flow, we’re talking about the exact click path someone follows inside an IAM portal (Identity & Access Management, “who has rights to what”).
The HR support employee opens a portal, a typical part of the workflow looks like this:
→ Click Users → Click Create new user → Enter first name, last name → Set username using company pattern → Select license bundle E3 → Click Next → Add groups Teams Default, Finance Read, Reporting Basic → Click Create → Copy user ID → Paste user ID into ServiceNow onboarding ticket → Wait for provisioning confirmation → Take a screenshot of confirmation page for audit evidence (ok, old school, but it happens more than you think).
This is basically the click-by-click sequence inside each tool, including fields, buttons, and evidence capture. People learn this stuff by doing and by going through training. The same goes for AI. This type of AI trains by running through the application and clicking on things, filling out forms, submitting stuff, all under the watchful eye of a human who knows their current job is going to be obsolete in a year.
AI-based automation, especially browser-based RPA, is born for this stuff. Classic RPA runs the exact steps when a traditional (old skool) UI is stable, and browser-based RPA, the stuff we see in ChatGPT Atlas, handles web-based UI drift, exceptions, wrong data formats, etc. a lot better. But then again, both will still break when someone redesigns the portal, because even AI-based enterprise automation software treats backward compatibility like an optional hobby.
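One way to make L5 a little less brittle is to encode the click path as data instead of tribal knowledge, so the same recorded path can drive a classic RPA bot, a browser agent, or a test double. A trimmed sketch of the IAM path above, with invented action names and selectors:

```python
# The IAM click path encoded as (action, target) pairs. Every name here
# is illustrative, not a real portal's selectors.
CREATE_USER_PATH = [
    ("click", "Users"),
    ("click", "Create new user"),
    ("fill", "first_name"),
    ("fill", "last_name"),
    ("select", "license:E3"),
    ("click", "Next"),
    ("click", "Create"),
    ("capture", "user_id"),        # evidence capture for the audit trail
]

def execute(path, driver):
    """Replay a recorded path against any driver that implements
    click/fill/select/capture, and log every action as audit evidence."""
    log = []
    for action, target in path:
        getattr(driver, action)(target)
        log.append(f"{action}:{target}")
    return log

class EchoDriver:
    """Test double: stands in for an RPA bot or browser agent."""
    def click(self, t): pass
    def fill(self, t): pass
    def select(self, t): pass
    def capture(self, t): pass
```

When the portal gets redesigned, only the path data changes, not the replay logic, which is roughly the bet that both classic and browser-based RPA vendors are making.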

Where each technology performs best
In practice, most organizations that undertake this kind of “Manhattan Project” automation end up with a hybrid stack. An AI copilot for L1 and L2 thinking support. Agentic AI for L2 and L3 execution. Browser-based RPA for L4 and L5 web sludge, and classic RPA for the legacy desktop museum.
The image below basically sums it all up:

And yes, this means more work will collapse into that gray rectangle, the chat coffin. We will have to get used to working inside the chat window, rewriting our poorly constructed sentences, but progressively more integration with backend systems will happen. So prepare to give more approvals, be nudged every now and then by your bot, and babysit automated flows while you keep pretending it feels like progress.
Friend, that hybrid is going to eat the back office. It is already licking its lips. I’ve described that future before in my post-human back office piece.
If you want to read more about how these things work, I’ve written a whole bunch about how this all works in slightly more detail:
- The post-human back office | LinkedIn
- Empirical reflections on the silent murdering of the workforce via task-level automation by overconfident algorithms causing occupational extinction | LinkedIn
- The AI productivity divide | LinkedIn
- Working in a chatbox was a mistake and Generative UI is the antidote | LinkedIn
- The AI productivity paradox – AI works fine, you’re just measuring it like it’s 1950 | LinkedIn
The plateau problem when everyone buys the same brain
AI gives advantage right now because adoption is uneven. Some firms have it baked into workflows, while others still treat it as a toy. Some teams have learned how to work with it, while others are still arguing about whether they are allowed to paste an email into it. And some companies have real AI automation running in production, the JPMorgans, Walmarts, and Delta Airlines of this world, where others only have a demo and a Slack channel called “AI pilots” where nothing ever gets shipped.
The thing is that this unevenness is only temporary.
When AI becomes the norm in every organization, meaning it is cheap and reliable enough for the masses, it becomes the default layer in every tool you use, and the competitive advantage begins to melt. Not so much because the AI itself becomes weak, but simply because everyone gets access to the same baseline intelligence.
And that is the dirty secret behind every productivity boom. AI now feels like a superpower because you are early enough that your competitors are still crawling, but when they catch up, the advantage is table stakes.
And the plateau will not arrive gently.
Because companies will not only adopt AI assistants. They will also adopt automation, with orchestration and agents that execute tasks, and the innovators will start to use browser-driven automation that can operate any web tool with the enthusiasm of my Wiener dog running after a ball. They will eventually combine it all into an automation stack that makes the back office look like a simple vending machine. Invoice: insert data, clank, you have one invoice. Onboarding: insert data, clunk, you have one new employee.
Scroll-breaker. Anthropic actually tried to automate a vending machine using AI. The results are simply hilarious, read Anthropic’s AI ran a shop, and holy crap, it was a beautiful disaster | LinkedIn
At that point, every company has access to similar reasoning, writing, research and planning. Even creativity begins to converge, because many people will accept the first good looking output and move on with their lives.
And “Good Enough” becomes the new religion, and most companies already worship at that altar.
So what is left?
Speed and cost. And branding. That is not a strategy. It’s going to be good for agencies, but for the rest it will be a race to become a commodity with a logo.

The chat coffin economy
The interface matters more than people want to admit. Not because design is cute, but because the interface shapes behavior.
Chat based work has a specific psychological trap because it rewards quick replies, it supplies you with plausible answers, and moreover, it trains the brain to accept the first coherent output as truth, especially when you are tired, under time pressure or surrounded by people who confuse confidence with competence (the corporate narcissist, you know the kind).
Now take that trap and give it system access.
When your assistant can read your files and touch your CRM, write your documents for you, push changes to systems, and run your workflows, you will be living inside this gray rectangle. The AI assistant becomes the gateway to your work, and your role will be one of approving, nudging, correcting, and occasionally panicking. Read: Working in a chatbox was a mistake and Generative UI is the antidote | LinkedIn
In the article I argue that the current chat boxes are simply the wrong container. A chat window forces every task into a linear conversation, even tasks that need comparison, spatial thinking, visual evidence, and structured decision making. In that piece, I suggest we start using Generative UI to infuse the chat with real interface elements and a better experience.
For this blog, however, this is also why chat-based automation will be a competitive problem and not only a productivity problem: when work collapses into one interface, the experience across vendors will be similar.
I’m not only talking about things like language (read this piece: The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn); the workflows become similar, and ultimately so do the ideas†. These outputs will start to rhyme, not literally sadly, but in that eerie way where every paragraph has the same language and the same hygienic confidence. They will all sound like the same polite model wrote them, because it did. And I’m not delivering this warning from a safe distance.
I’m a walking case study myself.
I’ve caught myself using words I never used before. I’ve caught myself writing in those tidy, “reasonable” patterns that feel efficient and also faintly inhuman. Sentences that land too cleanly and phrases that sound like they were approved by an invisible committee of helpfulness. These changes in language are subtle at first, but then I realized my voice was being gently sanded down into a corporate smoothie.
I didn’t notice it until late, because it still sounded like me, but with the edges filed off.
Read: † Lobotomized AI* | LinkedIn

The post human back office and the great hollowing
Automation does not only remove tasks. It gets rid of learning as well.
Remember this piece my friend, because it is important.
A lot of organizational capability is not stored in documents. It lives in the muscle memory of the organization. You find it in small decisions that are made a thousand times and in people knowing the edge cases. And sadly enough, also in judgement and in corporate bias.
Judgement is the human ability to interpret context, detect when a “normal” rule does not apply, and choose a path that protects the real outcome. Corporate bias is the shadow side of that same muscle memory. It is the instinct to keep doing the familiar thing because it has worked before, because it keeps blame away, because it keeps the system predictable.
You get a corporate bias for free when an organization mistakes its habits for reality.
Early in your career, you’ve probably heard something like “This is how we do things here”. When it is said with the confidence of a law of physics, processes turn into truth, exceptions become folklore, and workarounds turn into policy. After a few years, nobody remembers why the steps exist, only that stepping outside them is treated like a minor crime.
This is the bias toward the status quo, built into the corporate nervous system. It hampers creativity as well. But it feels good because it is predictable. It is plannable, you can measure it, and it fits snugly into dashboards. It ultimately makes managers feel in control, because they can point at a process map and pretend that the map is the territory. Corporate bias gives the organization emotional comfort because it reduces uncertainty and limits surprises. It gives people a script, and scripts are soothing when you are afraid of being blamed.
And it is also dangerous, killing creativity and preventing internal disruption.
Because organizations become slow to notice that the world has changed. It trains people to optimize for compliance with the process rather than outcomes in reality, and it punishes initiative, because initiative creates variance, and variance is where risk lives. Corporate bias gradually turns a company into a machine that defends its own routines, even when those routines are outdated or harmful.
That is why it matters for capability.
When you automate work, especially when you move toward chat-based orchestration and agentic execution, you risk freezing that corporate bias into the system. The workflow will quite literally become code and approvals turn into rules.
The “usual way” becomes the only way. The organization gets faster but also more rigid, because it loses the small human rebellions that used to correct the process when reality did not match the template.
So corporate bias is not only a cultural annoyance.
It is a competitive risk.
Because the companies that survive the next decade will not be the ones that execute yesterday’s processes perfectly. They will be the ones that can detect change early, override their own muscle memory when needed, and build new habits before the old ones become a prison.
Now back to automation, because it operates along the same lines.
When you automate L4 and L5 heavily, the visible work disappears. That feels great at first. When you automate L3 as well, orchestration becomes machine-driven, humans see less of the process, and they learn less because they are no longer doing. They are now the supervisors of a system that behaves like a black box, but with good, approved manners. When that runs for a few years, you will notice that internal capability starts to hollow out. The organization is now fully dependent on the vendor stack, tied to model behavior, dependent on tooling updates, and dependent on the assistant being up and behaving.
That future is the backbone of the AI-automated back office. It is not going to be about robots taking our jobs in one dramatic moment; tasks will disappear quietly, and roles will collapse into supervision, exception handling, and approvals.
And this is where a lot of leaders make an expensive mistake when undertaking an AI-based intelligence transformation. They will celebrate the productivity gains, but they will miss the capability losses. They will fire the people who used to carry institutional knowledge, replace them with subscriptions, and then be stumped when everything breaks the moment the model shifts behavior, the vendor changes pricing, or a smart competitor introduces a disruption.
A company can become extremely efficient and also extremely fragile at the same time. That fragility will be the strategic weakness of the future of work.
And strategic weakness is what competitors love.

AI first versus AI native, and why native makes sameness worse
AI-first is the phase where existing companies bolt AI onto everything that moves, automate and augment until most of the workforce gets squeezed into two categories: customer-facing roles that still need a pulse, and everyone else lingering at the edges, handling exceptions, approvals, and the weird cases the system still can’t digest.
AI native, on the other hand, is when the company redesigns the operating model around intelligence. Work gets decomposed, tasks get assigned to agents, RPA, UI automation, and humans, and decision points get structured. Governance moves closer to delivery.
AI native is a powerful thing, and I can usually be found in programs undertaking this kind of work, but AI native also makes the sameness problem worse, because it industrializes intelligence. It turns thinking into a utility, and once that utility is common, the market becomes flatter.
This is why the “everyone buys the same brain” problem I’m addressing here is not some cute philosophical point. It is a future business reality.
When your competitors adopt the same assistant model, the same orchestration patterns, the same RPA and browser automation stack, and the same guardrails, then your operational advantage shrinks. The industry does not become smarter in a differentiated way, but in a uniform way: the ceiling rises for everyone and the distance between competitors shrinks.
Let that sink in for a couple of minutes.

What is competitive advantage gonna be like
If intelligence equals commodity, differentiation moves to places that AI does not commoditize easily.
It moves to inputs.
It moves to taste, to things like brand, to (workflow) design, and also to things like trust, and especially to company culture, because your company identity will in fact be the driver of your future profitability. The same goes for freelancers and execs, by the way, and if you want a scroll-breaker, go read this†.
When I mention inputs, I’m talking about your proprietary data, your internal telemetry, customer signals, your edge cases, and the decision history. Taste means the ability to judge quality and novelty. It is the ability to reject the first plausible answer and to hold a higher standard than “sounds good”. With workflow design I’m talking about your interface choices, your process design choices, and your control points. If your whole company runs through the chat coffin, you will drift toward average faster than you expect.
Trust means reliability, safety, and consistent delivery. In a world full of fluent output, reliability is going to become rare and valuable. Culture, on the other hand, means curiosity and experimentation. A company that trains people to explore and test will outmaneuver a company that trains people to approve and comply.
And now comes the part that everyone wants to skip because it requires actual work.
Hear me out. . .
If intelligence is cheap and everywhere, then competitive advantage will be a design problem, not a procurement problem, because you do not win by buying the same model faster. You only win by building the system around it in a way other people cannot copy without copying your identity.
That’s why I am a big proponent of AI+ design instead of +AI design. If you want to know more about the difference, read China’s AI+ plan and the Manus middle finger | LinkedIn
Inputs are not about data alone. They are what your company chooses to pay attention to, what it measures, what it ignores, and what gets recorded (or what it refuses to record because it is inconvenient). Inputs include your customer conversations, your incident logs, support tickets, and process telemetry, but also the operational scars and the weird edge cases that never show up in the quarterly slides. When you capture those well and feed them back into your workflows, the automation becomes specific to your reality.
Your competitor can rent the same “brain,” but they cannot rent your lived experience unless you sell it to them like an idiot.
Taste is the missing organ, and most firms are already dying from the lack of it.
When I say ‘taste’, I mean the ability to look at a polished answer and say: no, not good enough, too generic, too safe, similar to what everyone else will ship. And yes, taste is also part of your brand, but brand in the grown-up sense. Not a logo and a tone-of-voice guideline thingy. I mean Brand with a capital B, as an operating system that is very close to identity: what you stand for, what you refuse, what you optimize for, but also how you treat risk, how you treat customers, and how you treat truth when that truth is inconvenient. And if you cannot articulate that, the model will articulate it for you, and the thing about these models’ values is that they are a statistical average of the internet with a corporate safety lick of paint on top.
With AI, and without Identity, Brand and taste, you just outsourced your company to autocomplete.
Workflow design is the spot where the battle is most visible.
Most companies will push everything into the gray rectangle because it is convenient. One assistant, one interface, one generic stream of requests and approvals, and above all, one place to paste blame when something goes wrong. The chat coffin is your workplace of the future. It will make you fast, but it will also make you shallow, because it encourages “good enough” answers, quick approvals, and minimal exploration. If you want to stay competitive, you need workflows that create divergence on purpose. Interfaces that force comparison, show evidence, surface uncertainty, and punish premature certainty. You need decision points that demand real judgement, not a checkbox.
Trust is the boring weapon, but boring weapons do win wars.
Trust means that your system behaves reliably under pressure (or that it fails in predictable ways). It needs guardrails, it logs what it did, and it can explain itself well enough for governance and well enough for humans to recover when it breaks. In a world where everyone can generate fluent output, reliability is the scarce asset. Customers will not reward you for sounding smart, but for being dependable when it matters.
And culture is the multiplier, and also the trap.
Culture is the set of behaviors your organization repeats when nobody is watching. I’ve always liked that definition. It is what people do when the process does not fit, when the customer is angry, and when the assistant outputs something plausible and wrong.
A company that trains people to explore, test, challenge, and argue will outmaneuver a company that trains people to approve, comply, and forward the output.
Curiosity is key, and it should not be treated as a slogan but as a capability. On the homepage of my personal bio website, one statement stands out. It’s from Einstein (for real), and it says “Curiosity has its own reasons for existing”. Experimentation is not a hackathon but a habit, and psychological safety is not some stupid poster on the wall; it’s whether someone can say “this is wrong” without being punished.
And yes, your company’s identity becomes a driver of profitability, because identity shapes decisions, decisions shape behavior, behavior shapes outcomes, outcomes shape trust, and ultimately (customer) trust shapes your revenue. That’s the loop.
It is not poetic but mechanical, and there is nothing wrong with that.
So if you want a short version that you can tattoo on the inside of your eyelids, here you go: “When intelligence is a commodity, advantage comes from what you feed it, how you judge it, how you design the work around it, how reliably it behaves, and whether your people are trained to think or trained to obey”.
Read
- Welcome to the solopreneur apocalypse | LinkedIn
- Study reveals Gen AI made execs dumber than before | LinkedIn

How to stay different while using the same automation stack
This is the practical playbook I live by, written for people who want future-proof results mixed with a bit of vision, instead of a spiritual relationship with hype.
⭐ 1 You build divergence into the workflow
Chat is fine for requests and summaries, but terrible for exploration. Competitive advantage requires exploration.
Use structured interfaces where it matters: dashboards that show alternatives side by side, decision screens that force tradeoffs, and evidence views that show source material next to conclusions.
That is the core of what I mean with the Generative UI argument. The interface should match the work, rather than forcing every task into a linear conversation.
So please do experiment with GenUI. Google is already backing you with their A2UI protocol, so you’re definitely not alone in this.
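To make the idea of a decision screen that forces tradeoffs concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the `Decision` and `Option` names, the rubric of "evidence plus tradeoffs"); the point is only that approval is structurally impossible without real alternatives on the table.

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    evidence: list[str]   # sources or excerpts backing this option
    tradeoffs: list[str]  # what we give up by choosing it

@dataclass
class Decision:
    question: str
    options: list[Option] = field(default_factory=list)

    def approve(self, chosen: str) -> str:
        # Force divergence: no approval without at least two options,
        # each carrying evidence and explicitly named tradeoffs.
        if len(self.options) < 2:
            raise ValueError("need at least two options before approving")
        for opt in self.options:
            if not opt.evidence or not opt.tradeoffs:
                raise ValueError(f"option '{opt.name}' lacks evidence or tradeoffs")
        if chosen not in {o.name for o in self.options}:
            raise ValueError(f"'{chosen}' was never on the table")
        return f"approved: {chosen} for '{self.question}'"
```

A single "good enough" answer pasted into this screen simply does not validate, which is exactly the premature certainty the chat coffin encourages.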
⭐ 2 Treat automation as a learning system
When agents and bots do everything, humans become passive, and the thing with passive humans is that they lose judgement, and therefore your organization loses resilience.
So design workflows where humans review reasoning, edge cases, failures, and exceptions. Give people a reason to understand what happened, not only approve the outcome. Keep a deliberate loop where policy and rules evolve from real behavior.
This keeps capability alive.
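One way to wire that loop, sketched under my own assumptions (the field names and thresholds are illustrative, not a real framework): failures and exceptions always get human eyes, low-confidence work becomes a learning opportunity, and a random sample of "fine" outputs is spot-checked so judgement does not atrophy.

```python
import random

def route_for_review(task, sample_rate=0.1, rng=random.random):
    """Decide whether a human sees the agent's reasoning, not just the result.

    task: dict with 'confidence' (0..1), 'is_exception' (bool),
    'failed' (bool), and 'reasoning' (str). All names are hypothetical.
    """
    if task["failed"] or task["is_exception"]:
        return "human_review"   # failures and edge cases always get eyes
    if task["confidence"] < 0.8:
        return "human_review"   # low confidence is a teaching moment
    if rng() < sample_rate:
        return "spot_check"     # random sampling keeps judgement alive
    return "auto_approve"
```

The routing rule itself should evolve as the spot checks reveal where the agent actually fails, which is the "policy evolves from real behavior" part.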
⭐ 3 Own your evaluation
Most companies measure output volume. They love counting tickets, incoming emails, time saved. They simply celebrate activity.
But in an organization automating with AI, activity is boring and bland.
Competitive companies measure quality, risk, and novelty.
So you build evaluation rubrics for decisions, for customer communications, for knowledge work outputs. Track when the assistant helped and when it misled. Track the failure patterns. Train the organization to recognize cheap fluency.
If you do not own evaluation, the model defines quality by default, and the model’s taste is average.
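A minimal sketch of what "owning evaluation" can look like in code, with an assumed four-dimension rubric (accurate, specific, on-brand, novel — pick your own): every output gets scored by a human on every dimension, and the log tracks how often the assistant misled you rather than how many tickets it closed.

```python
from collections import Counter

# Assumed rubric dimensions; a real one would be specific to your business.
RUBRIC = ("accurate", "specific", "on_brand", "novel")

class EvalLog:
    def __init__(self):
        self.outcomes = Counter()

    def score(self, output_id, marks):
        """marks: dict of rubric dimension -> bool from a human reviewer."""
        missing = set(RUBRIC) - set(marks)
        if missing:
            raise ValueError(f"unscored dimensions: {sorted(missing)}")
        verdict = "helped" if all(marks.values()) else "misled"
        self.outcomes[verdict] += 1
        return verdict

    def failure_rate(self):
        total = sum(self.outcomes.values())
        return self.outcomes["misled"] / total if total else 0.0
```

The mechanics are trivial; the competitive part is that the rubric encodes your taste instead of the model's.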
⭐ 4 Use the stack to compress boring work, not your identity
Automate L4 and L5 aggressively, because nobody becomes a better human by copying IDs between portals.
Be careful with L2 and L3. Those layers contain judgement, escalation logic, and organizational intent. If you outsource that logic blindly, you outsource your strategic behavior.
That is how companies become similar without noticing.
⭐ 5 Build proprietary inputs and proprietary loops
Your data plus your workflow plus your evaluation loop becomes a moat.
Not a magical moat, I’m talking about a real one.
A competitor can buy the same base model. They cannot buy your internal context, your feedback loops, your process telemetry, your decision history, your incident learnings, your service reality.
That is where differentiation lives.

AI makes you average if you let it
AI will deliver big gains in this decade, and then the plateau arrives. At that point, the companies that treated AI as a tool will look like everyone else, but the companies that treated AI as infrastructure, and protected judgement, curiosity, and evaluation, will keep their edge.
The outcome is not predetermined.
Average is a choice.
It’s a very convenient choice, profitable for vendors, and above all very comfortable for leaders who love dashboards more than thinking. But when you want to win, you need the boring discipline of better inputs, redesigned workflows, better interfaces, and ultimately more human curiosity.
Otherwise you will get speed and lose differentiation.
And you will do it while sitting inside a gray rectangle, approving tasks you no longer understand, for systems you no longer control, while telling yourself this is the future.
Signing off this too-long piece,
Marco
I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.
👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google, and the AI engines appreciate your likes by making my articles available to more readers.
To keep you doomscrolling 👇
- I may have found a solution to Vibe Coding’s technical debt problem | LinkedIn
- Shadow AI isn’t rebellion it’s office survival | LinkedIn
- Macrohard is Musk’s middle finger to Microsoft | LinkedIn
- We are in the midst of an incremental apocalypse and only the 1% are prepared | LinkedIn
- Did ChatGPT actually steal your job? (Including job risk-assessment tool) | LinkedIn
- Living in the post-human economy | LinkedIn
- Vibe Coding is gonna spawn the most braindead software generation ever | LinkedIn
- Workslop is the new office plague | LinkedIn
- The funniest comments ever left in source code | LinkedIn
- The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn
- OpenAI finally confesses their bots are chronic liars | LinkedIn
- Money, the final frontier. . . | LinkedIn
- Kickstarter exposed. The ultimate honeytrap for investors | LinkedIn
- China’s AI+ plan and the Manus middle finger | LinkedIn
- Autopsy of an algorithm – Is building an audience still worth it these days? | LinkedIn
- AI is screwing with your résumé and you’re letting it happen | LinkedIn
- Oops! I did it again. . . | LinkedIn
- Palantir turns your life into a spreadsheet | LinkedIn
- Another nail in the coffin – AI’s not ‘reasoning’ at all | LinkedIn
- How AI went from miracle to bubble. An interactive timeline | LinkedIn
- The day vibe coding jobs got real and half the dev world cried into their keyboards | LinkedIn
- The Buy Now – Cry Later company learns about karma | LinkedIn
