I wake up at five in the morning. Not because I am some kind of productivity monk, but because my brain made a unilateral decision years ago that this is when the day starts, and it has not consulted me since.
The phone comes first, and yes, this is the kind of confession that would disappoint my younger self and delight my current one, because I have long since made peace with the fact that the field I work in does not sleep, does not observe Dutch public holidays, and absolutely does not wait for me to finish my coffee before publishing something I need to read. So I scroll through whatever the night produced, which is usually more than I can process before the caffeine kicks in.
Then the coffee happens, and then Slob happens, which requires some explanation. Slob is my wiener dog – a dachshund, if you thought otherwise – and Slob wants his morning cuddle. He tolerates it for exactly as long as he decides is sufficient, and then he is done with me and moves on with his day, which I find both relatable and slightly insulting.
After the shower and getting dressed, the ritual begins in earnest, and I use the word ritual because that is what it has become: a sequence of actions so repeated that deviating from it feels physically wrong. The AI agents I have set up have been running through the night, pulling papers, flagging developments, sorting signal from noise across the sources I care about, so I move through their output with the specific efficiency of someone who has learned to skim without missing things, saving what matters and discarding the rest without guilt.
Yes, trying to keep up with what happens in AI requires discipline, unfortunately. I should have stayed in chemistry.
Teams comes next, where I check who was still active at midnight, because the timestamps on messages tell me more about the real state of a project than any Asana status update ever will.
And then I open Edge.
Yeah, Microsoft Edge, the browser that ships pre-installed on Windows machines. I use it every morning despite working on a Mac, despite having professionally tested ten agentic browsers, despite lecturing on AI transformation for a living, despite building automation systems, and despite knowing with full technical clarity that what I am using is a relic of an interaction paradigm that is actively being replaced while I sit here using it. Then I open a Citrix instance on top of it, because the light plastic laptop the company provided feels like a form of corporate punishment I did not earn, and I read through the news, specifically the non-woke variety, which has the practical advantage of being completable in a reasonable amount of time.
And sitting there, in the early morning light, with my old-fashioned Citrix session running inside Edge on a Mac because the corporate laptop is a crime against productivity, I am – objectively – a person who understands better than most where this is all going. I know that the webpage I am reading will eventually be assembled for me by an agent that knows my preferences better than the editor who wrote it. I know that the browser I am using is being redesigned around me by people who have correctly identified that clicking things is an inefficiency the market will eventually price out. And I certainly know that the ritual I just completed – the feeds, the papers, the Teams timestamps, the news – will one day be a single briefing, generated fresh each morning, tailored to exactly what I need, delivered without me having to open seven things in sequence like some kind of knowledge worker from 2019.

And I know all of this with the specific confidence of someone who builds these systems professionally and then explains them to people who will build more of them. And then I close the news, and I open Edge again for something else, out of pure muscle memory, without thinking about it.
Old habits.
Intermezzo: A word from my other hobby
AI projects fail because nobody in the room understood what they agreed to.

- The AI Expert Programma at Inholland Academy, which I teach, gives you the technical depth to lead AI projects. From Alan Turing via classical Machine Learning to Transformers, World Models, strategy, governance and adoption.
- Basically, when you’re done, you will know how to lead AI projects.
- Open to all professionals. No technical background required, but a love for it makes things a lot easier.
- Rated 9+ (out of 10, and no, that’s not the minimum age).
The internet you know is already a corpse
Here’s a number that should ruin your afternoon. In the meta-study I conducted in 2025, I estimated that somewhere between 42% and 65% of all web content is generated by bots, and on top of that, a growing number of bots are scraping content, indexing pages, running price comparisons, filling out forms, checking inventory, probing for vulnerabilities, and occasionally doing something useful.
The web was built for human eyeballs. Every button placement, and even the friggin’ cookie consent banners that require a law degree to decline – all of it designed for a carbon-based creature with a mouse and a short attention span. But that creature is increasingly not the majority visitor.
This is what people mean by the zombie internet. In dead-internet theory, the content is there and the servers are running, but the interaction is largely bots talking to infrastructure that was designed for people. A website built for humans, read primarily by machines; a digital storefront where most of the footfall is automated and the shop assistant is increasingly automated too.
Nearly everything in the transaction is a robot except the credit card – and give it six months, because PayPal and Visa have already rolled out their agents.

What agentic browsers actually are
Most agentic browsers are Chrome with AI, some scaffolding and a computer vision model bolted on, and the result operates a browser the way you do – except it doesn’t need the amount of coffee I drink every morning, doesn’t get distracted by LinkedIn notifications like I do, and certainly doesn’t spend eleven minutes deciding which tab to close first. It reads pages, clicks buttons, fills forms, compares prices and even books things, all without you being present.
OpenAI’s Atlas does this. The Browser Company built Dia with this in mind. Perplexity has Comet in the pipeline. These systems receive a goal like “find me the cheapest flight to Lisbon under 200 euros, not Spirit, I have dignity” and they go execute it across multiple websites, the same way you would, just faster and without the suffering.
The problem is that these agents are doing something structurally awkward.
They are using interfaces built for humans to accomplish tasks that are fundamentally machine-to-machine.
That is similar to sending someone to a library to manually copy out every page of a book because the publisher won’t give you the PDF. The information is there, and you have access to it, but the method is simply absurd.
Amazon noticed this in early 2025 and started blocking agentic browsers from certain product flows. Their argument was essentially that agents bypass advertising and ignore the carefully engineered persuasion architecture that turns browsers into buyers. They are not wrong of course because an AI agent does not care that the promoted product is at the top of the page. It optimizes for your stated criteria and ignores the rest. The entire attention economy including every countdown timer, every “only 3 left in stock” lie – those all become irrelevant when the entity doing the browsing cannot be manipulated.
The agent finds the thing you asked for and stops, which is genuinely useful for you and genuinely catastrophic for the attention economy that has been monetizing your psychology since 1996.
This is why Amazon banned them. Not for your protection of course, but for theirs.

The web is built for fingers
Websites are expensive to maintain for an audience that is increasingly not the one looking at them, and the companies building the next generation of the internet have noticed this with the specific clarity of people who have a financial interest in noticing it fast.
The current situation is a transition period, in which agentic browsers are navigating the internet by pretending to be a confused human clicking through pages designed for confused humans, extracting information the hard way from interfaces that were never meant to be machine-readable, like a Formula 1 car stuck behind a tractor on a country road because nobody built the motorway yet.
The transition we now find ourselves in is all about replacing human-facing websites with something that was built for agents in the first place.
Companies will increasingly expose services to the outside world instead of a website: structured, machine-readable endpoints that an AI agent can query directly and execute transactions through, without any HTML or stylesheets, and certainly without pop-ups asking if you would like to join a newsletter you will never read. The interaction becomes more transactional – ‘here is what we offer, here are the terms, here is the price, confirm or walk away’ – and the entire choreography of persuasion that sits between those two points dissolves, because there is nobody left to persuade.
This is already happening through MCP, the Model Context Protocol, an open standard that Anthropic created and Microsoft donated to the Linux Foundation in December 2025, effectively turning it into infrastructure. MCP is a universal connector for AI systems: it lets an AI model plug into a service – whether that service is a calendar, a database, a shop, a government form, or a hospital record system – and interact with it through a standardized interface that both sides agreed to speak before the conversation began. The AI does not need to visually parse a webpage anymore; it simply calls the service directly, receives structured data back, and acts on it, the same way applications have been calling APIs for twenty-odd years, except now the thing doing the calling is making decisions rather than following instructions.
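To make the shape of that exchange concrete, here is a toy sketch in plain Python – not the real MCP SDK, just an illustration of the idea: a service exposes named tools with declared parameters, and an agent calls them with structured arguments instead of scraping a page. The tool name, catalog, and product IDs are all invented.

```python
import json

# Toy illustration of the MCP idea (not the actual SDK): a service
# declares tools with typed parameters, and an agent calls them with
# structured arguments instead of parsing HTML.

TOOLS = {
    "check_price": {
        "description": "Return the current price for a product ID.",
        "params": {"product_id": "string"},
    },
}

# The data stays on the service side; only answers cross the wire.
CATALOG = {"sku-42": {"name": "Noise-cancelling headphones", "price_eur": 149.0}}

def handle_call(request: str) -> str:
    """Dispatch a JSON-RPC-style tool call and return structured data."""
    req = json.loads(request)
    if req["tool"] == "check_price":
        item = CATALOG.get(req["args"]["product_id"])
        if item is None:
            return json.dumps({"error": "unknown product"})
        return json.dumps({"name": item["name"], "price_eur": item["price_eur"]})
    return json.dumps({"error": "unknown tool"})

# The agent's side: no CSS selectors, no cookie banner, no pop-ups.
response = json.loads(handle_call(json.dumps(
    {"tool": "check_price", "args": {"product_id": "sku-42"}})))
print(response["price_eur"])  # 149.0
```

Both sides agreed on the interface before the conversation began, which is the whole point: the agent never has to guess where the price lives on the page.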
Google, meanwhile, is backing something called A2UI, Agent-to-UI, which takes this whole concept a step further by letting the agent generate the interface itself rather than simply querying data and running a transaction. An agent assembles a view of exactly the information relevant to your specific decision at this specific moment, renders it in your browser, and dissolves it when you are done. It does not load a webpage, it has no navigation menu, and it especially has no footer with seven links to documents that nobody reads.
With A2UI, you get the interface your task requires, assembled on demand and gone when the task is complete. The UI generated this way is called Generative UI.
The webpage – a persistent destination with a URL and a design and an intended user journey – is becoming optional.

Generative UI and the death of the internet
Traditional websites work on a publication model. A designer makes a page and then a developer builds it, it gets deployed to a server through pipelines, and every person who visits that URL sees the same thing, more or less, regardless of who they are or what they actually came to do. The interface is fixed in advance by someone who had to guess what you needed before you arrived, using data from people who came before you, and all of this is optimized for outcomes that may or may not align with yours. The average user this interface was designed for does not exist as an individual. They are a statistical ghost haunting the design system and captured in the form of ‘personas’.
But increasingly, I see chatbots being outfitted with Generative UI. This technology is architecturally different in the sense that you are not loading a pre-built page; instead, an AI generates the interface for you in real time, based on your current context, your history with the service, what you intend to do, and everything the system has learned about how you work. The result is that two people asking about the same product see different interfaces – one sees a detailed comparison table because they said they are evaluating options, another sees a streamlined purchase flow because they have already decided, and a third sees a returns policy summary because the system inferred from their browsing pattern that they are trying to figure out whether to keep something they already bought.
With Generative UI, the interface is assembled on demand, specific to the moment, and it dissolves when the moment is over.
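A minimal sketch of that mechanism, under loud assumptions: the component names, intent labels, and context fields below are invented for illustration and belong to no vendor's actual API. The point is only the shape of the idea – a view is a throwaway spec assembled from a component library per request, not a page someone designed in advance.

```python
# Hedged sketch of Generative UI: the interface is a spec assembled
# per request from a component library, driven by inferred intent.
# Component names and intent labels are hypothetical.

COMPONENT_LIBRARY = {"comparison_table", "purchase_flow", "returns_summary"}

def assemble_view(intent: str, context: dict) -> list[dict]:
    """Return a UI spec tailored to this moment; it is discarded after use."""
    if intent == "evaluating":
        return [{"component": "comparison_table", "items": context["candidates"]}]
    if intent == "buying":
        return [{"component": "purchase_flow", "item": context["chosen"]}]
    if intent == "returning":
        return [{"component": "returns_summary", "order": context["order_id"]}]
    # Fallback: show whatever comparison data we have.
    return [{"component": "comparison_table", "items": context.get("candidates", [])}]

# Two people, same product, different interfaces:
print(assemble_view("evaluating", {"candidates": ["A", "B"]})[0]["component"])  # comparison_table
print(assemble_view("buying", {"chosen": "A"})[0]["component"])  # purchase_flow
```

In a real system the assembly decision comes from a model rather than an `if` ladder, and the spec streams to the browser as rendered components, but the lifecycle is the same: generated, used, dissolved.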
Companies like Thesys are already building the middleware infrastructure that makes this possible at production scale. Their product, C1, sits between an AI model and your application’s frontend and handles the generation and rendering of UI components in real time. It works with a library of pre-built components designed specifically for AI assembly – interactive charts, data tables, forms, comparison views, confirmation dialogs – and it assembles them based on what the AI determines the current interaction requires. The result streams to your browser and renders as a live, functional interface. It sits somewhere between a chat window and a dynamic webpage, and it did not have a name or a market two years ago.
Anthropic has been running experiments with this approach inside Claude, generating contextual interface cards that appear alongside responses when the conversation reaches a point where a rendered component would be more useful than a text answer. The difference between this and a chatbox is that a chatbox enforces a linear conversation structure on every task, simply because linear conversation was the easiest thing to build first – but to us humans, a chatbox feels more like a chat-coffin than anything alive. Generative UI abandons the assumption that every interaction is a conversation and generates whatever container the task actually requires, which is a more radical idea than it sounds, because it means the interface is no longer a product that gets designed once and shipped. It is an output that gets generated continuously, shaped by the interaction, and discarded when it is no longer needed.
I wrote an article arguing that working in a chatbox was a mistake. I was right when I wrote it, and the argument has only gotten more correct since.

The agent marketplace and the skills economy
Your AI agent, in its current form, is rather capable, and in its near-future form, it will be extensible in ways that change what the word “capable” means.
The concept of skills – the modular, downloadable capabilities that an agent can acquire and deploy – is already live in early form across several platforms, and it points toward something that will eventually look like an app store, except the apps do things autonomously on your behalf rather than waiting for you to open them and click around inside them.
A skill is a defined capability with a specific interface, and you get them in all sorts of forms, or you build them yourself. Skills exist for booking a flight, extracting data from a PDF, filing an expense claim, searching a database, checking inventory across suppliers, or even negotiating a return. Skills come from marketplaces maintained by platform providers, or they can be downloaded from open source repositories on GitHub where developers publish them under permissive licenses, and increasingly they come from enterprise vendors who build proprietary skills for their (corporate) customers and distribute them through managed channels.
Your agent acquires a skill the way you used to install an app, except the agent actually uses the skill when it is relevant rather than leaving it on a home screen for eight months between accidental taps.
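Structurally, a skill is not much more than a declared capability the primary agent can route work to. Here is a minimal sketch of that shape – the registry, the skill name, and the expense-claim behavior are all invented for illustration, not any platform's actual skill format.

```python
# Hypothetical sketch of a skill registry: installing a skill just
# registers a named, described capability; the primary agent routes
# tasks to it when relevant. All names here are invented.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str
    description: str          # what the agent reads to decide relevance
    run: Callable[[dict], dict]

REGISTRY: dict[str, Skill] = {}

def install(skill: Skill) -> None:
    """Unlike an app on a home screen, a registered skill gets used."""
    REGISTRY[skill.name] = skill

def route(task: str, args: dict) -> dict:
    """The primary agent dispatches a task to the matching skill."""
    return REGISTRY[task].run(args)

install(Skill(
    name="expense_claim",
    description="File an expense claim from a receipt total.",
    run=lambda args: {"status": "filed", "amount_eur": args["amount_eur"]},
))

print(route("expense_claim", {"amount_eur": 23.5})["status"])  # filed
```

In a real marketplace the routing decision is made by a model reading the descriptions, and the `run` function has access to your accounts and credentials – which is exactly where the security questions below begin.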
The economic structure of the skill marketplace will determine a lot about whose interests your agent actually serves. A skill that is free to install and makes money by steering your agent toward certain vendors is functionally an advertisement embedded inside your decision-making process, and it’s more invasive than a banner ad because it operates before you have formed a preference rather than after.
We are seeing the first signs of this within OpenAI with their ads, and if you think that is a coincidence rather than a roadmap, I have a sponsored recommendation I would like your agent to consider. You only have to look at the ad-supported internet as an analogy. It arrived as a default, and then it became infrastructure, and then it became invisible, and by the time anyone thought seriously about what it was doing to the information environment it was too late to build something different without destroying something people depended on.
The skill marketplace is at the default stage right now. The decisions made in the next two years about what monetization models are acceptable inside agent capabilities will determine whether the agentic internet is a tool that works for its users or an optimization engine that works for its advertisers while wearing a very convincing mask of helpfulness. These are not the same thing, and they will not feel different until they are.
And then there’s the other kind: the skill that costs money upfront and has no revenue model other than working purely for you. The distinction between these two things will not always be obvious, and the people building the marketplaces have strong incentives to make it as invisible as possible.
The open source ecosystem around agent skills is developing in parallel with the commercial one, and it introduces a different set of problems. These skills are built by developers with varying levels of rigor about what their code actually does when an agent runs it with access to your accounts, your data, and your transaction authority. The security implications of a poorly built skill installed into an agent that has your banking credentials are not abstract.
What the skill marketplace creates, at the structural level, is a new layer of the economy between you and the services you use, one that is populated by small specialized agents doing specific jobs on your behalf, coordinated by a primary agent that routes tasks to the right capabilities and synthesizes the results.
Your primary agent is the manager and the skills are the specialist contractors. The services are the external organizations the contractors interact with. And you are the executive who specified the outcome without necessarily understanding every step of the process that produced it.

What happens to the companies on the other side of the screen
If users are living in a generated browser environment managed by their personal AI, the organizations that used to build websites to reach those users face a question they were not architecturally prepared for. Organizations are now thinking about what to build when the person you are trying to reach is not the one doing the browsing.
The answer emerging from the more forward-looking corners of enterprise strategy is that you build services, you deploy agents, and you expose structured data under governance frameworks that let other agents access it without you losing control of it.
A company’s customer service function, in this model, becomes an agent with transactional capabilities and a rules engine that defines with precision what it can approve, what it can negotiate within defined parameters, and what it must escalate to a human who has the authority to make a decision the rules engine was not built to handle. When your personal agent contacts a retailer’s service agent to resolve a delivery problem, nobody opens a browser, nobody fills out a contact form, and certainly nobody listens to hold music while talking to someone reading from a script. Two agents exchange structured information, a rules engine evaluates the situation against its policy tree, and in the end a resolution is approved or escalated, and you receive a notification with the outcome.
The entire interaction takes seconds and generates an audit trail that neither party had to maintain manually.
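The rules-engine side of that exchange can be sketched in a few lines. The claim types, thresholds, and outcome labels below are invented for illustration; the point is the policy tree – a machine-checkable boundary between what the service agent may decide alone and what goes to a human.

```python
# Hedged sketch of a service agent's policy tree: every incoming
# claim resolves to an action within defined authority, or escalates.
# Claim types and euro thresholds are hypothetical.

def evaluate(claim: dict) -> str:
    """Return 'approve', 'negotiate', or 'escalate' for a claim."""
    if claim["type"] == "late_delivery":
        if claim["credit_requested_eur"] <= 25:
            return "approve"      # within autonomous authority
        if claim["credit_requested_eur"] <= 100:
            return "negotiate"    # counter-offer within set parameters
    # Anything the tree was not built for goes to a human with
    # the authority to actually decide.
    return "escalate"

print(evaluate({"type": "late_delivery", "credit_requested_eur": 15}))   # approve
print(evaluate({"type": "late_delivery", "credit_requested_eur": 60}))   # negotiate
print(evaluate({"type": "damaged_goods", "credit_requested_eur": 10}))   # escalate
```

Because every branch is explicit, the audit trail the article mentions falls out for free: log the claim, the branch taken, and the outcome, and neither party maintains anything manually.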
Sales and marketing, in this environment, do not disappear, but they transform. The brand that reaches you through your agent’s recommendation layer does so by being genuinely relevant to your stated context rather than by out-spending competitors for placement in a search result. This is a worse outcome for companies with large advertising budgets and a better outcome for companies with genuinely differentiated products, which is one of the reasons the advertising industry is watching the agentic browser space with the specific expression of someone who has been told their building has a structural problem but has not yet been told how serious.
Companies that survive this transition will be the ones whose services are machine-readable and whose agents are capable of representing their offerings accurately in autonomous negotiations and especially those whose data is actually theirs – not leased from a cloud provider under terms that could change and exposed through interfaces that other agents can trust.

Your data is finally allowed to stay home
This brings me to something the EU has been building for roughly a decade, that most people outside government circles have not heard of, and that most people inside enterprise circles reference in meetings without being entirely certain what it means.
I’m talking about data spaces.
A data space is not a database or a cloud storage bucket, and it’s certainly not a data marketplace, though it is frequently confused with all three. It is, however, a governed ecosystem where organizations share data with each other without transferring ownership of it.
That last part is the heart of the concept. You are not exchanging data, and you are not changing its ownership. It remains in your datastore, technically and legally. That may sound like a minor administrative distinction until you think through what data ownership actually means in a world where your data is your competitive position.
Today, when company A shares data with company B, the data moves. It gets copied, ingested, stored on company B’s infrastructure, and from that point forward company A’s control over it is limited to whatever the contract says, and enforcing data contracts is expensive and slow and usually only happens after something has gone wrong in a way that is impossible to ignore. The data is effectively gone.
But in a data space, the data stays at the source. Company A keeps everything on its own infrastructure. Company B’s systems get access to query that data under specific, machine-enforced conditions – who can query it and how many times and for what declared purpose and under what legal basis – and when those conditions expire, the access expires with them, automatically, without anyone having to remember to revoke a permission. The data never left the building, which means the competitive value embedded in that data never left either.
The technical component enforcing this is called a connector – specifically, the Eclipse Dataspace Connector, an open-source implementation of the International Data Spaces Association’s reference architecture. Every participant in a data space runs a connector, and this thing is the enforcement layer. It validates identity, checks the governance rules, and monitors what is being accessed and how, and if a request falls outside the agreed terms, it refuses it.
It is the mechanism that makes “data sovereignty” something other than a marketing phrase.
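To show what "machine-enforced conditions" means in practice, here is a toy sketch modelled loosely on the connector idea – not the Eclipse Dataspace Connector's real API. The policy fields, partner names, and inventory data are invented; the mechanism is the point: access is checked against purpose, query budget, and expiry, and the answer crosses the boundary while the dataset never does.

```python
# Toy data-space connector: queries are answered only while the
# agreed terms hold (declared purpose, query budget, expiry date).
# All policy fields and names are hypothetical.

from datetime import datetime, timezone

POLICY = {
    "partner-b": {
        "purpose": "demand_forecasting",
        "max_queries": 3,
        "expires": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
}
USAGE: dict[str, int] = {}

# Company A's data: it stays on company A's infrastructure.
INVENTORY = {"sku-42": 120}

def query(requester: str, purpose: str, sku: str, now: datetime):
    """Answer a query under the agreed terms; otherwise refuse."""
    terms = POLICY.get(requester)
    if terms is None or purpose != terms["purpose"] or now >= terms["expires"]:
        return None                       # wrong party, purpose, or expired
    if USAGE.get(requester, 0) >= terms["max_queries"]:
        return None                       # query budget exhausted
    USAGE[requester] = USAGE.get(requester, 0) + 1
    return INVENTORY.get(sku)             # an answer, never a copy of the dataset

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(query("partner-b", "demand_forecasting", "sku-42", now))  # 120
print(query("partner-b", "marketing", "sku-42", now))           # None: wrong purpose
```

When the terms expire, refusal is automatic – nobody has to remember to revoke a permission, which is the difference between a contract clause and an enforcement layer.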
The EU has been building data spaces at sector level for several years. The European Health Data Space creates a framework where hospitals across member states can share patient data for research purposes without that data ever centralizing – federated AI trains models across distributed hospital datasets, the model learns from the combined data, and the sensitive records never leave the institution that holds them. The Netherlands alone had 47 active data-sharing initiatives as of a 2022 study, with iSHARE – a Dutch trust framework that started in logistics and is expanding into other sectors – providing the legal and technical scaffolding that lets organizations join without building the governance layer from scratch. Manufacturing-X is creating the same infrastructure for European industrial supply chains. The Data Governance Act and the Data Act are providing the legislative foundation that makes participation legally coherent rather than contractually improvised.
What this means for the new internet is that the federated model – data stays at the source, access is governed, value flows without ownership transferring – becomes the architecture for how organizations interact with each other’s agents. Your agent does not need to receive a copy of a supplier’s inventory data to help you make a procurement decision. It simply queries the supplier’s data space connector, gets the answer it needs under the agreed terms, and the supplier’s data never moved. Scale this across an economy and you have a fundamentally different data infrastructure than the one that currently exists, one where the competitive advantage embedded in proprietary data can generate value through sharing without the sharing destroying the advantage.
The hyperscaler model – give us your data and we will do useful things with it, trust us – now has a structural alternative, and it is being built in Europe with regulatory backing and a level of institutional seriousness that is easy to underestimate from the outside.
Now, let’s bring everything together. Let’s teleport ourselves a couple of years into the future – and if everything is okay by then, we’ll be waking up in the following scenario...

2027. A morning in the browser you never leave
It is worth spending time on what this actually feels like when it arrives, because the individual components are easy to describe but the compound experience is harder to convey, and the compound experience is the thing that will actually change how we spend our conscious hours.
It is 2027.
I still wake up at five, which has not changed, because my brain made a decision about sleep that it has no intention of revisiting. The AI has been running for several hours, and not just in the background making small decisions – it has been actively managing my morning context: pulling the research that appeared while I slept, flagging the papers and industry developments that fall within the areas of attention it has learned I care about, drafting summaries of the things it assessed as high priority, and preparing a briefing that will be waiting when I indicate I’m ready for it.

I indicate this not by opening an app but by picking up my phone, which the AI interprets as a signal that I am awake and functional enough to receive information, and my browser – which at this point is less a browser than a continuously rendered personal environment – assembles the morning view. It’s a generated interface built from what the AI knows about what I need at this specific moment on this specific morning, pulling from my professional context, the ongoing projects I’m working on, my scheduled commitments, and the external developments that intersect with my work.
The papers my agents found overnight appear as a generated reading interface. The ones the AI assessed as high priority are already open, with key sections highlighted and a brief synthesis of why each matters to my current work. The ones flagged as potentially relevant but lower priority are collapsed into a summary. I skim it all and save the ones I want to return to, and the ones I dismiss teach the system something about where my attention threshold sits.
Teams still shows who worked late.
The AI has already cross-referenced this with the project timelines and flagged the ones where the late activity suggests a problem rather than just dedication, and it has drafted a message to the relevant project lead asking if support is needed. I now only have to review the draft, adjust one sentence, and send it. The whole interaction takes forty seconds.
My news briefing is not a news site anymore.
It is a generated document that pulls from seventeen sources the AI has learned I trust, filtered by the topics it knows are relevant to the work I do and my interests, with conflicting accounts of the same story surfaced side by side rather than hidden in favor of the most engaging version. The AI does not pick a side but shows the disagreement and lets me form the judgment. This is either what good journalism always wanted to be or a complete replacement of it, depending on where you sit in the media ecosystem.
Work begins without a context switch, because the context was already there.
The professional AI environment – the one my employer provisioned, or the one I negotiated to bring from home, because by 2027 the BYOAI policy is as standard as the BYOD policy was in 2015 – generates the workspace view based on what is actually in front of me today. The project dashboard is not a fixed screen someone designed last year; it is generated from the live state of the work, showing the things that are blocked, the decisions that are waiting, and everything else required to get the work done, all in a layout optimized for the kind of thinking the current moment requires.
The agent also handles the coordination layer.
It contacts the supplier’s agent to chase the delayed component and receives a structured response with a revised delivery estimate and a partial credit offer, then it evaluates the offer against the procurement policy, accepts it within its authority, and logs the transaction with full provenance for the audit trail. I receive a notification with a summary. I did not attend that meeting; the meeting did not happen at all, because two agents reached a resolution faster than two humans could have scheduled a call.
Outside work, the same environment continues without a visible boundary, because the AI does not know the difference between your professional context and your personal one unless you tell it, and increasingly you have stopped telling it because the integration is more useful than the separation.
Your music is not a streaming service you visit but a continuous generated curation that pulls from Spotify’s catalogue through a structured service endpoint, shaped by history and your current mood as the AI infers it from your activity patterns, and it adapts in real time as your context changes. Your evening reading is not a list of bookmarks. It is a generated digest assembled from the sources that matter to you.
Commerce happens almost entirely through agent negotiation.
Your agent knows your preferences and your budget parameters, and it handles procurement through structured interactions with service agents that represent the suppliers.
You set the policy. The agents execute within it. You review the outcomes and adjust the policy when the outcomes are not what you wanted. The interface for any given purchase is generated at the moment you want to review it and dissolves when you are done, leaving only the transaction record.
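The "you set the policy, the agents execute within it" loop can be sketched in a few lines. Everything here is invented for illustration – the vendors, prices, and policy fields – and the example deliberately echoes the Lisbon flight request from earlier: the agent buys the cheapest offer the policy allows and ignores the rest.

```python
# Hypothetical purchase policy: the human sets the boundaries once,
# the agent checks every offer against them before transacting.

POLICY = {"max_price_eur": 200, "blocked_vendors": {"spirit"}}  # I have dignity

def within_policy(offer: dict) -> bool:
    return (offer["price_eur"] <= POLICY["max_price_eur"]
            and offer["vendor"] not in POLICY["blocked_vendors"])

# Structured offers gathered from supplier service agents (invented data).
offers = [
    {"vendor": "spirit", "price_eur": 89},   # cheapest, but blocked
    {"vendor": "tap", "price_eur": 179},
    {"vendor": "klm", "price_eur": 240},     # over budget
]

# The agent executes within the policy: cheapest allowed offer wins.
allowed = [o for o in offers if within_policy(o)]
best = min(allowed, key=lambda o: o["price_eur"])
print(best["vendor"])  # tap
```

If the outcome is not what you wanted, you do not click differently next time – you adjust the policy, and the agent's next hundred purchases inherit the correction.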
The people who opted for the premium tier of this environment – the ones paying for an AI that works exclusively for them, with no sponsored recommendations embedded in the agent’s preference layer and no behavioral data flowing to third parties – are functionally living in a different internet than the people on the free tier, and the difference is not going to be aesthetic. There will be a huge divide between the people who can afford quality premium agentic AI and the free-tier users whose agents carry preferences they never set. The premium user’s agent has only the preferences its user set.
The internet is already bifurcating by the willingness and ability to pay for an AI that is genuinely yours.
And somewhere in this environment, the last webpage anyone ever really loaded has already loaded, and nobody noticed the moment it happened, because the replacement was smoother than anyone expected and more total than anyone was willing to admit in advance.

Everything I just described is already happening
The zombie internet was always going to end this way, metabolized by something more efficient that retained the same underlying power structures while making them considerably harder to see.
The web that was built for human fingers is being rebuilt for agents.

The services are being re-exposed in machine-readable formats and the interfaces are being generated rather than designed and the underlying data is being federated rather than hoarded, at least in the parts of the world that had the regulatory infrastructure to make federation possible.
And somewhere in the middle of all of this, your personal AI is managing your morning, your work, your purchases, and your entertainment, knowing more about you than anyone you have ever met in person, and working either for you or for the company that gave it to you for free, and the difference between those two things is the most important consumer choice of the next decade.
I build these systems for a living. I teach the people who will build more of them. I remain convinced it is the correct direction, and I remain personally suspicious of every single incentive structure involved in getting us there.
I think Slob would understand.
Signing off,
Marco
I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.
👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google and the AI engines appreciate your likes by making my articles available to more readers.
To keep you doomscrolling 👇
- I may have found a solution to Vibe Coding’s technical debt problem | LinkedIn
- Shadow AI isn’t rebellion it’s office survival | LinkedIn
- Macrohard is Musk’s middle finger to Microsoft | LinkedIn
- We are in the midst of an incremental apocalypse and only the 1% are prepared | LinkedIn
- Did ChatGPT actually steal your job? (Including job risk-assessment tool) | LinkedIn
- Living in the post-human economy | LinkedIn
- Vibe Coding is gonna spawn the most braindead software generation ever | LinkedIn
- Workslop is the new office plague | LinkedIn
- The funniest comments ever left in source code | LinkedIn
- The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn
- OpenAI finally confesses their bots are chronic liars | LinkedIn
- Money, the final frontier. . . | LinkedIn
- Kickstarter exposed. The ultimate honeytrap for investors | LinkedIn
- China’s AI+ plan and the Manus middle finger | LinkedIn
- Autopsy of an algorithm – Is building an audience still worth it these days? | LinkedIn
- AI is screwing with your résumé and you’re letting it happen | LinkedIn
- Oops! I did it again. . . | LinkedIn
- Palantir turns your life into a spreadsheet | LinkedIn
- Another nail in the coffin – AI’s not ‘reasoning’ at all | LinkedIn
- How AI went from miracle to bubble. An interactive timeline | LinkedIn
- The day vibe coding jobs got real and half the dev world cried into their keyboards | LinkedIn
- The Buy Now – Cry Later company learns about karma | LinkedIn
