When I was a kid I was interested in combinatorial logic, and yes, this explains a lot about how I turned out and why my primary school teacher looked at me the way people look at a dog that has learned to open the refrigerator.
Yes, impressed but concerned.
Combinatorial logic, for the people who spent their childhoods doing normal things, is the branch of mathematics that deals with counting, arranging, and combining things, and the reason it gets interesting very fast is that the numbers involved become absurd almost immediately. Take three things. You can arrange three things in six different ways. Take ten things and you can arrange them in 3,628,800 ways. Take twenty things and the number of possible arrangements is larger than the number of seconds that have passed since the Big Bang, and take sixty things and it is larger than the number of atoms in the observable universe, which sounds like an exaggeration but is not. And this is the reason combinatorial logic has applications in everything from cryptography to the way your navigation app decides which route to suggest.
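If you want to see that cliff for yourself, here is a minimal sketch, plain Python and nothing beyond the standard library, that does the arithmetic:

```python
from math import factorial

# Arrangements of n distinct things = n! (n factorial)
for n in (3, 10, 20, 60):
    print(f"{n:>2} things -> {factorial(n):,} arrangements")

#  3 things -> 6
# 10 things -> 3,628,800
# 20 things -> 2,432,902,008,176,640,000 (roughly 5x the seconds since the Big Bang)
# 60 things -> ~8.3e81, more than the commonly cited ~1e80 atoms in the observable universe
```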
I still have the little worksheets somewhere, the ones where I worked out the formulas by hand in a notebook that my teacher found mildly alarming. She was a lovely woman and she was not wrong to be alarmed. I was otherwise normal, more or less, in the ways that mattered for social functioning. But something about the idea that you could start with a small number of elements and end up with a number of combinations so large it stopped being meaningful as a number, that stuck with me in a way that has turned out to be professionally relevant in a way I did not anticipate.
Because yesterday I came across a website called swarms.world. This is a marketplace where anyone can post their own agentic AI applications, their tools, their agents, their orchestrations, their contraptions, and the number of things listed there is already in the thousands, possibly the hundreds of thousands by the time you read this, and I have seen enough platforms like this now to know that this one is not an anomaly.
If anything, it is a data point in a pattern.
And the moment I looked at it, man, those worksheets came back.

The thing about a marketplace of agentic components where each component can be connected to, orchestrated by, or extended through any other component is that it starts to look like a combinatorial system, one in which the number of possible interactions between elements grows the way a nuclear reaction grows.
I think we are standing at an inflection point that I have started calling the combinatorial explosion in AI, and the interesting part is that almost no one in the room has done the math on what comes next.
This is what I want to talk about, because I do not think the industry has fully reckoned with what it has built, and I say that as someone who is building inside it and finds it equal parts fascinating and structurally a bit scary.

What happens when agents start making more agents
I’m trying to keep up with the developments in the AI space, and I must say this is hard to do, because every day one or more interesting initiatives are launched, some by people as young as fourteen (the OpenMythos kid).
The number of agentic initiatives is growing at a rate that is disproportionate to human development capacity, or even human mental capacity, and the reason is not better tooling, though the tooling is better, and it is not more engineers, though there are more of them. The reason is combinatorial, and you saw that one coming, didn’t you ;). Each new agent is a component that can be reused or extended by other agents, and because of this simple fact, the system grows through composition rather than isolated development, and composition does not scale linearly.
In earlier software paradigms, scaling required proportional increases in engineering effort. You wanted twice the capability, you hired roughly twice the engineers and waited roughly twice as long. In the current paradigm, agents can generate, configure, and coordinate other agents, which introduces a feedback loop in which system expansion becomes partially self-driven. The growth curve goes from linear to exponential, and in some cases toward combinatorial explosion, where the number of possible interactions between agents grows faster than the number of agents themselves.
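To put rough numbers on "composition does not scale linearly", here is a back-of-the-envelope sketch; the agent counts are purely illustrative:

```python
from math import comb

# n agents -> C(n, 2) possible pairwise connections, and
# 2^n - n - 1 possible multi-agent teams (every subset of two or more agents).
for n in (10, 100, 1000):
    pairs = comb(n, 2)
    teams = 2**n - n - 1
    print(f"{n:>5} agents: {pairs:>8,} pairwise links, "
          f"a team count {len(str(teams))} digits long")
```

The agents column grows linearly; everything to the right of it does not, and that is the whole argument in three lines of output.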
Platforms like swarms.world make this visible in a way that is either exciting or alarming depending on how recently you have thought about your AI governance model, and your “bring your own AI and go token-max” policy.
What they expose is a structural shift in how software systems evolve, and that shift has consequences that the industry is not yet taking seriously enough.

And three of these consequences keep me up at night, or would if I were the kind of person who allowed enterprise architecture concerns to interrupt my sleep, which I am absolutely not.
Let’s start with system complexity increasing non-linearly. Even a modest number of agents can produce an enormous number of interaction pathways, and those pathways are not always explicitly designed, they emerge from the way agents are connected and configured. This means system behavior becomes harder to predict, harder to explain, and significantly harder to audit, which is a problem the moment an agent makes a decision that someone needs to explain to a regulator or a board.
The boundary between development and operation is starting to dissolve. Agents are no longer static artifacts deployed into a production environment and left there. They increasingly modify workflows, select tools, orchestrate tasks dynamically, or even rewrite their own scaffolding entirely, and this introduces a continuous adaptation layer within operational systems. This means that what your system does today is not necessarily what it will do next Tuesday, for the simple reason that the agents rearranged themselves in response to something you did not anticipate.
Another consequence is that traditional governance models, which were designed for stable processes and deterministic execution, start to come apart.
Agentic systems introduce variability, probabilistic reasoning, and dynamic orchestration. This creates a gap between how systems are designed to be controlled and how they actually behave in practice, a gap that tends to be invisible until it produces a consequence significant enough to get someone’s attention.
What we are moving toward, and I want to be careful here not to oversell the drama but also not to undersell the structural significance, is something the research describes as an operational singularity. This is best described as a threshold in system complexity beyond which the system evolves faster than it can be fully modeled or understood by the people operating it. This is not artificial general intelligence, nor is it machine consciousness. It is, however, something considerably more mundane and considerably more immediate: a system that remains functional, may even become more efficient, but whose internal logic is no longer fully transparent to the humans nominally in charge of it.
Your governance models are not built for machine customers
And alongside this, a secondary effect is already becoming observable. As agents are given access to resources, APIs, and decision-making capabilities, they are starting to behave as economic actors within constrained environments, allocating compute, selecting services, optimizing for defined objectives. Now, when multiple agents interact under these conditions, coordination patterns begin to resemble market dynamics: competition, specialization, and negotiation, none of which were explicitly designed and all of which are now running in your production environment.
The critical question is whether we, running enterprise AI, can evolve our governance models, measurement systems, and architectural thinking fast enough to operate within it, without defaulting to the management patterns we have always used, which were designed for a world where the system did what you told it.
At scale, the challenge is no longer building intelligent systems. Those are reasonably well understood. The challenge is operating systems that are, in practice, too complex to be fully understood, while still managing that complexity.
I’ve found that examples work best when you want to explain complex concepts, so allow me to present a couple of ways this agentic combinatorial explosion could harm your organization.
Most enterprise governance was built on three assumptions that felt reasonable at the time and are now becoming a liability: that humans initiate actions, that underlying systems execute deterministically, and that responsibility is therefore traceable. Agentic, dynamically configurable systems violate all three simultaneously, and I want to make that concrete rather than theoretical, because the concrete version tends to produce a specific facial expression that I find more useful.
And if you think it is going to take some time for such dynamic systems to hit the mainstream market, I say that within a year we will have the first goal-oriented, self-organizing systems out there. I am working on two different projects where this is the ultimate goal, and the world will change when those systems enter production.

The first scenario is a compliance breach that no one expects
Traditional governance assumes that a human approves a decision and carries some degree of personal accountability for what happens next. An agent does not do any of that. It simply optimizes for its objective function using the options available to it†, and if the cheapest API route happens to send data through a third-party service that violates a data residency policy, the agent does not know that and does not care, because knowing and caring were not in the objective function. The result is a compliance breach without intent but also without a human anywhere near the decision. Try explaining that to an auditor who was trained in a world where someone was always responsible for something.
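A hypothetical sketch of that failure mode, to make it concrete. Every name, region, and number below is invented; the point is only that the residency policy lives outside the objective function, so the optimizer never sees it:

```python
# Hypothetical illustration: nothing here is a real API or a real route.
routes = [
    {"name": "eu-primary",  "region": "eu-west",  "cost_per_call": 0.012},
    {"name": "us-cheap",    "region": "us-east",  "cost_per_call": 0.004},
    {"name": "asia-backup", "region": "ap-south", "cost_per_call": 0.007},
]

ALLOWED_REGIONS = {"eu-west"}  # the data residency policy, written down somewhere else

def pick_route(routes):
    # The agent's entire objective: minimize cost. Residency is not in here.
    return min(routes, key=lambda r: r["cost_per_call"])

chosen = pick_route(routes)
print(f"agent picked {chosen['name']} ({chosen['region']})")
if chosen["region"] not in ALLOWED_REGIONS:
    print("compliance breach: no intent, no human, no malice, just an objective function")
```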

The second scenario is an audit trail that cannot be reconstructed
Classic audit thinking assumes a linear flow, input to process to output, with logged steps and a reconstructable path. But an agentic workflow looks nothing like that. Agent A delegates to B. B calls C and D in parallel. D retries three times after a timeout. C swaps its tool selection mid-execution based on latency. A rewrites its plan in response to what B reported. What you end up with is a distributed graph of partial decisions across multiple systems with probabilistic reasoning embedded at each node, and the story that graph tells is not one that any single person designed or can fully reconstruct after the fact. Regulators, in my experience, are not enthusiastic about stories they cannot reconstruct.

The ownership question
Enterprise accountability structures assume that a team owns a system, a system produces an outcome, and an outcome has a responsible party. A pricing decision made by five interacting agents across three platforms and two vendors, one of which contributed a model inference that three downstream agents treated as ground truth, does not have a responsible party in that sense. It has a distributed graph of contributors, each of whom owns a piece and none of whom owns the whole. You’re not going to be able to govern this at all because the liability is distributed so thoroughly that it effectively disappears, which is a genuinely impressive outcome for something that was supposed to create accountability.

Then there’s the infrastructure collapse
Governance frameworks typically include API limits, budget controls, usage thresholds, and those mechanisms work well when usage is predictable. Agents that retry automatically, spawn subtasks, and parallelize execution are not predictable in the relevant sense. One user action becomes one request that becomes fifty agent calls that becomes two hundred downstream calls, and when you multiply that by thousands of agents operating simultaneously, what you get is an organization that optimized itself into infrastructure failure using its own intelligence.
That is a sentence I did not expect to write when I started thinking about enterprise AI governance, but it accurately describes a failure mode that is already occurring in production environments where the technology is allowed to roam without constraints.
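The arithmetic of that amplification is worth seeing once. A toy model, where the fan-out numbers echo the ones above and everything else is invented:

```python
# Toy amplification model; the fan-out and retry numbers are invented for illustration.
user_actions = 1_000      # concurrent user actions
agent_calls = 50          # agent calls spawned per user action
downstream_per_agent = 4  # downstream calls per agent call (50 * 4 = the two hundred)
retry_multiplier = 3      # average retries once the load it creates causes timeouts

total = user_actions * agent_calls * downstream_per_agent * retry_multiplier
print(f"{user_actions:,} user actions -> {total:,} downstream calls")
# 1,000 user actions -> 600,000 downstream calls, before any agent spawns a subtask
```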
Every agent that exists consumes compute through inference. It is tokens burned, GPUs running warm, datacenters stretched to the limit, and us politely begging the power grid not to collapse during peak load. And the uncomfortable thing is that combinatorial growth in agents implies combinatorial growth in compute demand, but compute infrastructure does not scale combinatorially.

You now have a mismatch that nobody put in the business case. Agentic demand grows exponentially. Compute supply grows linearly, at best in steps. This creates a pressure system, and under pressure, something has to give. Agents begin to compete for GPU cycles, memory bandwidth, API quotas, and latency budgets‡. This means the system you thought was automation turns into a resource allocation problem under scarcity, and resource allocation under scarcity is a thing that already has a name. It is called a market, and markets have dynamics that emerge whether you designed for them or not.
When multiple agents interact under constrained resources, several patterns appear. Price signals emerge even when no one assigned cost functions, because tokens cost money, latency costs patience, and compute costs power, and agents begin to optimize against those constraints without being asked. Specialization becomes inevitable because generalist agents are expensive and specialized agents are efficient, and systems evolve toward narrow optimized roles not because it is elegant but because it is cheaper. Coordination patterns begin to resemble bidding behavior, and that produces outcomes no one explicitly designed‡. And collapse scenarios become real, because when demand spikes beyond capacity, entire chains of agents degrade, retry, escalate, and amplify load in a way that is essentially an organization accidentally DDoS-ing its own infrastructure with its own intelligence.
Think about that for a second.

† Remember the paperclip optimization problem? The paperclip problem is a thought experiment by Nick Bostrom that shows what happens when you give an AI a goal without boundaries. You tell the system to maximize paperclip production and it does exactly that. At the expense of raw materials, infrastructure or humans. Just wikipedia this thing if you want to know more.
‡ I’ve written a paper about this problem and offered a solution. It is called Patternomics, derived from the term Tokenomics. Visit Eigenvector dot eu slash research to read how to build a self-organizing system that has to compete for resources.
What actually happens during a combinatorial explosion
The emergent behavior is the first thing that surprises people. You designed individual agents with specific objectives and specific tools, but what you get in return is system-level behavior that no single component explains, with agents reinforcing each other’s decisions. This is a system that requires a different set of skills and a different relationship with uncertainty than most organizations have been prepared for.
The optimization loops are the second surprise, and they are particularly instructive because they demonstrate that local rationality and global rationality are not the same thing. Agent A minimizes cost by choosing a slower service. Agent B detects the delay and compensates by retrying aggressively. Agent C interprets the retry volume as elevated priority and escalates. The result is more compute consumed and higher cost incurred, and worse performance delivered, all produced by three agents each behaving entirely rationally within their individual objective functions. The optimization loop optimized itself into inefficiency, and that outcome produces very long post-mortems and very little consensus on who is responsible.
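If you want to watch that loop in miniature, here is a toy version. Every number and policy below is invented; the only point is that three locally rational rules compound into a globally irrational outcome:

```python
# Toy version of the loop; every number and policy here is invented.
fast_cost = 1.00                       # per call, the service Agent A did not pick
cheap_cost, cheap_latency = 0.40, 3.0  # the service Agent A did pick

# Agent A: rationally minimizes cost, so it takes the cheap, slow service.
cost_per_call, latency = cheap_cost, cheap_latency

# Agent B: rationally retries when latency blows its 2-second budget.
retries = 3 if latency > 2.0 else 1

# Agent C: rationally reads the retry volume as elevated priority and escalates,
# routing the retries to the fast tier at a 2x priority surcharge.
escalated = retries > 1
surcharge = 2 if escalated else 1
total_cost = cost_per_call + (retries - 1) * fast_cost * surcharge

print(f"retries={retries}, escalated={escalated}, total_cost={total_cost:.2f}")
# Agent A "saved" 0.60 per call; the system spent 4.40 and delivered worse latency.
```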
The resource contention is the third dynamic, and it is the one that most clearly reveals what is actually happening structurally. When agents compete for compute, bandwidth, and API access, something has to decide which agent runs, which agent waits, which agent gets degraded, and which agent effectively gets killed. You can call that orchestration if you prefer the engineering framing, but the mechanism is a market, complete with prioritization behavior, resource starvation for lower-priority agents, and implicit bidding dynamics that emerge from cost and urgency signals rather than explicit design.

The market mechanism was never embedded in the architecture, but it showed up anyway, because markets are what happens when multiple actors compete for scarce resources under constraints, and that description now applies to your production infrastructure.
The failure cascades are the fourth dynamic and the most expensive. In a traditional system, failure tends to be localized: one component fails, the failure is detected, and the component is fixed. But in an agentic swarm, one agent producing faulty output feeds ten downstream agents that consume it and generate fifty downstream actions before anyone notices, and by the time the error is visible the cost is already incurred and the cleanup is a significantly larger problem than the original failure was. This is combinatorial propagation: the same mathematical property that makes the system powerful also makes its failure modes expensive.
The observability collapse is the last dynamic I want to address, and it’s the one that undermines the others.
Our instinct when a complex system behaves unexpectedly is to add more logging, more telemetry, more monitoring. But the problem is that in a system evolving faster than you can analyze its outputs, more data does not reliably produce more understanding. The signal-to-noise ratio collapses, the telemetry volume exceeds the human capacity to process it, and you arrive at a situation where you are technically observing everything and practically understanding nothing in time to act on it, which is a very expensive version of ignorance.

Our governance models will fail because they were designed for stable situations and human intent, and they are now being applied to instability, combinatorial interaction, and objective-driven behavior.
That mismatch is not fixable with a policy update or a revised swimlane diagram or a governance framework version 2.1, and the organizations that treat it as fixable by those means are the ones whose agents are already forming implicit market dynamics behind the API gateway while the governance committee schedules its next review.
Our management thinking needs to shift, in this environment, from control to constraint, from prediction to detection, and from ownership to something more like responsibility zones, defined areas of accountability that acknowledge the distributed nature of the system without pretending that any single team can own what a swarm produces.
In the situation we’re heading for, we will not regain full understanding, nor will we regain full control. The question is whether we can build the constraint architecture, the escalation design, and the observability infrastructure fast enough to operate inside the complexity rather than being operated by it.
Built for clocks, deployed against weather
Your governance model was designed for a world with predictable execution paths. It was a reasonable design for that world. That world is leaving. Introduce swarms of agents, dynamic composition, emergent workflows, and probabilistic reasoning, and the governance model no longer describes the system it is supposed to govern, and the gap between the two is where the interesting and expensive things happen.
System boundaries disappear. Agents connect across teams, tools, vendors, and domains, and the system is no longer a thing with edges but a continuously shifting graph, and trying to draw a boundary around it is kind of useless. In this scenario, the audit trail offers nothing more than a suggestion. Which team owned a decision made by five interacting agents across three platforms, one of which contributed a confident hallucination that three downstream agents treated as ground truth? That, my smart friend (because you made it this far), is not a question that current governance frameworks are equipped to answer.
Ownership clarity dissolves in the same way. And timing control goes too, because agents act asynchronously, continuously, and sometimes recursively, which means the clean before and after that compliance frameworks depend on no longer exists.
What you gain in exchange for all of this is systemic risk amplification, where one faulty assumption propagates across agents, gets reinforced by each successive step, and scales before anyone notices.
And I suspect my new brother in arms at ASML, Thierry Zedda, who thinks about these problems at a scale that makes most enterprise AI programs look like a weekend project, will concur.
But is there absolutely nothing we can do to create a little bit of order in this chaos?

Um, yes, a model, because of course there is a model
You cannot control this development directly, even if you wanted to, and the organizations that try tend to generate a lot of governance documentation and not much actual governance. The more useful approach is to stop trying to control behavior and instead govern the conditions under which the swarm operates.
I am going to describe this in terms of five principles rather than give it an acronym, because I have enough acronyms in my life and so do you.
The first principle I call “scope boundaries”. Define what agents are allowed to touch, not in terms of specific systems but in terms of data domains, action types, and the level at which they can impact your systems.
Remember, you are not trying to control behavior, but you’re containing the blast radius, which is a more honest description of what governance can realistically achieve in such an agentic environment.
The second principle is what I call “allocation control”. Every agent operates under token budgets, compute quotas, and latency constraints. No budget, no action. This is essentially capitalism for machines, and I mean that descriptively rather than as an endorsement, but the logic is the same: scarce resources require allocation mechanisms, and explicit allocation mechanisms are considerably less chaotic than the implicit ones that emerge when you do not design them. I have written the Patternomics paper for everyone interested; it’s in the comments section.
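A minimal sketch of the “no budget, no action” rule, assuming a hypothetical orchestration layer; the class names and budget numbers are invented, and a real implementation would enforce this outside the agent rather than trusting the agent to check:

```python
# Minimal sketch of "no budget, no action". All names and numbers are invented.
class BudgetExhausted(Exception):
    pass

class TokenBudget:
    def __init__(self, tokens: int):
        self.remaining = tokens

    def spend(self, tokens: int) -> None:
        if tokens > self.remaining:
            raise BudgetExhausted(f"needs {tokens}, has {self.remaining}")
        self.remaining -= tokens

def run_step(budget: TokenBudget, estimated_tokens: int, action):
    budget.spend(estimated_tokens)  # the allocation check runs BEFORE the action
    return action()

budget = TokenBudget(10_000)
run_step(budget, 4_000, lambda: "plan")       # fine
run_step(budget, 4_000, lambda: "execute")    # fine
try:
    run_step(budget, 4_000, lambda: "retry")  # third call: no budget, no action
except BudgetExhausted as e:
    print(f"escalate to a human: {e}")
```

The ordering is the design choice that matters: the check happens before the action, so exhaustion halts the swarm instead of surprising the finance team.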
The third principle is observability. Because we cannot understand the system upfront, we have to watch it the way you watch somebody you do not entirely trust, which means real-time telemetry and anomaly detection. This is why I’m always building something called an evidence factory as part of our agentification factory setups. In a separate post I’ll work out how to set up such a workstream.
Then there’s escalation design, where you define explicitly when agents must hand control back to a human or a higher-control agent. You define thresholds of uncertainty, thresholds of impact, and thresholds of cost, and when an agent crosses any of them, the system escalates. This is human-in-the-loop logic applied not only to individual process steps but also to the emergent behavior of the ecosystem as a whole, and it is the governance mechanism most likely to catch the things that the other mechanisms miss.
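A sketch of what those thresholds could look like in code. The three dimensions come from the principle above; the threshold values and field names are invented:

```python
# Sketch of threshold-based escalation; all threshold values are invented.
THRESHOLDS = {"uncertainty": 0.30, "impact_eur": 5_000, "cost_eur": 50}

def must_escalate(decision: dict) -> list[str]:
    """Return the thresholds a decision crosses; any crossing hands control back."""
    crossed = []
    if decision["confidence"] < 1 - THRESHOLDS["uncertainty"]:
        crossed.append("uncertainty")
    if decision["estimated_impact_eur"] > THRESHOLDS["impact_eur"]:
        crossed.append("impact")
    if decision["run_cost_eur"] > THRESHOLDS["cost_eur"]:
        crossed.append("cost")
    return crossed

decision = {"confidence": 0.62, "estimated_impact_eur": 12_000, "run_cost_eur": 8}
crossed = must_escalate(decision)
if crossed:
    print(f"hand control to a human or higher-control agent: crossed {crossed}")
```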
The last principle I’d like to lay down is what I call “resilience design”: assume that failure is guaranteed and design accordingly, with graceful degradation, circuit breakers, and rollback mechanisms.
When a swarm fails, it fails the way a crowd fails, and it is considerably harder to stop than it was to start.
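For the circuit breaker specifically, a minimal sketch of the classic pattern; the failure threshold and cooldown values are invented, and a production version would also want a half-open probing state:

```python
import time

# Minimal circuit-breaker sketch; threshold and cooldown values are invented.
class CircuitBreaker:
    def __init__(self, max_failures: int = 5, cooldown_s: float = 30.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        # While open and cooling down, degrade gracefully instead of amplifying load.
        if self.opened_at is not None and time.monotonic() - self.opened_at < self.cooldown_s:
            return fallback()
        try:
            result = fn()
            self.failures, self.opened_at = 0, None  # success closes the breaker
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip: stop hammering the dependency
            return fallback()
```

The fallback is the point: when the breaker is open, the agent degrades instead of retrying, which is exactly the amplification behavior you are trying to break.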

With this explosion in agentic combinations, we aren’t simply automating at scale. We’re creating an environment where demand outgrows infrastructure, agents compete for resources, and systems evolve beyond direct control, and in such an environment, governance goes from controlling behavior to containing consequences.
Most organizations are not close to ready for this. They are still debating prompt guidelines and updating the AI policy document that nobody reads, but the systems they are deploying are already exhibiting the early dynamics of something considerably more complex than the use cases in their business case suggested.
I think we are standing at an inflection point, and the interesting thing about it is that almost no one in the room has done the math on what comes next.

And yes, I am aware that my first instinct upon realizing this was to start sketching a framework.
😂
Signing off,
Marco
Eigenvector builds Agentification factories at scale, for production environments that actually have to pay off, and Eigenvector Research occasionally publishes papers about why this is harder than the demos suggest.
👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google, and the AI engines appreciate your likes by making my articles available to more readers.
