We asked AI to be predictable and it laughed at us

The promises we made to ourselves

Even though I had been professionally playing around with GPTs since 2019, it took a few years until my first production system built around an LLM shipped in early 2023, and I can still recall confidently telling my team it would behave like any other API. Same input, same output, clean handoff to the downstream parser, done. Bada-bing-bada-boom! I said this with the energy of a man who has never been wrong about anything and was about to be wrong about everything.

I remember the system went live on a Tuesday.

But by Thursday, the parser was choking on outputs that looked nothing like what we’d tested against. By Friday, a retry loop we’d copy-pasted from our deterministic microservices playbook had tripled our OpenAI bill while generating three different answers to the same question, none of which were the one we wanted. And the dashboards were green the whole time. Green. Latency normal, error rate zero, everything fine, but our building was on fire.

The assumption I’d made, the one that sank us, was so obvious I’d never bothered to examine it. Even though I had graduated from a renowned US college famous for its data science curriculum, I had wrongly assumed that the AI would behave like a library. Call the function, get the return value, move on. Software engineers have been building on that assumption since before I was alive, and it’s such a good assumption, so battle-tested, so load-bearing, that most of us forgot it was an assumption at all.

Well. Turns out the AI didn’t get the memo.



I asked AI to be predictable and it laughed at me

Confession time. Look, I’ve shipped a lot of bad code in my career. I’ve deployed microservices that fell over during demos, I’ve written regex that consumed entire CPU cores like a hungry goat, and I once caused a staging database migration to run against production because I misread an environment variable at 11pm on a Friday.

All of those failures had one thing in common: I could explain every one of them.

I could reproduce the error, read the stack trace, find the offending line, fix it, ship a hotfix and end the day by drinking something stronger than coffee.

Then I started building with LLMs in regulated environments and discovered a category of failure I had no vocabulary for. The system worked, but the output was wrong, and the funny thing was that the audit trail said everything was fine. And I could not, under any conditions, make it break the same way twice.

This, my friend, is the defining moment that separates software engineers who’ve only read about Transformer-based AI from those who’ve actually tried to get a compliance officer to sign off on an autonomous agent touching financial data. The compliance officer asks a reasonable question: “If the AI approves this transaction incorrectly, can you show me exactly why it did that?”

You pause.

And you smile the smile of a person who knows they are about to have a very long afternoon.

And whatever you do or say, you cannot explain it. The model sampled toward an answer. The internal inference path is not something you can replay. You can validate the behavior but you cannot reconstruct cognition, and no auditor on earth is signing a SOX declaration based on a gut feeling and a dashboard screenshot.


We miss determinism like a lost friend

Software engineering spent sixty years building something quite extraordinary. Not the glamorous stuff, not the machine learning and certainly not the neural networks. I am of course talking about “The Boring Stuff™”. The assumption that f(x) = y every single time, forever and ever, on every machine, in every timezone, until the heat death of the universe. That one boring assumption is the reason we have unit tests, CI pipelines, reproducible bugs, meaningful stack traces, and the ability to say with a straight face to a room full of executives that the system behaves as specified.

Determinism gave us debuggability.

A broken deterministic system is a cooperative patient. It sits still on the operating table while you examine it and when you hand a failing test case to a colleague, they run it, it breaks identically and now there are two of you working the problem. The feedback loop is tight because the failure is stable.

Determinism gave us testability.

The entire intellectual edifice of test-driven development, from Kent Beck’s original XP work to the sprawling CI infrastructure every modern engineering team runs, assumes the system produces identical output for identical input. Remove that assumption and tests do not become harder, but they do turn into a philosophically different category of thing. You’re no longer verifying behavior but sampling it and hoping the sample is representative.

And determinism gave us composability.

You could wire deterministic components together and reason about the whole by reasoning about the parts. Isolation, mocking, integration testing – all of it rests on the bedrock assumption that the thing under test does not change its mind between test runs.

And then, the AI walked into this carefully constructed world and started rearranging the furniture.

Dang!


The API tricked us all into false comfort

Here is the seduction I’m talking about. You hook up a large language model through a REST endpoint. There’s a base URL, a messages array, a content field in the response. The request-response cycle looks exactly like every external service you’ve ever integrated. You copy your existing retry middleware. You set a timeout. You write a quick smoke test that checks the response is non-empty and valid JSON. You push to main. You feel professional.

That familiarity, however, is the trap.

Underneath that perfectly normal HTTP interface is not a function but a probability distribution. Temperature, top-p, top-k – these are not simple configuration parameters but dials controlling how much randomness the model injects into its sampling process during inference. Set temperature to zero and you approach determinism without quite reaching it, because floating-point arithmetic on GPUs introduces variability that depends on hardware scheduling and batch size. Same prompt, different day, potentially different output. Same prompt, model silently updated by the provider, definitely different output.
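To make those dials concrete, here is a toy, from-scratch sketch of temperature scaling and nucleus (top-p) sampling over a handful of tokens. It is an illustration of the mechanism only, not any provider’s actual implementation:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_p=1.0, rng=None):
    """Toy sketch of temperature + top-p (nucleus) sampling."""
    rng = rng or random.Random()
    if temperature <= 1e-6:
        # Temperature ~ 0: greedy argmax. Even this only *approaches*
        # determinism on real GPUs, where float reduction order varies.
        return max(logits, key=logits.get)
    # Temperature scaling: >1 flattens the distribution, <1 sharpens it.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())
    probs = {tok: math.exp(l - m) for tok, l in scaled.items()}
    total = sum(probs.values())
    probs = {tok: p / total for tok, p in probs.items()}
    # Top-p filtering: keep the smallest set of tokens whose cumulative
    # probability reaches top_p, then renormalize and draw.
    kept, cum = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        cum += p
        if cum >= top_p:
            break
    total = sum(kept.values())
    r, acc = rng.random(), 0.0
    for tok, p in kept.items():
        acc += p / total
        if r <= acc:
            return tok
    return tok
```

At temperature zero the greedy branch always picks the argmax; at any positive temperature the return value is a draw, which is exactly why two identical requests can disagree.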

And this last one is the one that gets AI dev teams.

OpenAI spent months shipping silent model updates before community pressure forced them to introduce version-dated endpoints. The model behind a given endpoint is not a pinned dependency but rather a living thing that evolves on someone else’s schedule, for reasons that cannot be expressed as a changelog, because there is no diff for a neural network. Your tests may pass today, but the model updates tonight, and tomorrow your tests fail for no reason anyone can articulate, and whoever is on call is going to have a miserable morning.


The retry is now a comedy bit

In deterministic systems, a retry is the most boring operation in software engineering. Request failed due to transient network issue. Wait, retry. The computation that runs the second time is identical to what would have run the first time if the friggin’ network had cooperated.

But with AI, a retry is a new sample from the distribution.

Not a repetition – mind you – I’m talking about a fresh draw. The output can be structurally different, semantically reversed, or confidently wrong in a completely novel way. And the thing that should make you, as an AI architect, lose sleep is that the retry appears to succeed. Initially, that is. You get HTTP 200. Your retry logic marks the attempt as resolved and your monitoring shows the retry pattern working as designed, but what the monitoring does not show is that the first output would have been a clean structured JSON object while the second output begins with “Certainly! Here’s what I think about your invoice:” and your downstream parser is now feeding garbage to whatever processes payments next.

The retry amplified the failure even though it appeared to succeed, and it cost extra money, because the provider charged you for both calls, all while your dashboards stayed green. The only signal that anything was wrong was a number moving slowly in the wrong direction in a business metric three days later, and tracing that back to the specific AI call that caused it is forensic archaeology rather than engineering.
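One mitigation the story implies: never let the retry middleware declare victory on HTTP 200 alone. A minimal sketch (the helper name and validation hook are hypothetical, not a real SDK function) that only accepts an attempt once the output survives structural validation:

```python
import json

def call_with_validation(call_model, validate, max_attempts=3):
    """Retry that treats each attempt as a fresh sample: accept only
    outputs that parse and pass a structural check, never bare 200s."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        raw = call_model()            # each call is a new draw, not a replay
        try:
            parsed = json.loads(raw)  # the check the HTTP-level retry skipped
            validate(parsed)          # domain-specific structural assertions
            return parsed
        except (json.JSONDecodeError, ValueError) as exc:
            last_error = exc          # log the raw output here for forensics
    raise RuntimeError(f"no valid output in {max_attempts} samples: {last_error}")
```

The design point is that validation failure, not transport failure, is what drives the retry decision.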


Observability grew up and got complicated

Traditional monitoring is built for loud failures: error-rate spikes, latency that suddenly blows up in your face, requests that 404. When that happens, the on-call engineer gets paged, looks at the dashboard, and sees the thing that is wrong. This whole model assumes that failures announce themselves through the infrastructure layer.

But AI failures are quieter.

Take this example. The system responds, latency is normal and within set boundaries, and the error rate is zero, but the output is subtly, confidently, fluently wrong. A large language model can return a perfect HTTP 200 containing a grammatically impeccable hallucination, and the infrastructure has no clue. The monitoring platform has no idea, the error rate stays at zero, the downstream system ingests contaminated data, and the compliance audit six months from now becomes a very uncomfortable meeting.

What this forces is a shift in what you actually watch.

Infrastructure metrics are necessary to monitor AI based systems, but they’re not sufficient. You need output-level observability, which means logging prompts and responses in a form you can analyze, tracking output distributions over time to detect when the shape of responses changes before any individual response fails a hard check, and building qualitative metrics that catch semantic drift before users start submitting complaints.

The signal for AI degradation is subtle, qualitative, and it creeps in slowly. Heck, it shows up in user feedback well before it shows up in error rates, and the teams that catch it early are the ones that built output observability from day one, even when the signals felt rough and imprecise, because a rough signal two weeks early is better than a precise alert two weeks late.
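A rough sketch of what day-one output observability can look like: a rolling window over parse success and response length, with a crude drift check. The class name and thresholds are hypothetical; real systems would track much richer distributions:

```python
from collections import deque

class OutputDriftMonitor:
    """Rolling output-level metrics: parse rate and length distribution."""
    def __init__(self, window=500):
        self.ok = deque(maxlen=window)       # 1 if output parsed/validated
        self.lengths = deque(maxlen=window)  # response sizes over time

    def record(self, response_text, parsed_ok):
        self.ok.append(1 if parsed_ok else 0)
        self.lengths.append(len(response_text))

    def parse_rate(self):
        return sum(self.ok) / len(self.ok) if self.ok else 1.0

    def mean_length(self):
        return sum(self.lengths) / len(self.lengths) if self.lengths else 0.0

    def drifted(self, baseline_rate=0.99, baseline_len=350.0, tol=0.5):
        # Crude check: the *shape* of responses changed even though every
        # HTTP status was 200 and the infra dashboards stayed green.
        return (self.parse_rate() < baseline_rate
                or abs(self.mean_length() - baseline_len) > tol * baseline_len)
```

Feed it every prompt/response pair and alert on `drifted()`; the point is watching the output distribution, not the transport layer.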


What guardrails actually are and what they are not

Sigh. Every vendor selling AI infrastructure uses the word guardrails. It is a word that has been stretched so thin I’m starting to see through it. A guardrail, in the way the industry currently uses the term, is almost always a surface-level check. It verifies that the output is valid JSON, filters for prohibited phrases, or checks that the response doesn’t contain a social security number pattern (Hi Odido!). These are useful, but they’re nowhere near sufficient for regulated environments.

The problem is that current guardrails are syntactic.

What? Syntactic means the structure and rules of how words or code are arranged, not what they mean. Guardrails check the form of the output but not its meaning. A guardrail can ensure the agent’s response is structurally valid but it cannot determine whether the action described in that structurally valid response is semantically coherent within the business domain.

Let me give you an example.

The agent says, for instance, “approve invoice INV-2024-9981”. The output is valid JSON. The request passes format validation. The policy check asks “does this user have permission to approve invoices?” The answer is yes, permission is granted, and the invoice is approved.

But nobody checked whether INV-2024-9981 is in a DISPUTED state. Nobody checked whether the invoice total matches the sum of its line items or if this vendor is on a sanctions list. The action was syntactically valid and policy-permitted and at the same time, it was also semantically nonsensical and potentially a SOX violation. The audit trail recorded a successful approval, but the compliance officer will find a violation in six months.

This is the gap that has been sitting in plain sight while the industry sprinted toward introducing agentic capability for process automation.

Guardrails check permission, but nothing checks whether the action makes sense in the first place.
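The invoice scenario above compresses into a few lines. The identifiers and permission tables are made up, but the gap survives: the syntactic guardrail and the policy check both pass while the semantic question goes unasked:

```python
# Hypothetical records; identifiers are illustrative, not real data.
invoices = {"INV-2024-9981": {"state": "DISPUTED", "total": 1800.0}}
user_permissions = {"alice": {"approve_invoice"}}

def syntactic_guardrail(action):
    # What today's guardrails check: shape and permission, not meaning.
    return (action.get("type") == "approve_invoice"
            and "invoice_id" in action
            and "approve_invoice" in user_permissions.get(action["user"], set()))

def semantic_check(action):
    # What nobody checked: does the action make sense in the domain?
    inv = invoices.get(action["invoice_id"])
    return inv is not None and inv["state"] != "DISPUTED"

action = {"type": "approve_invoice", "invoice_id": "INV-2024-9981", "user": "alice"}
print(syntactic_guardrail(action))  # True  -- valid shape, permitted user
print(semantic_check(action))       # False -- DISPUTED invoices can't be approved
```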


I had been dealing with this for quite some time, so at Eigenvector we decided to do something about it, and now we’re developing neuro-symbolic AI for agentic process-automation at scale.

And we call it . . .


The Ontological Compliance Gateway and the Mona Lisa problem

Let me explain what the Eigenvector OCG actually does, because the architecture paper we’re releasing next week is rigorous and precise, and most enterprise executives would rather die of a PowerPoint overdose than read it.

Think about how Leonardo da Vinci worked.

He was not simply a skilled hand, as we all know. Before the man even picked up a brush, he had done years of anatomical study and observation of light on surfaces. So when he sat down to paint the Mona Lisa, his hand did not move randomly; it moved within a dense, internalized framework of knowledge about how faces are structured and how sfumato† works.

His creativity operated inside a structure of deep domain understanding, and that structure is what separates a master from a person who knows how to hold a brush.

Now back to AI.

Current AI agents are the person who knows how to hold a brush. They have enormous capability, they can generate fluent plausible outputs, process natural language and produce structured responses but they have no internalized understanding of the domain they are operating in. They do not know, from first principles, that an invoice in a DISPUTED state cannot be approved. They learned from statistical patterns in training data but those patterns do not constitute domain knowledge.

In fact, they constitute sophisticated guessing.

But I position the OCG as the missing anatomical study.

OCG is the formal, symbolic‡ representation of the domain that the AI is operating in, and it sits as a mandatory checkpoint between every action the agent proposes and the enterprise systems that would execute it.

Here is the architecture in plain language, using the metaphor we use internally.

Imagine that instead of letting the robot painter roam free and then trying to erase mistakes after the fact, you give him a magic coloring book. This is not a normal coloring book (because it’s magic, duh!), because the pages have deep grooves pressed into them, which means the robot’s pen physically cannot leave the grooves. He still gets to choose his colors, though; we let him retain his intelligence, so he can still exercise creativity within the page. But the pen cannot reach the wall, the furniture, or my wiener dog, because the grooves define what is possible, not only what is permitted.

The OCG builds those grooves, and it builds them in two sequential stages.

Gate One is all about semantic coherence.

This is the question that current AI systems never ask: “Does this action make sense?”

Before checking permissions, before even consulting policy, the OCG validates the proposed action against the formal enterprise ontology, a machine-readable, logically structured model of the business domain built in OWL and validated with SHACL shapes.

The ontology(✎) defines entities, their valid states, their relationships, and the invariants that must always hold. An invoice exists. An invoice has a lifecycle from DRAFT to PENDING_APPROVAL to APPROVED or DISPUTED. An invoice in DISPUTED state cannot be approved, not because a policy says so, but because the ontology defines that as semantically impossible. You cannot approve a DISPUTED invoice for the same reason you cannot draw outside the groove, the structure itself prevents it.
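In production the OCG expresses this in OWL and SHACL; as a plain-Python stand-in for the idea, the lifecycle below makes the DISPUTED-to-APPROVED move impossible by simply not having that edge. This is a sketch of the concept, not the gateway’s real representation:

```python
# Plain-Python stand-in for what the OCG expresses in OWL and SHACL:
# the lifecycle itself defines which transitions are *possible*.
LIFECYCLE = {
    "DRAFT": {"PENDING_APPROVAL"},
    "PENDING_APPROVAL": {"APPROVED", "DISPUTED"},
    "DISPUTED": {"PENDING_APPROVAL"},  # must be resolved before re-approval
    "APPROVED": set(),
}

def transition_allowed(current_state, target_state):
    """Semantically impossible moves do not exist in the model; there is
    no rule to bypass because there is no edge to follow."""
    return target_state in LIFECYCLE.get(current_state, set())
```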

This is the piece that every LangChain implementation, every Semantic Kernel orchestration, every BabyAGI loop currently lacks.

They have no semantic grounding.

The agent does not understand the domain; it simulates understanding. The OCG provides actual understanding in the form of formal logic, and it validates every proposed action against that logic before the action proceeds anywhere.


Gate Two is policy compliance.

Only actions that pass semantic coherence reach this gate. Gate Two asks the question that current systems do ask, but in isolation: “Is this action allowed?” Here the OCG consults a policy engine, say OPA or Cedar, checking authorization, role constraints, separation of duties, budget limits, and temporal constraints. An invoice above $50,000 requires dual approval. A procurement officer cannot approve a vendor they submitted. Transactions above a threshold cannot execute outside business hours. These are policy rules, enforced deterministically, but they only run after the action has been confirmed to make sense in the first place.
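The three example rules above can be sketched as one deterministic function. This is illustrative Python, not the Rego or Cedar a real deployment would use; the thresholds come straight from the examples:

```python
from datetime import time

def policy_gate(action, now_time):
    """Deterministic Gate Two sketch: returns the list of violated rules."""
    violations = []
    if action["amount"] > 50_000 and len(action["approvers"]) < 2:
        violations.append("dual approval required above $50,000")
    if action["submitted_by"] in action["approvers"]:
        violations.append("separation of duties: submitter cannot approve")
    if action["amount"] > 10_000 and not time(9) <= now_time <= time(17):
        violations.append("large transactions only during business hours")
    return violations  # empty list == policy-compliant
```

Because the checks are pure functions of the action and the clock, the same input always yields the same verdict, which is what makes this layer auditable.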

However, this separation is not cosmetic. Traditional systems conflate semantic validity and policy compliance into a single blob of logic, usually embedded in application code, usually inconsistently enforced, and usually impossible to audit in any meaningful way. But then the OCG pulls them apart. Domain logic lives in the ontology, authorization logic lives in the policy engine, and each is independently maintainable, independently auditable, and formally verifiable using methods that would let every compliance officer simply weep with relief.



The third gate is the Evidence Bundle.

For every action the OCG evaluates, whether approved or rejected, it generates a cryptographically signed data structure that records everything: the agent’s original intent, the entity resolution trace showing how ambiguous natural language was resolved to canonical identifiers, the semantic validation results showing which ontological rules were checked and what they found, the policy evaluation showing which policies applied and what they decided, the execution outcome, and the provenance of the OCG version, ontology version, and policy version that were active at the time. This bundle is hashed, signed with Ed25519, and written to immutable storage.
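The bundle’s shape can be sketched as follows. The SHA-256 hashing is real; the Ed25519 signing and immutable storage are left as comments because they depend on your key management (in Python they would typically come from the `cryptography` package):

```python
import hashlib
import json

def evidence_bundle(intent, resolution, semantic_result, policy_result,
                    outcome, versions):
    """Sketch of the Evidence Bundle's shape and its tamper-evident hash."""
    bundle = {
        "intent": intent,                     # the agent's original request
        "entity_resolution": resolution,      # NL -> canonical identifiers
        "semantic_validation": semantic_result,
        "policy_evaluation": policy_result,
        "outcome": outcome,
        "provenance": versions,               # OCG / ontology / policy versions
    }
    # Canonical JSON so the same decision always hashes identically.
    canonical = json.dumps(bundle, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    # Production: sign `digest` with an Ed25519 key and append both the
    # signature and the bundle to immutable (WORM) storage.
    return {"bundle": bundle, "sha256": digest}
```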

That is the answer to the compliance officer’s question “If the AI approves this transaction incorrectly, can you show me exactly why it did that?”

Yes. Here is the cryptographically signed, tamper-proof, time-stamped, fully provenance-tracked record of every decision in the chain. The agent did not make an opaque probabilistic decision; the symbolic gateway made a deterministic, formally verified, auditable one. The agent proposed, the ontology checked, the policy engine confirmed, and the Evidence Bundle recorded.

Your auditor can now have a good day, and they’re going to trust your AI forever.

The beautiful irony is that this architecture solves the nondeterminism problem without eliminating the probabilistic AI. The LLM still does what LLMs are good at which is parsing natural language intent, handling ambiguity in user requests, generating structured action proposals from free-form input and whatnot. And then the OCG takes that probabilistic output and subjects it to deterministic validation before anything real happens. You capture the capability of the neural system and you contain its variability within a symbolic framework that provides formal guarantees. That is the whole game.

That is what neuro-symbolic AI actually means when it is implemented seriously rather than invoked as a marketing term.

We designed the OCG specifically for the regulated enterprise environment where this gap is most expensive. Finance, healthcare, critical infrastructure. I’m talking about environments where the EU AI Act is not optional and SOX Section 404 is not optional, and HIPAA is definitely not optional. Environments where “the model sampled toward a probably-correct answer” is not an acceptable explanation for a billion-dollar transaction.


† Sfumato is Italian, from the Latin fumare, which means “to smoke” or “smokiness”, yeah, kinda like your last AI budget. The technique is named after what it looks like: as if the edges dissolved into smoke.

Introducing the concept of Neuro-Symbolic AI in a friendly manner: The boring AI that keeps planes in the sky | LinkedIn

Introducing the concept of an Ontology in an easy-to-understand piece: Generative models guess, ontologies clean up the mess | LinkedIn


The agentic architecture you need when the auditors are coming

The mental model shift the OCG forces is harder than the technical implementation, because the technical part is relatively easy.

I mean, engineers who have spent their careers working with deterministic systems find it uncomfortable to define requirements as acceptability criteria rather than exact specifications, and compliance officers who have spent their careers signing off on rule-based systems find it uncomfortable to approve a system whose core reasoning layer is probabilistic.

But the OCG resolves this discomfort by being honest about the architecture. The probabilistic component, the LLM, operates in a sandboxed role where it can generate proposals, but it does not execute them. The deterministic component, the symbolic gateway (the OCG), validates every proposal before anything reaches the enterprise system. The agent cannot cause the system to violate its invariants, not because of prompt engineering or content filters, but because the formal ontology makes it architecturally impossible for a semantically invalid action to pass Gate One, and the policy engine makes it logically impossible for an unauthorized action to pass Gate Two.

What this requires in practice is investment in the ontology.

Sigh. You felt this was coming, didn’t you?

The OCG’s effectiveness is entirely dependent on the quality and completeness of the enterprise knowledge graph. Building a production ontology for a financial services firm covering Procure-to-Pay and Order-to-Cash processes is six to twelve weeks of work involving domain experts, knowledge engineers, and compliance officers. That is not a small investment. It is, however, a much smaller investment than the one you make when an autonomous agent approves a $2 million payment to a sanctioned vendor because nobody built the semantic check that would have caught it.

But hold on. Stop your pointer from clicking the X!

Because there’s a way around having to involve a team of people for every process in your domain.

And yes, it involves AI as well.

Let me explain . . .


Let the AI build its own cage

So I left you on a cliffhanger.

Six to twelve weeks of ontology work. Domain experts. Knowledge engineers. Compliance officers in a room together for longer than any of them would voluntarily choose. I could feel you slowly reaching for the back button, and I respect that instinct, but stay with me for exactly one more section because this is where it gets interesting.

The obvious objection to the OCG is the one every enterprise architect raises approximately thirty seconds after understanding it. “Marco, this sounds great, but you are essentially asking me to hire a small army of knowledge engineers to hand-craft a formal model of every business process before a single agent can touch production”. And I get it, that objection is completely fair. If building the ontology required the same effort as building the system it governs, you have not solved the problem, but you’ve merely added a very expensive prerequisite to it.

Here is the thing though. We are building the ontology with AI.

Let that sit for a second.

The traditional approach to building enterprise ontologies is what academics call knowledge engineering, which is the process where you lock a domain expert and an ontologist in a room and wait. The domain expert knows what an invoice is and what states it can be in and what business rules govern its lifecycle and the ontologist knows how to express that knowledge in OWL and SHACL. Neither one can do the other’s job. So they talk, slowly, expensively, with the specific energy of two people who do not naturally speak the same language trying to assemble furniture from instructions written in a third language neither of them reads fluently.

This process works. It produces good ontologies. But it also takes months and costs a small fortune and requires people who are genuinely rare in the market, which is how you end up with six to twelve week estimates that make CFOs reach for antacids.

What we realized is that a well-prompted large language model is a surprisingly competent first-pass knowledge engineer.

Not a perfect one and certainly not a replacement for domain expertise, but a capable enough first-pass generator that the human expert’s job shifts from construction to validation, and that shift compresses the timeline by a significant factor.

Here is what this looks like, and I call it . . .


The ontology bootstrap pipeline

Because every component of the OCG architecture requires a cool name, otherwise it won’t sell; that’s on every Big Tech Sales Canvas 101. You feed the LLM your existing documentation. Think of your finance process manuals, work instructions, compliance frameworks, system integration specs, the SOX narrative your auditors produced last year, the HIPAA policy your legal team keeps updating. You know, all the stuff that already exists in the enterprise and already describes, in imprecise human language, what the business domain looks like and what rules govern it.

The LLM reads all of it and produces a draft ontology.

It drafts the entities, relationships, lifecycle states, and candidate invariants, the whole structure, in OWL syntax of course, with SHACL shape candidates, and all of it in a form the OCG can immediately begin evaluating.
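As a skeleton, the bootstrap step is a single prompt-and-review loop. Everything here is hypothetical scaffolding: `llm` stands in for whatever hosted model you use, and the real pipeline would validate the draft with an OWL/SHACL toolchain before any human review:

```python
def bootstrap_ontology(documents, llm):
    """Sketch of the bootstrap pipeline's first pass.
    `llm` is any callable mapping a prompt string to generated text."""
    prompt = (
        "From the process documentation below, extract entities, "
        "relationships, lifecycle states, and candidate invariants "
        "as OWL classes and SHACL shape candidates:\n\n"
        + "\n\n---\n\n".join(documents)
    )
    draft = llm(prompt)
    # The draft is deliberately a first pass: it goes to domain experts
    # for review, correction, and extension before the OCG trusts it.
    return {"status": "DRAFT_FOR_REVIEW", "ontology": draft}
```

Usage is just `bootstrap_ontology(docs, my_model)`; the returned draft is the starting artifact for the expert review cycles described next.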

But this draft is wrong of course.

Let me be completely clear about that. It will have gaps where the documentation was ambiguous, contradictions where different documents described the same process differently, and missing invariants that experienced practitioners know intuitively but nobody ever wrote down because it seemed too obvious to document. The draft is a smart, well-structured, incomplete first attempt produced in hours rather than weeks.

And that is exactly what you want.

Because now your domain expert is not building from a blank page. They are reviewing, correcting, and extending a structured artifact. The cognitive load is completely different. Building something from nothing requires creative effort and constant decision-making about scope and structure. Reviewing something that mostly makes sense and flagging the parts that do not is editing work, which is faster, cheaper, and requires less specialized expertise.

The domain expert who would have spent eight weeks in knowledge engineering sessions can now spend two weeks in structured review cycles. The knowledge engineer who would have been responsible for the entire construction is now a reviewer and integrator. The compliance officer who would have signed off at the end of a long process is now embedded in short feedback loops because the artifact is already coherent enough to reason about.


The validation loop that makes it production-ready

The bootstrap pipeline does not stop at the first draft, of course; that would obviously not be cool. After the initial ontology is generated and reviewed, you run it against historical transaction data: real approved invoices, rejected purchase orders, real access logs from your regulated systems, anything that represents actual business behavior the ontology should be able to explain.

The OCG then attempts to validate every historical transaction against the draft ontology and reports what it finds – transactions that the ontology correctly classifies as valid, transactions that the ontology incorrectly flags as violations, transactions that reveal missing invariants because they are semantically unusual in ways the draft did not anticipate, and edge cases that your documentation never mentioned because they were handled by institutional knowledge rather than written policy.

Each mismatch between what the ontology predicts and what the historical record shows is a specific, actionable signal. Not “the ontology is wrong” but “this specific entity relationship is modeled incorrectly” or “this lifecycle state transition is missing” or “this invariant fires too aggressively and would block a class of legitimate transactions.” The LLM, given the mismatch and the context, proposes a correction. The domain expert reviews the correction. The cycle repeats until the validation rate against historical data crosses the threshold your compliance officer has agreed is sufficient.

You are doing test-driven ontology development. TDOD. I just came up with the acronym, because every AI architecture needs acronyms. In TDOD, the historical transactions are the test suite, the bootstrap LLM is the developer, the domain expert is the code reviewer, and the OCG is the runtime that tells you whether the tests pass.
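The loop can be sketched in a dozen lines: replay history through the draft ontology’s validator and collect every disagreement as a specific correction signal. The transaction fields and the validator are illustrative, not the OCG’s real interfaces:

```python
def tdod_cycle(historical_transactions, validate):
    """One TDOD iteration: history is the test suite, mismatches are the
    actionable signals fed back to the LLM and the domain expert."""
    mismatches = []
    for tx in historical_transactions:
        predicted_valid = validate(tx)          # draft ontology's verdict
        if predicted_valid != tx["was_legitimate"]:
            mismatches.append({
                "tx": tx["id"],
                "ontology_said": predicted_valid,
                "history_said": tx["was_legitimate"],
            })
    total = len(historical_transactions)
    rate = (total - len(mismatches)) / total if total else 1.0
    return rate, mismatches  # iterate until rate clears the agreed threshold
```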

Voila!


What this actually costs in practice

The honest-to-God answer is that it depends on how well documented your business domain already is, and most enterprises are surprised to discover they are better documented than they thought, because the documentation exists in forms they did not think to count as documentation. Email threads where process exceptions were debated. Audit finding letters where violations were described. Vendor contracts that specify exactly which states a transaction can be in at each stage. Training materials from five years ago that the onboarding team still uses. Legal opinions on edge cases that the compliance team keeps in a folder somewhere.

That’s why I always have one or two business analysts in my factories doing system reconnaissance. They are the Sherlocks on a mission to find every darn bit of evidence on how processes work, and they’re armed with root-level access and a bunch of process- and task-mining tools. My current team is called Titanium1, since we all have titanium parts in our bodies because of the mistakes we made in the past.

And everything T1 finds is legible to an LLM.

All of it is feedstock for the bootstrap pipeline. The enterprise that thought it had no formal process documentation frequently discovers it has hundreds of documents that collectively describe its business domain with considerable precision. The problem was never the knowledge; it was that the knowledge was locked in natural language across a hundred different files and nobody had a way to compile it into something executable.

The OCG bootstrap pipeline is essentially a compiler for institutional knowledge.

It takes the informal, distributed, inconsistently formatted documentation that every enterprise accumulates over decades and produces a formal, machine-executable, version-controlled ontology that your agents can actually reason against.


And now the auditor finally relaxes

The compliance officer who has been following this article with escalating anxiety can now exhale.

The OCG does not ask you to trust the AI, but to trust the ontology, which is a document you reviewed, corrected, validated against historical data, and signed off on. The AI’s role is to propose actions. The ontology’s role is to reject the ones that violate the domain model your experts built and approved. The policy engine’s role is to reject the ones that violate the authorization rules your compliance team wrote and audited. The Evidence Bundle’s role is to record everything that happened in a tamper-proof log that your auditors can read without needing a PhD in machine learning.

The probabilistic AI never touches anything real without passing through the deterministic gateway. The gateway never changes without a version-controlled update to the ontology or the policy engine. Every change is traceable. Every decision is explainable. Every rejection is documented with the specific rule that fired and the specific evidence that triggered it.


That is what enterprise-grade agentic AI looks like when it is built honestly rather than optimistically.

The AI gets to be creative inside the grooves. The grooves do not move without your permission, so in the end, my wiener dog remains unmolested.

All that’s left now is a one-liner to sum things up.

The best thing about building a cage for your AI is that once it is built correctly, the AI is genuinely more useful inside it than it ever was running free.

Signing off,

Marco

P.S. if you want to receive the paper when it is ready, simply indicate that in the comments or drop me a DM.

