Compute Is Labor: The Architectural Inversion of the Economy


Something is slipping in the economy, though almost nobody sees it yet.

It doesn’t show up in headlines or earnings calls, but in quieter places: in how procurement agents are pitched (and I’ve done that a lot), in why private equity firms are buying every “boring” business they can find, in the strange defensiveness people have around processes nobody actually likes. At first, these signals seem disconnected: accounting firms here, freight brokers there, property managers somewhere else. But the more you look, the more they start to rhyme.

And here’s what makes it so easy to miss: on the surface, nothing looks different. The same companies exist. The same workflows run. The same reports get filed. But underneath, something is dissolving.

The problem is that most of the “work” inside these systems was never necessary to begin with. The approvals, reconciliations, reviews-of-reviews: entire bureaucracies exist to manage complexity that only exists because we built systems around humans doing work. When AI stops assisting and starts orchestrating, that complexity doesn’t get automated. It collapses.

Procurement was just where I first noticed it. But once you see the pattern, you start seeing it everywhere: in accounting firms, healthcare administration, freight brokerage, property management. Entire categories of work are being revealed for what they always were: elaborate theater.

This essay is an attempt to trace that phenomenon: what happens when compute becomes labor, when AI starts running workflows end-to-end, and when the very architecture of the economy begins to invert beneath us.

I. Seeing the Pattern

The first clues appeared in places that looked unrelated. Procurement systems with thousands of configuration rules. Accounting firms buried under reconciliations and approvals. Freight brokers managing hundreds of emails for a single shipment. Healthcare providers cycling paperwork through endless authorizations.

At first glance, it all seemed like ordinary bureaucracy: frustrating, but necessary. So you go on building around it: adding approvals here, routing exceptions there, outlier handling for one customer, overrides for another. Soon, the system becomes a configuration jungle of 200 rules stacked on top of one another, every branch carved out to manage some special scenario.

But this complexity wasn’t designed to “keep humans busy.” It emerged because humans couldn’t hold the full context. Every exception, every dependency, every customer-specific rule, all of it had to live somewhere. No single person, or even team, could hold it all in their head, so we externalized the logic into sprawling systems. The jumble wasn’t intentional; it was inevitable.

And then something shifts. The moment AI stops assisting and starts orchestrating, the entire structure changes. The jungle fades. All those conditions, overrides, reconciliations: gone. What felt necessary was only managing complexity that didn’t need to exist in the first place.

This is where the architectural inversion begins. As I wrote in Building on Quicksand, when deterministic systems try to govern probabilistic reality, you end up with fragile complexity: systems that require endless maintenance to keep from breaking. But when orchestration flips, the burden disappears. Instead of embedding AI into the process, the process itself becomes AI-driven: simpler, leaner, and far closer to the outcome you wanted from the start.

And procurement isn’t unique. Once AI takes over coordination, the same pattern appears everywhere: in accounting, healthcare, freight brokerage, property management. Entire categories of operational complexity reveal themselves for what they really are: workarounds for human cognitive limits.

II. The Accounting Firm Paradox

Professional services firms are built on a fragile equilibrium: they monetize complexity they can no longer justify.

Take a typical accounting firm. Partners bill $500 an hour for work largely performed by juniors earning a fraction of that. The firm’s margins exist because the work is messy: reconciling spreadsheets, cross-checking approvals, passing information between systems that were never designed to talk to each other. None of this is elegant, but for decades it worked because managing complexity was the service.

Here’s the paradox: AI orchestration makes that complexity disappear. The approvals, reconciliations, and reviews-of-reviews that once justified armies of analysts now compress into a handful of orchestrated workflows. The work that generated the billable hours doesn’t get automated; it evaporates.

That puts these firms in an impossible bind. To adopt AI orchestration would be to reveal that their business model rests on scaffolding: processes designed to manage fragmentation that no longer needs to exist. They can’t pivot without destroying themselves. They’re like the Titanic, trying to steer after the iceberg has already torn through the hull.

Meanwhile, their clients are waking up. Once a CFO realizes they’re paying $5,000 for work an AI system can handle for $50, the illusion collapses. This isn’t just accounting; the same pattern is emerging across healthcare administration, freight brokerage, property management, insurance, compliance. Entire industries built on human coordination are now structurally mispriced.

This is why private equity and rollup operators are moving so aggressively. They’re not betting on growth the old way; they’re buying companies trapped inside outdated operating models and rebuilding them around orchestration-first workflows. They’re exploiting an arbitrage hiding in plain sight:

When AI orchestrates rather than assists, you don’t optimize the business, you redefine it.

III. The PLA Playbook (Product-Led Acquisitions)

Once you see the pattern, you can’t unsee it. Entire industries are structured around managing complexity that no longer needs to exist. Accounting, healthcare administration, freight brokerage, property management, insurance: all of them.

And this creates a strange kind of arbitrage.

For decades, companies charged premiums for work that was hard, slow, and manual. That procurement approval took three days because five systems didn’t talk to each other and three different managers needed to sign off. That insurance claim took three weeks because someone had to extract details from a PDF, send emails to carriers, cross-check policy terms, and reconcile the payout in another system. Nobody liked these processes, but we accepted them because they felt inevitable.

But inevitability isn’t a moat. It was a limitation of human orchestration. And once AI flips that assumption, the entire economic model underneath begins to dissolve.

This is where Product-Led Acquisitions (PLAs) come in: the most important business model almost no one is talking about. The insight is simple but profound:

If the value of a company is based on managing complexity, and AI orchestration removes that complexity, the value doesn’t shrink, it migrates.

PLAs exploit this migration directly. They don’t sell tools to trapped companies and hope for adoption. They buy the companies outright, inject AI orchestration, and collapse the cost structure from the inside out.

Here’s the playbook playing out right now:

Find the zombies: Insurance brokers, accounting firms, freight forwarders, property managers, all profitable on paper but running 20th-century operating models.

Buy at yesterday’s price: 3–5x EBITDA, valuations based on the fiction that human orchestration is necessary.

Invert the architecture: Replace human coordination layers with AI orchestration; route by confidence, escalate only when trust is needed.

Collapse the cost base: 120 employees → 8 humans and an AI; three-week workflows → ten minutes; gross margins 20% → 85%+.

Compound the intelligence: Every transformed company becomes a node in a growing network, feeding shared context back into the orchestration layer.

Exit or hold forever: Flip at 25x to a strategic, or keep compounding value indefinitely.

This isn’t theoretical. It’s already happening at scale:

Constellation Software quietly built a $90B empire by buying 500+ vertical software companies, never selling one. Now imagine that model when the software itself runs the business.

General Catalyst is going diagonal in healthcare: billing + pharmacy + insurance + care delivery, all connected through a single orchestration layer.

Parker Conrad at Rippling cracked the “warm start” code: don’t sell HR software to PEOs, buy the PEOs themselves, inherit 5,000 customers overnight, and let AI run the stack.

Thrive Holdings and Elad Gil’s rollups are running variations of the same play: buy boring businesses, deploy orchestration, and build compound entities that quietly dominate entire service ecosystems.

This is not traditional private equity. It’s not optimization. It’s not squeezing out incremental margin. It’s redefining what the business even is.

When AI orchestrates, you don’t improve the process, you erase it. And when the process dissolves, so does the old operating model. The “insurance broker” isn’t a firm anymore; it’s an orchestrator for underwriting, carriers, and customers, running mostly in software. The “accounting firm” stops existing as a hierarchy of analysts and managers; it becomes a probabilistic coordination layer over ledgers, transactions, and regulations.

The leverage is civilizational in scale. Buying companies built on human coordination costs $1. Collapsing their complexity and rebuilding them on orchestration-first systems unlocks $100. Every acquisition makes the next one easier because the AI gets smarter with each new node. It’s compounding context, not just capital.

And here’s the part that still feels invisible to most people: these rollups aren’t “adding AI” to businesses. They’re using AI to turn the business itself into software.

Once you understand that, you start to see why the people running this playbook aren’t talking about it. They’re too busy writing checks.

IV. The Architectural Inversion

Product-Led Acquisitions work because they exploit something deeper than financial engineering. They exploit an architectural shift in how work gets done.

For 200 years, the pattern was fixed:

Humans managed → Machines executed.

Humans set the rules, built the workflows, managed the exceptions. Machines handled the repeatable parts, but the orchestration layer (deciding who does what, when, and why) always lived in people’s heads. That’s why procurement needed 2,000 rules, why accounting firms needed junior armies, why healthcare admin exploded into whole departments of coordinators: managing context was the job.

AI flips that.

Machines orchestrate → Humans are APIs.

Instead of embedding AI inside human-defined workflows, the workflows themselves become probabilistic systems. You stop hardcoding “if vendor = X, route approval to Y.” You give the orchestrator the goal: “Process this order correctly.” It calls the tools, evaluates the edge cases, escalates to humans only when trust or judgment is required.
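To make the flip concrete, here’s a minimal sketch. Everything in it (the Order shape, the tool names, the orchestrator interface, the 0.85 threshold) is a hypothetical illustration, not a real API:

```python
# Hypothetical sketch: hardcoded routing vs. goal-driven orchestration.
from dataclasses import dataclass

@dataclass
class Order:
    vendor: str
    amount: float

# Old architecture: humans encode every branch up front.
def route_approval(order: Order) -> str:
    if order.vendor == "X":
        return "manager_Y"
    if order.amount > 10_000:
        return "finance_committee"
    # ...hundreds more branches accrete here over the years
    return "default_queue"

# New architecture: the orchestrator gets a goal, tools, and an escalation policy.
def process_order(order: Order, orchestrator) -> dict:
    return orchestrator.run(
        goal="Process this order correctly",
        context=order,
        tools=["fetch_contract", "check_budget", "match_invoice"],
        # Humans enter only at the trust boundary, not at every branch.
        escalate_if=lambda step: step.confidence < 0.85,
    )
```

The point of the sketch is the asymmetry: the first function grows without bound as reality accumulates exceptions; the second one doesn’t.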

And once orchestration flips, something strange happens: the complexity doesn’t get automated; it dissolves. Those 2,000 lines of procurement config collapse into 50. The reconciliation steps disappear because the orchestrator handles them in-flight. The entire architecture of the business becomes simpler, flatter, and closer to the outcome.

This is why PLAs work. They’re not “making companies more efficient.” They’re rewriting companies into software.

The old architecture looked like this:

Humans orchestrate → Humans coordinate → Machines execute

The new architecture looks like this:

AI orchestrates → Machines execute → Humans handle trust, judgment, and empathy

This inversion has three consequences:

Scaffolding disappears: Most of what we thought of as “necessary work” was actually coordination overhead. When orchestration moves into software, the need for human scaffolding (approvals, reconciliations, layers of middle management) drops by orders of magnitude.

Industries compress: When AI runs workflows end-to-end, the cost structures converge. A $5,000 accounting engagement collapses to $50. A claims-processing team of 120 shrinks to 8. Whole service industries become software-like in margins and scalability.

Firms dissolve into callable functions: In an orchestrated economy, companies expose their capabilities like APIs: generate_quote(customer_data), verify_compliance(policy), process_claim(claim_id). The org chart becomes software. Businesses stop “using tools” and become tools, as the sketch below suggests.
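A minimal sketch of that idea, using the three signatures named above; the types, bodies, and data are illustrative assumptions, not anyone’s production interface:

```python
# Illustrative only: a firm's capabilities exposed as callable, typed functions.
from typing import TypedDict

class Quote(TypedDict):
    price: float
    terms: str

def generate_quote(customer_data: dict) -> Quote:
    # The "sales" capability: price a customer without convening a meeting.
    base = 1000.0 * customer_data.get("risk_multiplier", 1.0)
    return {"price": base, "terms": "net-30"}

def verify_compliance(policy: dict) -> bool:
    # The "legal" capability: a boolean instead of a review-of-reviews.
    return all(policy.get(k) for k in ("jurisdiction", "coverage", "signatory"))

def process_claim(claim_id: str) -> str:
    # The "operations" capability: one call replaces a ticket queue.
    return f"claim {claim_id} settled"

# An orchestrator composes firms the way code composes functions.
if verify_compliance({"jurisdiction": "US", "coverage": "full", "signatory": "CPA"}):
    print(generate_quote({"risk_multiplier": 1.2}))
```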

This isn’t automation in the old sense. It’s not “faster humans” or “better UIs.” It’s a new substrate for economic activity: one where work isn’t assigned but discovered, routed, and executed probabilistically.

And once you see this inversion, you start to realize PLAs are just the opening act. The real game is what happens when every acquired company becomes a node in a shared orchestration layer. When context compounds across hundreds of industries. When intelligence stops living in silos and starts flowing across entities.

That’s where the next section takes us: compound entities and diagonal dominance, the quiet monopolies being built right now.

V. Compound Entities: When Orchestration Scales

PLAs make sense one company at a time. But their real power emerges when you start connecting them.

Every acquisition isn’t just a balance-sheet transaction; it’s a new node in an orchestration network. Every policy document processed, every claims exception handled, every billing edge case solved, all of it becomes shared context. And context compounds.

A traditional business optimizes for its own workflows. A compound entity optimizes for shared intelligence across many.

Take Constellation Software. Since 1995, they’ve quietly bought 500+ vertical SaaS companies, never sold one, and compounded to a $90B valuation. Their playbook was simple: leave the companies operationally independent, but centralize best practices and capital allocation.

Now imagine that same model, but with AI actually running the businesses. Every acquisition isn’t just profitable; it makes every other acquisition smarter.

A dental software company learns a new insurance trick → instantly available to every other healthcare holding.

A freight brokerage discovers an optimal routing pattern → every property management arm benefits through better vendor scheduling.

A compliance edge case solved in accounting → orchestration automatically improves in insurance, billing, and payroll.

This isn’t horizontal dominance (owning one industry) or vertical integration (owning the supply chain). It’s something new: diagonal dominance.

A compound entity cuts across industries and domains, connecting them through a single orchestration layer. It sees patterns no standalone company could. Your accounting provider knows your legal structure. Your insurance broker understands your operational risks. Your property manager anticipates your growth. This isn’t cross-selling; it’s cross-intelligence.

And it’s already happening:

General Catalyst is going diagonal in healthcare: pharmacy + billing + insurance + care delivery → one orchestration brain, dozens of distinct entities.

Thrive Holdings is quietly rolling up property management, HOA platforms, IT services, aggregating fragmented industries where AI orchestration drives 10x margins.

Parker Conrad at Rippling turned the HR software game sideways: buy the PEOs instead of selling to them, inherit thousands of customers instantly, and run the stack on Rippling’s orchestration layer.

Elad Gil and others are funding permanent capital vehicles designed explicitly to exploit this shift, rollups built not around shared branding, but around shared cognition.

The leverage here isn’t operational. It’s epistemic.

In traditional rollups, you integrate revenue streams. In AI-native rollups, you integrate intelligence. Every new workflow completed improves the system’s ability to handle the next one, across every business in the network. The whole portfolio becomes an evolving organism, compounding knowledge faster than any standalone company possibly could.

And this creates a kind of monopoly regulators aren’t prepared for. Not a monopoly of market share, but a monopoly of understanding.

Once a compound entity reaches sufficient scale, it sees patterns nobody else can:

  • Fraud across thousands of customers before auditors notice.
  • Regulatory shifts before they’re codified into law.
  • Demand changes weeks before competitors feel them.

This isn’t optional scale; it’s runaway compounding. Once you hit critical mass, every node improves the whole, and the whole improves every node.

Companies stop competing on tools, features, or pricing. They compete on shared intelligence. And shared intelligence compounds like capital, except faster.

This is why Constellation, General Catalyst, Thrive, and Rippling aren’t just buying companies. They’re building distributed economic intelligences, organisms made up of hundreds of semi-independent entities, orchestrated by AI, learning as one.

And here’s the part almost nobody sees yet: once the orchestration layer learns enough across enough nodes, the boundaries between companies stop mattering. You’re not running a dental SaaS, a freight brokerage, or a billing platform. You’re running a single, adaptive cognitive system that just happens to monetize through many legal wrappers.

These aren’t companies anymore. They’re compound intelligences.

VI. The Trust Layer (Where the System Stabilizes)

If orchestration collapses complexity, why doesn’t every transformed business run at 100% automation?

Because real companies don’t live in benchmark reports; they live in institutions: customers, regulators, auditors, insurers, counterparties. The system doesn’t just have to be correct; it has to be trusted, accountable, and recoverable. That’s the role of the human layer.

In practice, transformations settle into a consistent band: mostly-AI, minimally-human. Call it 85/15 today, drifting toward 90–95/5 as the harnesses improve. Not because the models can’t do more, but because the system (technical + legal + social) won’t accept more, yet.

Why this equilibrium emerges:

Technical variance isn’t zero. Models are probabilistic. Even with retrieval, evals, and confidence routing, there’s residual uncertainty and drift. You need circuit breakers.

Economics have knees. Pushing from 90→99% autonomy is exponentially expensive: more evals, more guardrails, tighter routing, heavier oversight. Diminishing returns hit hard.

Institutions demand accountability. Someone must be licensable, insurable, suable. Signatures, attestations, and chain-of-custody aren’t “nice to have”; they’re the price of participating in the real economy.

Humans anchor expectation. People need a visible interface for exceptions, empathy, and responsibility. Not performative, functional. It keeps the system stable.

Think of it as trust scaffolding around an AI-native core.

Four pillars make that human slice non-optional:

Accountability Infrastructure: Licenses, attestations, professional liability. A doctor signs. A CPA signs. A broker signs. The signature is the legal affordance that lets the AI do the other 94%.

Exception Infrastructure: The long tail is real. Novel contracts, ambiguous medical notes, multi-jurisdiction edge cases. Humans resolve new patterns, then fold them back into the system.

Assurance Infrastructure: Explainers, receipts, audit trails, reversible operations. This is how you bound uncertainty and keep counterparties (and insurers) onside.

Evolution Infrastructure: The human layer doesn’t just “check.” It learns on behalf of the system, naming new failure modes, adjusting thresholds, curating context, updating policy.

So what does “mostly-AI, minimally-human” look like operationally? It looks like confidence-routed work, as the sketch after this list suggests:

  • Auto-accept above threshold (fast path).
  • Auto-clarify when missing context (cheap path).
  • Escalate when confidence × impact dips below policy (human path).
  • Rollback on breach (safety path).
  • Fold back learnings weekly (compounding path).
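In code, that policy is small. A minimal sketch, with made-up thresholds and one assumed reading of “confidence × impact”: the bigger the blast radius, the more certainty the system demands before acting alone.

```python
# Minimal confidence-routing sketch; enums, thresholds, and weights are illustrative.
from enum import Enum

class Path(Enum):
    FAST = "auto_accept"      # act without a human
    CHEAP = "auto_clarify"    # fetch or ask for the missing context
    HUMAN = "escalate"        # trust or judgment required
    SAFETY = "rollback"       # a guardrail was breached

def required_confidence(impact: float) -> float:
    # Impact is normalized to [0, 1]; high-impact work must clear a higher bar.
    return min(0.99, 0.80 + 0.15 * impact)

def route(confidence: float, impact: float,
          context_complete: bool, guardrail_breach: bool) -> Path:
    if guardrail_breach:
        return Path.SAFETY                        # safety path
    if not context_complete:
        return Path.CHEAP                         # cheap path
    if confidence < required_confidence(impact):
        return Path.HUMAN                         # human path
    return Path.FAST                              # fast path

# The fifth path is a batch job, not a branch: fold HUMAN resolutions back
# into priors and eval sets weekly, which is what raises the floor over time.
```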

The point isn’t to chase a mystical 100%. It’s to stabilize the system where the economics sing and the institution smiles. Today that’s ~85/15. As evals, routing, and guardrails mature, the band shifts. The principle doesn’t: automation to the trust boundary, not beyond it.

This is also where compound entities pull away. With every additional node, they need fewer human touches per unit of work, not because they remove humans, but because they raise the floor with better priors, better eval sets, and richer exception taxonomies. Trust scaffolding thins as shared intelligence thickens.

Up next, the practical question operators ask in week one of a transformation: What’s the bar to ship? That’s where Minimum Viable Intelligence and the Orchestration Stack come in: the difference between cute demos and durable businesses.

VII. Minimum Viable Intelligence (MVI) and the Orchestration Stack

You don’t ship orchestration systems on vibes. You ship them when the Minimum Viable Intelligence clears, and that threshold is nowhere near as simple as hitting “95% accuracy” on a benchmark slide.

MVI isn’t about some arbitrary score; it’s about whether the system can actually run an entire workflow end-to-end without burning the house down. It’s about knowing where the model is confident, where it’s uncertain, where the risk boundary lives, and how the entire system behaves when things go sideways. You only trust an orchestration system when you know, empirically, that failure is bounded, reversible, and explainable, and when the economics make sense.

Take procurement orchestration. Right now, most production systems stabilize around 85/15: AI handles ~85% of tasks, humans step in for the other 15%. Not because the model can’t technically do more (it can), but because the system isn’t ready for more yet. Trust collapses if you try to push beyond what your scaffolding can hold.

That scaffolding has layers, and each one matters:

The routing layer decides when to let the AI act and when to punt: confidence thresholds, escalation triggers, fallback paths.

The evaluation layer watches everything, constantly. You’re not measuring model “accuracy” anymore; you’re measuring outcomes: success rates, rework costs, exception loads, downstream impacts.

The guardrails layer wraps the whole thing: schema enforcement, safety constraints, policy boundaries, reversible commits, and “kill switches” that stop cascading failures before they start.

And underneath it all, observability stitches the system together. Every decision is logged. Every action is traceable. Every exception feeds back into the system to make it slightly smarter next week than it was this week.
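Here’s a minimal sketch of what one logged decision can look like; the field names are assumptions, but the shape is the point: enough to replay the story, not just the stack trace.

```python
# Illustrative decision-trace record: one entry per orchestrator step.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    workflow_id: str
    step: str                 # e.g. "match_invoice"
    model: str                # which model believed what...
    context_refs: list        # ...and which context it saw
    confidence: float
    threshold: float          # the policy in force at decision time
    action: str               # "auto_accept" | "clarify" | "escalate" | "rollback"
    reversible: bool          # can this commit be unwound?
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def narrative(self) -> str:
        # The trace should read like a story, not a dump.
        return (f"[{self.at:%H:%M:%S}] {self.step}: {self.model} at "
                f"{self.confidence:.2f} vs floor {self.threshold:.2f} -> {self.action}")
```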

If that sounds more like an organism than a tool, that’s because it is.

We’ve learned this the hard way in procurement. Early on, we tried to brute-force our way to full autonomy: hundreds of hand-tuned configuration rules, endless if/then branches, nested exceptions, special cases for every customer. It worked, until it didn’t. Context slipped through the cracks. Edge cases multiplied. The whole thing turned into a fragile jungle of assumptions.

When we flipped it, letting the AI orchestrate rather than assist, the jungle started to dissolve. But we didn’t go straight to 100% automation. We built an orchestration harness instead: route work by confidence, clarify missing context automatically, escalate high-impact uncertainty to humans, and, critically, fold human corrections back into the system.

Every exception handled today is one fewer exception tomorrow. Over time, your exception taxonomy matures, your priors sharpen, your context deepens, and the autonomy ceiling rises.

That’s the thing about MVI: it isn’t a fixed destination. It’s a moving frontier. Today, procurement stabilizes at 85/15. Tomorrow, as orchestration systems integrate across nodes (procurement talking to accounting, accounting feeding insurance, insurance updating billing), that same stack can credibly run at 90/10, then 95/5. The difference isn’t the model; it’s the context.

And this is where the orchestration stack starts to look less like software and more like infrastructure. It’s not just stringing together API calls or building clever prompts; it’s building a living system:

  • A router that understands uncertainty and prices it.
  • A memory that compounds context across every workflow.
  • A nervous system of evals, traces, and recovery paths.
  • A trust layer that keeps institutions comfortable while the machine gets smarter underneath them.

The old way was writing code to tell the system what to do. The new way is building systems that discover what to do, and trusting them only when you can prove, empirically, that they know enough to handle the weight.

MVI is that proof. It’s the line between cute demos and systems you’d bet an entire company on.

VIII. Connect the Dots (Quietly)

Something strange is happening in the economy, but you won’t find it in the headlines. You find it in the term sheets that never make TechCrunch, in the LP update where a “platform” quietly becomes a buyer, in the awkward silence when a team insists a 14-step approval chain is “non-negotiable” and you ask which steps produce value. On the surface, nothing looks different. The logos are the same. The reports still get filed. But underneath, something is dissolving.

Here’s the pattern as I see it.

First, capital moved. Sequoia literally rewired its fund to hold winners indefinitely: an evergreen pool designed for compounding instead of 10-year theatrics. Translation: patience is the strategy when software eats slow industries. You don’t raise an evergreen vehicle unless you believe the real returns show up in year 11 and beyond.

Second, the operators moved. General Catalyst didn’t just blog about “Health Assurance”; they stood up vehicles to buy and modernize care delivery, then set up staged platforms (yes, including HATCo and the Crescendo playbook) to fuse AI with ops. If you squint, it’s not “AI for hospitals”; it’s “own the workflows, then let AI run inside them.” That’s not a press release; that’s an operating thesis.

Third, the outliers proved distribution > demos. Metropolis couldn’t get every legacy parking operator to adopt computer-vision gates on vibes alone. So they raised ~$1.7–1.8B and bought SP+ outright for ~$1.5B. When the software wedge stalls, you buy the market and install the wedge from the inside. Call it “growth buyout,” call it “VBO,” call it whatever you want; the point is simple: own the cash flows, then refactor the work.

If you want a 30-year control group for how compounding actually works, look at Constellation Software. A thousand-plus vertical acquisitions. A near-religious bias to not sell. And value created by letting each business keep its domain weirdness while sharing invisible systems of capital allocation and learning. Now imagine that same architecture when the shared substrate isn’t just best practices, it’s an AI that actually executes the work. The compounding stops being quarterly memos; it becomes operational.

Of course venture noticed. Slow Ventures formalized a name for it, Growth Buyouts, and made the obvious point everyone whispers: if the product is the engine of value creation and M&A is the distribution hack, venture math can work. The asterisk is doing the hard part (product) before the fun part (buying things). Otherwise you’re just stapling dashboards to payroll.

And yes, skeptics noticed too. Benaich and Mrkšić’s read is fair: a lot of “AI roll-up” pitch decks confuse operational improvement for a new business model and assume public markets will bless service EBITDA with software multiples. That only clears if the tech truly rewires unit economics, not if you’re role-playing SaaS during diligence and doing people-ops in production. The critique is healthy; it forces the bar higher than “we added a bot.”

So what actually has to be true? The part we learned in the trenches is that the economics don’t flip until you change where the intelligence lives. When AI assists, you chip away at latency. When AI orchestrates, the middle evaporates. In procurement, that meant our 200 if-thens and “configuration jungles” were a symptom, not a system. We weren’t keeping humans busy; we were papering over the fact that the context a deal needed was scattered across finance, risk, vendor, and ops, so we settled for a jumbled compromise everyone could tolerate and no one could justify. Once an orchestrator can see across those silos (policy, history, exceptions, budget), the approvals, reconciliations, and reviews-of-reviews don’t get automated; they collapse.

Zoom out and you see the same shape: accounting closes, prior auths, carrier submissions, renewals, freight tendering, HOA maintenance, all places where most “work” is coordination about the work. The minute an agent can route by confidence, escalate by consequence, and learn from traces, the theater clears out and the human layer reappears where it’s actually load-bearing: judgment, relationships, legitimacy. That’s the now-familiar trust band: ~85/15 today, edging toward 90/10 as harnesses mature, not because of model magic but because our governance and evals are finally catching up to what models already do. (If you need proof change management is the real trench, go read Alpine’s annual scorecard: 23k deals sourced → 175 closed → years of integration. Buying is table stakes; making the workflows agree is the sport.)

Meanwhile, the capital stack is converging on this thesis from different sides. Newcomer has chronicled the quiet “AI roll-up” wave across Tier-1 franchises (GC, Thrive, 8VC), each standing up platforms that look suspiciously like PE on the outside and orchestration labs on the inside. The motif is consistent: acquire distribution, normalize data just enough to be useful, then let agents do the boring parts no one will miss.

You can already see where the edges will bite back. If you buy sleepy assets and install a chatbot, public markets won’t hand you a software multiple; they’ll hand you a stern talking-to. If your gains come from headcount instead of harnesses, the returns decay as soon as you stop staring at them. If you underestimate the human “trust theater” (the signatures, the accountability, the comfort of a face who can be blamed), adoption plateaus. Those aren’t gotchas; they’re guardrails. They’re why this works now only when you do the unglamorous part: rebuild the operating core so AI can run the loop, not decorate it.

If you’re looking for a single, unfakeable tell that you’re on the right track, it’s compounding. Constellation taught us that compounding is a cultural system as much as a financial one: decentralized decisions, strict capital discipline, patience. The AI-enabled version adds something new: shared operational learning. Every quote generated makes the next renewal smarter. Every denial overturned improves the next prior auth. Every garage automated makes the next lot cheaper to run. That’s why Sequoia needed an evergreen. That’s why GC built operating platforms. That’s why Metropolis bought the distribution instead of nagging it. They’re all converging on the same bet: the long arc of orchestration bends toward ownership.

And yes, there’s a saner, humbler version of this that survives contact with reality. Brad Jacobs-style “listen first, change second” integration beats the chest-thumping “tear it all out” routine every time. Alpine’s pipeline tells you why: the volume of stitching needed to make dozens of bolt-ons sing is brutal. The only way through is to honor what already works, instrument what doesn’t, and let agents take the baton one lane at a time until the baton is the system.

So, connect the dots with me:

Capital structures stretched to hold compounding (evergreen).

Operators shifted from “sell software” to “own workflows” (GC/HATCo).

Distribution got bought when persuasion stalled (Metropolis/SP+).

The compounding template already exists (Constellation); we’re just swapping PDFs for agents.

The trench warfare is integration, not ideation (Alpine).

And the adult supervision in the room (Benaich/Mrkšić) is right to ask: are you actually changing the unit economics, or just narrating?

If that’s the frame, procurement was just the aperture: the first place the fog burned off enough for me to see what was really there, an economy built like a Rube Goldberg machine to carry human context across silos. AI isn’t “making the machine faster.” It’s removing the reason the machine existed.

IX. Compound Entities (when cognitive architectures and capital structures merge)

Something strange is happening. Not in the headlines; deeper than that. You see it in quiet places, where workflows move faster than companies do.

Software was supposed to be the thing that scaled. That was the whole SaaS thesis: build the tool once, sell it everywhere, enjoy infinite leverage. But that’s not what’s happening. The leverage isn’t in the software anymore. It’s in the workflows underneath it: in how context moves, how exceptions get resolved, how memory compounds. And once you see it, it’s hard to unsee.

The old assumption was simple: every company was a silo. Every business learned in isolation. If one hospital figured out a clever insurance trick, another hospital wouldn’t benefit until somebody wrote a SaaS feature, bought a license, retrained a thousand staff, and deployed the change six quarters later. Progress moved slowly because knowledge couldn’t move any faster than the companies that contained it.

That assumption is breaking.

At first, I thought I was just watching procurement get eaten alive. The configs kept growing: 200 rules, 500 exceptions, special-case handling for every supplier. It was pure scar tissue, like we’d hard-coded human superstition into Python. And then we stopped fighting the edge cases and handed orchestration to the AI instead.

Something subtle happened. The “rules” dissolved, but the system got better anyway. Exceptions stopped being chaos and started being memory. If one buyer discovered a vendor quirk in Entity A, that resolution showed up automatically the next morning in Entities B, C, and D, across teams that didn’t even know each other existed. The context started compounding faster than the company boundary itself.

That’s when it clicked: the unit of scale isn’t the company anymore. It’s the substrate.

This is the quiet shift hiding beneath all the AI-enabled rollup decks floating around right now. Everyone sees the same surface-level playbook: buy a bunch of small, profitable companies, wire AI into their operations, compress costs, multiply EBITDA. The assumption is that the “synergy” is financial: better margin, faster cash cycles, higher multiples.

But the real compounding isn’t financial at all. It’s cognitive.

When exception memory is shared across nodes, something weird happens:

Fix a claims denial rule in healthcare → that insight accelerates invoice coding in procurement.

Solve freight tendering delays → that resolution speeds up prior auth approvals in insurance.

Build better PO matching → the billing engine improves in five unrelated subsidiaries without anyone coordinating a thing.

Knowledge stops being local. And once knowledge stops being local, the company boundary stops mattering.

Look at who’s moving first:

General Catalyst’s HATCo isn’t building healthcare SaaS. It’s owning the workflows themselves (insurance, claims, scheduling) and letting context propagate across them.

Metropolis didn’t spend $1.5B on SP+ for parking margins; they bought distribution and context density, wiring agents into every payment, every entry, every receipt.

Rippling embeds orchestration directly into payroll and PEO ops, where exceptions in one customer instantly teach the router for everyone else.

These aren’t “software companies” in the old sense. They’re compound entities: multi-node cognition substrates disguised as rollups, behaving more like living systems than SaaS vendors.

And here’s the important part: when cognition compounds faster than companies, capital has to evolve to keep up. Evergreen funds, OpCo/PropCo splits, AI-native rollups: they’re all just early experiments in designing ownership structures that match the speed of the substrate. The technology layer and the capital layer are quietly fusing, because you can’t capture nonlinear memory with linear hold periods.

This isn’t just software getting better. This is the economy rearranging itself around whoever controls the shared cognition graph.

X. The Shared Cognition Graph (how knowledge becomes infrastructure)

The thing nobody tells you about “automation” is that it doesn’t just make workflows faster. It changes what a workflow is.

I didn’t see it at first. We were just trying to make procurement less painful: hundreds of configs, endless exceptions, every customer insisting they were “special.” The codebase became a museum of scar tissue: 500 if-statements explaining human habits no one remembered creating.

Then orchestration flipped. We stopped telling the AI what to do and started letting it run the pipeline itself.

Something subtle happened. The rules dissolved, but the system got better anyway. Exceptions stopped being noise and started becoming memory. A PO mismatch solved in one customer became the default resolution for ten others overnight. The router wasn’t just deciding; it was learning.

At first, I thought this was a procurement problem. It isn’t. This is accounting. Freight. Healthcare claims. Property management. Anywhere workflows rhyme across silos, the same pattern emerges: context compounds faster than companies do.

That’s when it clicked: the unit of scale isn’t the business anymore. It’s the substrate: a shared memory system where every resolved exception increases the autonomy of the entire network.

This is the part most rollup decks miss. Everyone’s still focused on the surface-level arbitrage: buy a bunch of small, profitable companies, wire AI into their ops, compress costs, multiply EBITDA. But the real compounding isn’t financial; it’s cognitive.

When one node learns something, every connected node learns it instantly.

Fix a billing code in Entity A → denials drop across 100 clinics overnight.

Resolve a customs hold in freight → prior auth approvals accelerate in insurance.

Solve PO mismatches in procurement → invoice accuracy improves portfolio-wide, instantly.

No meetings. No “change management.” No project plans. The knowledge just… propagates.
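Mechanically, propagation can be that boring: a shared store keyed by exception pattern rather than by company. A hypothetical sketch (the pattern string, the fix, and the entity names are invented examples):

```python
# Illustrative shared exception memory: resolutions are keyed by pattern,
# not by entity, so every node reads from and writes to the same store.
resolutions: dict = {}

def resolve(node: str, pattern: str, solver=None) -> dict:
    if pattern in resolutions:
        # Some other entity already paid the cost of learning this.
        return resolutions[pattern]
    fix = solver(pattern)  # a human or agent does the work exactly once
    resolutions[pattern] = {"fix": fix, "first_learned_at": node}
    return resolutions[pattern]

# Entity A learns it once; Entity B inherits it on the next lookup.
resolve("clinic_A", "payer_X_denies_code_99213", solver=lambda p: "attach modifier -25")
print(resolve("clinic_B", "payer_X_denies_code_99213"))  # no solver needed
```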

And once context stops being local, companies stop being local too. The moat isn’t the software anymore. The moat is memory.

XI. The Human Operating Envelope (deepened)

Here’s the thing I didn’t understand when I was busy wiring up agents to move paperwork faster: autonomy isn’t a feature; it’s an inversion. The moment the substrate begins to orchestrate (calling tools, staging writes, resolving exceptions), the old chain of command flips. Humans stop being the system’s CPU and become validators of last resort. We don’t “automate tasks.” We redistribute decision rights. That’s why the resistance doesn’t show up in model accuracy graphs; it shows up in calendars, titles, and rituals.

Procurement was just my first glimpse. A two-line PO mismatch that our agent resolved instantly in one account sat unresolved for three weeks somewhere else, not because the model couldn’t do it, but because nobody had signed off on the idea that an agent should. Same inputs, same output, different ontology of who’s allowed to move. On paper, those orgs looked identical. Underneath, one of them had quietly inverted authority: tools propose, substrate routes, humans sign. The other still believed work flows up the pyramid until a sufficiently credentialed person blesses it with a click.

Why does the inversion happen at all? Because we hit the ceiling of human context-holding a while ago and pretended we didn’t. One PO isn’t “a PO.” It’s three systems, five threads, a contract PDF, a CSV invoice, two carrier portals, and a vendor rep who only answers the phone on Thursdays. Multiply that by hundreds of vendors and a few thousand exceptions, and no single person, or team, can hold the whole map in their head. The bureaucracy we built wasn’t malevolent; it was a coping mechanism for cognitive saturation. We created committees to pool context and then mistook the committee for the work.

Orchestration exists because the substrate can hold more context, faster, and in parallel, without pretending the map fits in a meeting. The router isn’t a cute flowchart; it’s a brainstem making a choice every step: do I act, do I fetch, do I ask, do I escalate? Confidence, budget, risk. Ask, search, call, retry, escalate. It’s brutally simple and that’s why it scales. The first time you watch it stage a write, produce a human-readable trace, and roll back cleanly when the risk budget says no, you realize how much of your “process” was just fear management with extra steps.

But this is where the social physics kick in. You can raise autonomy all you want; if meaning can’t keep up, the system wobbles. Context velocity is how fast the substrate learns from a resolved exception and propagates the fix everywhere it applies. Meaning velocity is how fast humans metabolize that change into trust. The substrate is asymptotic: one exception resolved in Cleveland becomes the default in Calgary before lunch. Humans… aren’t. We use stories and signatures to coordinate; we need to see the same lesson a few times before we stop clutching the “oh-shit handle.” That gap (context racing ahead while meaning limps after) is the drag you feel when a flawless demo becomes a month of “can we just loop Legal?” It’s not stupidity. It’s ontological lag.

Designing the human operating envelope is how you close that gap without lying to anyone. The mechanics look unromantic, because they are:

Two-phase commits where writes matter. The agent proposes, a deterministic tool stages, the commit happens only when the risk budget or a human says go. Reversibility isn’t a nice-to-have; it’s the difference between “trust me” and “read the receipts.”

Decision traces that read like a narrative, not a stack trace. Which model believed what, which context it saw, which thresholds were in force, which budget remained, why it didn’t escalate. If it takes you more than thirty seconds to reconstruct the story, you built a vibe, not a system.

A kill switch per workflow, owned by the business, not engineering. Terrifying the first week, liberating the second. Nothing earns runway faster than an incident that degrades gracefully while everyone can see why.

An exception library with human names. Not “ERR_419.” “Carrier portal renumbered fields on Tuesdays.” Each exception gets a minimal repro, a safe default, and a resolution recipe. Promote good fixes to global memory by default, allow local overrides when regulation actually differs. The eighth entity should never re-learn the first entity’s lesson because someone’s inbox is a knowledge silo.

Minimum Viable Intelligence defined in cash, not vibes. Auto-execute above this confidence; clarify in this band; escalate below; human review capacity is N%; p95 latency is M seconds; effective cost per successful outcome including rework must be <$X. These thresholds will be wrong at first. Good: version them by cohort and move on. Autonomy is a budget problem dressed in math (a minimal sketch follows this list).
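Here’s that last mechanic, plus the two-phase gate, as code. Every number is a placeholder to be tuned and versioned per cohort, exactly as the list prescribes; none of it is a real system’s policy:

```python
# Illustrative MVI policy "in cash, not vibes": placeholders, versioned per cohort.
from dataclasses import dataclass

@dataclass(frozen=True)
class MVIPolicy:
    version: str = "procurement-cohort-3"  # version thresholds; expect to be wrong
    auto_execute_above: float = 0.92       # confidence: act unattended
    clarify_above: float = 0.70            # band: fetch/ask before acting
    human_review_capacity: float = 0.15    # at most N% of volume may escalate
    p95_latency_s: float = 30.0            # M seconds, end to end
    max_cost_per_outcome: float = 4.00     # <$X including rework

def decide(policy: MVIPolicy, confidence: float) -> str:
    if confidence >= policy.auto_execute_above:
        return "auto_execute"
    if confidence >= policy.clarify_above:
        return "clarify"
    return "escalate"

def may_commit(policy: MVIPolicy, confidence: float,
               staged: bool, risk_budget_ok: bool) -> bool:
    # Two-phase gate: the agent proposes, a deterministic tool stages,
    # and the write commits only when policy and budget both say go.
    return staged and risk_budget_ok and decide(policy, confidence) == "auto_execute"
```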

Ship those, and the autonomy ceiling rises on its own. Skip them, and you’ll end up hiring a second company to babysit the first one.

Now, let’s talk about why people really resist. It’s not because the model is sometimes wrong. People forgive error if they believe in legitimacy. A doctor signs not because the model is dumb, but because society needs a name to sue. A controller wants the trace not because she loves logs, but because she knows what happens when an auditor asks, “Who decided this, and on what basis?” The sixth approval in your chain is not about quality; it’s about legitimacy theater. I don’t mean that as an insult. Theater is how institutions change without shattering. When we make the “theater” visible and cheap (named humans on the high-risk commits, automatic receipts for the rest), the hair goes back down on everyone’s neck and the system moves.

This is also why the 85/15 equilibrium appears everywhere right now. Technically, you could push to 95/5 in pockets. Socially, 85/15 is where meaning keeps up. The 15% isn’t really checking the AI’s work; it’s providing accountability, legitimacy, exception discovery, and evolutionary pressure. It’s the human API surface area where relationships, license, and judgment live. Pretend you can delete it and you’ll get revolt, not because people love keystrokes, but because we haven’t yet built a world where a trace is sufficient ritual.

You can feel the markets tugging on the same thread from a different angle. If context truly compounds faster than we can sell “tools,” you stop selling software and start selling systems of authority. That’s what Product-Led Acquisitions actually are: the right to invert the org chart and keep the cash flows when you do. Folks call it arbitrage because the spreadsheet says 3–5x EBITDA becomes 20–25x. But the spreadsheet is just a crude instrument for something subtler: you’re buying the distribution and licenses necessary to move decision rights from meetings into the substrate. That’s why roll-ups with dashboards stagnate and roll-ups with orchestrators compound. Memory moves faster than process.

There’s a moral undertow here I don’t want to varnish. When 85% of what we did was ferry context between silos, and the substrate ferries it better, what exactly were we doing? Some of it was skill. A lot of it was identity. We stapled a sense of worth to a serialization of keystrokes and called it a career. When the keystrokes evaporate, the person doesn’t. The risk is that institutions pocket the gains and call the rest “upskilling.” The opportunity is that we admit the quiet part: most of that “work” never deserved a human life, and now we have a chance, maybe our only one, to redirect human attention toward relationships, judgment, creativity, and stewardship. That requires more than a severance policy. It requires telling the truth about what changed.

So yes, I still like the dials. Router with budgets. Reversible writes. Exception memory with names. Boring dashboards. Shadow runs. But the deeper lesson is about velocity. If context velocity (what the substrate can learn and propagate) outruns meaning velocity (what humans can accept and own), the system tears. If you slow context to spare meaning, you squander the compounding. The art is in closing that gap without gaslighting anyone: raise the receipts, widen the kill-switches, allocate “trust bandwidth” where it buys the most movement, and keep the theater honest.

The winners here won’t be the teams with the shiniest models. They’ll be the ones who can let the substrate evolve without leaving the humans, institutions, and narratives behind, the ones who treat trust not as a slogan or a checkbox, but as the bottleneck through which the future must flow. Compute is becoming labor. Authority is becoming software. The question in front of us is embarrassingly human: can we move decision rights into the substrate at the speed our meaning can metabolize, and no faster? If we can, the 85/15 will drift toward 90/10 without anyone feeling the floor drop. If we can’t, the demo will keep winning and the deployment will keep losing, and we’ll blame the model for a problem that was always ours.