Unsigned Cognition

2026-05-05

Read the API terms of any frontier AI lab. Output is provided “as-is” and “for informational purposes only.” All warranties are disclaimed. The customer indemnifies the lab against any claim arising from use of the model. It reads like legal boilerplate, but it is the economic precondition for the token to be sold for fractions of a cent to hundreds of millions of users a day. A token comes with no signature attached. No one signs the diagnosis, the audit, the engineering drawing. The model produces all of them and underwrites none of them.

Every other form of cognition humans have ever paid for was signed. The signature is an accountability instrument before it is a legal one. A doctor signs a diagnosis and can be sued. An auditor signs a 10-K and can lose their license. A structural engineer stamps a drawing and goes to prison if the building falls. The legal version is the most enforceable, but the mechanism reaches further than law. A journalist’s byline can be retracted; a scientist’s paper can be withdrawn; an architect’s name stays on the building for as long as the building stands. The signature, the seal, the byline: the same instrument in different costumes, a named person whose career, license, or freedom is tied to the output. Tokens are the first form of paid cognition in human history that carries none of that, and that absence is what makes the price possible. Call it unsigned cognition.

Two company shapes survive the shift. Token distributors generate unsigned cognition by the trillion, win on capex and distribution, and consolidate to a handful of global players. Underwriters sell intelligence applied to something a model is not allowed to sign for: signed atoms (a regulated physical operation), signed relationships (a fiduciary tie), pure attestation (an audit by a named partner), or signed context (an archive whose owner is on the hook for every byte). The middle, what we currently call SaaS, traffics in unsigned cognition with extra steps. AI prices the extra steps to zero.

Distributors and Underwriters

A token distributor sells inference. The product is a number of tokens at a latency and a price. Hyperscaler economics rule: training runs in the billions, talent fungible only at the margins, distribution that compounds with usage. Margins compress toward zero the way commodity prices do. Expect roughly the same population as cloud hyperscalers; the inputs to scale are the same.

An underwriter is a company that holds at least one signature a model is not allowed to assume. There are four kinds. Each requires a named human, not a corporation, whose career, license, or freedom is staked on the output.

A pharma plant’s QA director signing every batch release under criminal liability, an aircraft type certificate held by named officers, a regional dialysis chain’s clinics each operating under a named Medical Director’s CMS license, a nuclear plant’s senior reactor operators on a license that took decades to acquire: these are signed atoms. After a century of disasters the regulatory state stopped trusting corporations and started requiring an actual human in each critical role, with criminal exposure or personal license at stake. The model can advise the operation; it cannot become the named officer.

A bank holding trillions in client assets under fiduciary duty, a registered investment advisor filing an ADV with the SEC, a primary care physician at the center of a patient’s chart: these are signed relationships. Trust at scale is a regulated instrument; breach is litigable; the relationship took decades to build and transfers slowly.

An auditor signing a 10-K under PCAOB inspection, a ratings agency signing creditworthiness, a SOC-2 firm signing security posture, UL (Underwriters Laboratories) signing electrical safety, bond counsel signing the legality of a municipal issuance: these are pure attestation. The customer is paying for the willingness of a regulated entity to be sued if the analysis was wrong, not for the analysis itself.

Bloomberg’s terminal, Epic’s installed base, an ERP with thirty years of joins no model has seen: these are signed context. The archive is proprietary; the named Data Protection Officer who signs for it under GDPR, HIPAA, or PCI-DSS is what makes the moat regulatory rather than informational.

A company is an underwriter if it holds at least one of these. If it holds none, it is a SaaS interface to commodity intelligence with no one personally on the hook for the output. The structural test: hand a competent in-house team a frontier coding agent and a quarter of runway. Can they reproduce the product? If yes, the product was the middle. Try the test on your own company. What could they rebuild, and what would they miss?

Scorched Adjacencies

The mechanism is older than AI: platforms with a profitable core commoditize their adjacencies, and anything charging margin between the user and the core dies. In 2008, Andy Rubin’s team at Google released Android as an open-source operating system and gave it to every handset manufacturer who wanted it. Google did not need to profit from the OS; it needed the OS on every phone so that Google Search stayed the default, which protected the advertising core. By 2013, Android ran on 80% of smartphones sold worldwide, and the margins of every mobile software company that sat between user and search compressed toward zero. Amazon ran the same play with AWS: price infrastructure below profitability thresholds competitors could not match, and let the cheap infrastructure feed e-commerce traffic back to the core.

The token distributors are running this play one floor up. The core is unsigned cognition: produced by the trillion, indemnified by the disclaim, sold by the unit. They price everything between the user and the model toward zero: coding assistants, search, translation, document analysis, customer support tooling. They do not need to win those categories; they need to make them cheap enough that no SaaS company can charge a margin on top of inference. The margin destruction is deliberate.

What they cannot price to zero is the signature. A coding agent will rebuild a project tracker in a weekend; it will not sit for the FAA’s airworthiness exam, hold a state medical license, or accept fiduciary duty under ERISA. The categories that survive this pressure are the ones the distributors are legally not allowed to enter without giving up the disclaim language that makes the price work, not the ones they haven’t gotten to yet. The day a frontier lab indemnifies a hospital against a misdiagnosis, the lab stops being a token distributor in that vertical and becomes a medical-device company, and its margin structure changes overnight.

The deeper reason underwriters survive is taste, which is the operational form of carrying liability. An underwriter accumulates a reward function tuned to its domain: which edge cases matter, which workflows users actually run, which failures are catastrophic versus cosmetic. Consider a dialysis chain at 3 AM: an alarm sounds, and the nurse reaches for the silence button because she’s heard this one a thousand times. But another alarm, with a slightly different cadence, means a patient is crashing. She knows the difference. No training manual taught it to her; ten thousand shifts did. A freight broker knows which carriers ghost loads on Fridays and which ones answer the phone. That knowledge accrues through thousands of shifts and seasons and encodes itself in product decisions no distributor will ever see. Taste cannot be shipped from outside, because a distributor that gets the taste wrong faces no consequence; the underwriter does, which is why the underwriter develops it.

What Survives and What Doesn’t

The token layer concentrates because its inputs are fungible globally: GPUs, electricity, transformer architectures, web-scale text. The underwriting layer fragments because its inputs are not. Healthcare alone produces hundreds of vertical underwriters (dialysis chains, radiology networks, regional EHR integrators, pediatric specialty groups), and the same is true in industrials, logistics, regulated finance, education, government procurement. Each one looks more like a regional bank than a horizontal SaaS company. But the long tail is selective, not democratic: selling a slim layer over a frontier model and a Postgres database is a doomed middle in vertical clothing, no matter how niche the vertical, because the slim layer carries no signature.

The dividing line is whether a system terminates inside one firm or spans outward. Anything that terminates inside one company collapses to a prompt: internal CRMs, dashboards, project trackers, ETL glue, ops runbooks, all regenerated by whichever team needs them, often overnight, because no signature is required. But a coding agent does not give a hospital a thirty-year longitudinal patient record. It does not produce a Class A commercial driver’s license, a SOC-2 history with named auditors, or the trust of a primary care physician. It does not become the named officer on an OSHA filing. The inside of the firm collapses to prompts; the outside, where the firm meets other firms, regulators, and history, does not.

The Survival Test

Pick any company. Ask four questions. Does it own a physical operation regulated by named humans personally on the hook? Does it sit at the center of relationships secured by fiduciary duty or licensure that took decades to build and cannot be transferred by API? Does it produce a signed attestation that is regulated, insured, or legally required? Does it hold a proprietary archive whose officers carry personal liability for breach? If the answer to all four is no, the company is a prompt waiting to be written. It may have brand, distribution, a hundred million users, a beloved product. None of those put anyone personally on the hook for the output.
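The four questions reduce to a mechanical check, which can be sketched as code. This is a minimal illustration, not anything from the essay itself: the `Company` fields and example values below are hypothetical stand-ins for the four signature types.

```python
from dataclasses import dataclass

@dataclass
class Company:
    """Hypothetical flags mirroring the four questions of the survival test."""
    signed_atoms: bool          # regulated physical operation with named humans on the hook
    signed_relationships: bool  # fiduciary duty or licensure that cannot transfer by API
    pure_attestation: bool      # regulated, insured, or legally required sign-off
    signed_context: bool        # proprietary archive with personal liability for breach

def is_underwriter(c: Company) -> bool:
    # One signature is enough; zero means the company is a prompt waiting to be written.
    return any([c.signed_atoms, c.signed_relationships,
                c.pure_attestation, c.signed_context])

# Illustrative instances: a licensed payments company passes (money-transmitter
# licenses, PCI attestations); a project tracker holds no signature and fails.
payments = Company(signed_atoms=False, signed_relationships=True,
                   pure_attestation=True, signed_context=False)
tracker = Company(False, False, False, False)
print(is_underwriter(payments), is_underwriter(tracker))  # True False
```

The disjunction is the point: brand, users, and distribution never appear as inputs, because none of them put a named person on the hook for the output.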

A payments company that owns money-transmitter licenses in fifty jurisdictions and signs PCI attestations passes; Stripe is one of the safest companies in the entire AI transition. A code-hosting tool that a coding agent and a quarter of runway can reproduce fails. A project management suite fails for the same reason. Most analytics companies, most customer support platforms, most document editors: prompts. The question is whether anyone is willing to stake their name on the output, not whether the product is good.

The depth of the signature varies, which is where most “we own the atoms” claims fail. A semiconductor fab has dozens of named officers across export controls, OSHA, environmental, equipment qualification, customer audits: a deep signature that took years to assemble. A coffee shop has a food handler permit: a shallow one. Both are technically signers, but the moat scales with depth, not square footage. “We own warehouses” or “we own trucks” usually doesn’t survive scrutiny: the atoms are real, the signatures are shallow (a CDL, a DOT number, basic insurance), and the operation is reproducible by anyone with capital.

Some companies will discover they pass for a reason they never marketed. A design tool survives not because of its AI features but because it became the design-system-of-record for thousands of enterprises with named DPOs and content that carries regulatory weight. A payroll company survives not because of its interface but because it signs tax returns under personal officer liability in every state. The survivors will rebrand around their unsynthesizable signature, not their software. The ones that can’t find such a signature will have their answer.

Full Vertical

The adjacency-scorching will not stop at coding assistants and search. A distributor has no reason to leave any token-generating surface in someone else’s hands. Chrome exists because Google needed every browser’s default search to be Google. The lab browser will exist for the same reason: whoever owns the surface where reasoning happens owns the billing meter for every token that flows through it. Expect at least one major distributor to ship a browser by 2027 and offer git hosting at or below cost in the same window. The distributor does not need to profit from browsers or repos; it needs to own the surfaces, and it will price every incumbent out of the way to get them.

Follow the logic one step further and the distributor becomes the literal operating system. Apple already ships on-device models. Google already runs Gemini inside Android. Within a decade the traditional OS (files, apps, windows) is a legacy compatibility layer, and the primary interface is a model runtime that holds the full context of everything you do on the device. The token is the new instruction set. The lab is the new Intel, the new Microsoft, the platform on which everything else runs.

Apply the framework to that endgame and it gets uncomfortable. Brand is not a signature. User base is not a signature. Network effects, unless they involve fiduciary standing, personal license, or regulated ties, are not signatures. Companies that think they are safe because they have 200 million users are in the middle unless those users are connected by obligations no model can intermediate. The framework, taken seriously, is more destructive than the essay has admitted until now.

The exception worth flagging is what happens when a distributor decides to underwrite. Some labs will eventually accept verticalized liability: a medical AI with malpractice insurance attached, an audit AI with E&O coverage, a coding agent that indemnifies the customer against IP infringement. At that point the lab stops being a token distributor in that vertical and becomes an underwriter, and its margin structure changes accordingly. The economics of the disclaim language do not permit this at the horizontal layer; they may permit it at the vertical. That is where the next decade of strategic motion happens. Watch for the first lab that accepts a malpractice premium.

The Endgame

Token distributors win at the bottom, underwriters win above them, and the top layer is disposable, rebuilt per-firm, often overnight; the regenerated applications run on the underwriters, and the underwriters run on the tokens. Two or three lab-platforms will own the runtime the way Microsoft owned the desktop in 1995, except with the ability to observe and intermediate every cognitive task, not just every file operation. Alongside them, a local-sovereignty ecosystem will function the way Linux does today: powerful, principled, and used by a small minority who care enough to maintain it. Everyone else will choose the most capable distributor and never look at the weights.

Every form of cognition humans have ever paid for came down to one structural fact: somebody had to put their name on the output and answer for it. That requirement does not go away because cognition gets cheap; if anything, cheap cognition makes the named human more valuable, because the volume of decisions in the world goes up and the volume of consequences goes up with it. The token distributors cannot become that person, by structure, without giving up the disclaim language that makes the price work. So the question every company should ask is the simplest version of the framework: whose name is on the output, and what happens to that person if the output is wrong? The companies whose answer is “no one’s, and nothing” are selling unsigned cognition with extra steps, and the price is going to zero. The companies whose answer is “this specific human, and they lose their license” have a moat the model is not allowed to cross. For now.