
Why per-seat pricing is structurally wrong for AI tooling.

Per-seat pricing made sense for SaaS where each seat unlocked a discrete unit of value. AI tooling is supposed to compound across the team. Per-seat pricing taxes the compound. Here is what to use instead.

By A. Vasquez · Principal Thesis · Knyte
Published: March 04, 2026
Read time: 10 min
Category: Product

Per-seat pricing is the default in B2B SaaS for a defensible reason: in most SaaS categories, each additional seat unlocks a discrete unit of value. A new Slack user can send messages they could not send before. A new Notion user can write documents they could not write before. The value scales linearly with seat count, and the price tracks the value.

AI tooling does not work this way. The pitch of AI tooling is that it compounds — that the value of the system grows faster than the seat count, because each seat contributes corrections, context, and editorial signal that improves the system for every other seat. If the compounding pitch is real, per-seat pricing is structurally taxing the compound. Every additional user adds revenue to the vendor and zero net asset to the buyer, because the asset belongs to the vendor.

The pricing model that matches AI tooling's value model is different in shape. It tracks the asset, not the seat. It scales with the depth of the deployment, not the breadth. And it gives the buyer a clean answer to the question CFOs are now asking on every renewal: what am I getting per dollar that I would still have if I cancelled?

Why per-seat fails the AI compounding test.

Three structural problems with per-seat pricing for AI tooling.

It penalizes adoption breadth. Per-seat pricing makes the marginal user expensive. AI deployments compound when more of the team contributes editorial signal — corrections, accept/reject decisions, brand-voice tunings. Per-seat pricing structurally caps the contribution surface, which caps the compounding.

It misaligns vendor incentives. Per-seat pricing rewards the vendor for seat expansion. The vendor's roadmap optimizes for features that drive seat expansion, not for features that compound the asset. The features that would help the buyer most are deprioritized in favor of features that help the seat-count grow.

It hides the cost of corpus depth. Per-seat pricing decouples the price from the actual cost driver of the deployment, which is corpus depth and inference volume. A deployment with five users producing fifty thousand outputs per month costs the vendor roughly the same as a deployment with fifty users producing fifty thousand outputs. Per-seat pricing produces a bill that has very little to do with the cost the vendor is incurring or the value the buyer is receiving.
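The decoupling in that third point can be made concrete with a toy calculation. The numbers below — the per-seat price and the vendor's per-output cost — are invented for illustration, not figures from any real contract.

```python
# Illustrative only: two deployments with identical output volume.
SEAT_PRICE = 60.00       # assumed per-seat monthly price
COST_PER_OUTPUT = 0.01   # assumed vendor-side cost per generated output

def monthly_bill(seats):
    """Buyer's bill under per-seat pricing: scales with seats."""
    return seats * SEAT_PRICE

def vendor_cost(outputs):
    """Vendor's cost: driven by output volume, not by seats."""
    return outputs * COST_PER_OUTPUT

# Deployment A: 5 users, 50,000 outputs/month.
# Deployment B: 50 users, 50,000 outputs/month.
for seats in (5, 50):
    outputs = 50_000
    print(f"{seats:>2} seats: bill ${monthly_bill(seats):>8,.2f}, "
          f"vendor cost ${vendor_cost(outputs):,.2f}")
```

The bill moves by a factor of ten between the two deployments; the vendor's cost does not move at all. That gap is the decoupling the paragraph describes.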

What pricing models actually fit AI tooling.

Three pricing structures we have seen work, in roughly increasing order of buyer-vendor alignment.

Usage-based, with editorial-action discounts. Price per output, with a discount for outputs that an editor accepted (signaling the output was high-quality and contributed to the corpus). The vendor is rewarded for producing useful outputs, not for producing outputs in volume. The buyer's cost tracks their output volume.

Workflow-tier pricing. Price per workflow at a tier reflecting depth (basic, advanced, mission-critical), with unlimited users and outputs within the workflow. The vendor is incentivized to make each workflow more valuable, not to expand seat counts. The buyer pays for workflow depth, which is what they actually consume.

Architecture pricing. A fixed annual fee for the architecture install (model, corpus, runtime), with usage costs passed through at cost. The vendor is paid for delivering and maintaining infrastructure; the consumption is the buyer's. This is the model we run on most Knyte installs and it has the cleanest alignment with the asset frame — what the buyer is buying is the architecture, not the consumption.
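The three structures can be sketched side by side for one hypothetical deployment. Every rate, tier price, fee, and the accepted-output discount below is an assumption chosen for illustration; the shape of each formula, not the numbers, is the point.

```python
# Hypothetical deployment: 50,000 outputs/month, 70% accepted by an
# editor, 4 workflows at a mid tier. All rates are invented.

def usage_based(outputs, accept_rate, rate=0.03, accepted_discount=0.5):
    """Price per output, discounted when an editor accepted the output."""
    accepted = outputs * accept_rate
    rejected = outputs - accepted
    return rejected * rate + accepted * rate * (1 - accepted_discount)

def workflow_tier(workflows, tier_price=900):
    """Flat price per workflow at its tier; unlimited seats and outputs."""
    return workflows * tier_price

def architecture(annual_fee=60_000, outputs=0, passthrough=0.01):
    """Fixed annual fee for the install, plus usage passed through at cost."""
    return annual_fee / 12 + outputs * passthrough

outputs, accept_rate, workflows = 50_000, 0.70, 4
print(f"usage-based:   ${usage_based(outputs, accept_rate):>8,.2f}/mo")
print(f"workflow-tier: ${workflow_tier(workflows):>8,.2f}/mo")
print(f"architecture:  ${architecture(outputs=outputs):>8,.2f}/mo")
```

Note what each function takes as input: outputs and editorial signal for the first, workflow count for the second, a fixed fee plus pass-through for the third. None of them takes a seat count, which is the structural difference from per-seat pricing.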

Why this matters for product teams.

If you are designing the pricing of an AI product, the per-seat default is a fast path to the renewal-trap conversation we wrote about in a separate dispatch. Buyers are increasingly asking the asset question early in evaluation. A pricing model that obscures the asset — by making the bill grow with seats rather than with capability depth — invites the wrong category of customer and produces churn around month eighteen when the asset question becomes a board-level conversation.

The pricing model that wins, in the procurement environments we are watching, is the one that lets the buyer answer the asset question with confidence. Architecture pricing, workflow-tier pricing, and usage-based-with-editorial-discount pricing all clear that bar. Per-seat does not.

If you are buying an AI product priced per seat, ask the vendor what the per-output cost looks like at typical usage. The answer is often surprising. The per-seat number implies an economics that the per-output reality does not support. The conversation that follows is the one to have before signing the renewal.
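If the vendor will not do the arithmetic, the buyer can. A one-line sketch with invented numbers — the seat count, price, and volume are illustrative, not from any real deployment:

```python
# Hypothetical: 200 seats at $60/seat/month producing 40,000 outputs/month.
seats, seat_price, outputs = 200, 60.00, 40_000

# Effective per-output cost under per-seat pricing.
per_output_cost = seats * seat_price / outputs
print(f"effective cost per output: ${per_output_cost:.2f}")
```

Compare that number against the vendor's per-output rate card, if one exists; the gap between the two is what the conversation is about.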

What this looks like in practice.

We have run the comparison across nineteen portfolio audits in the last year. The pattern is consistent enough to describe as a rule. A buyer running per-seat AI tooling at moderate adoption (around twenty percent of the addressable seat count) is paying roughly twice what the same workflows would cost under a workflow-tier or architecture-priced equivalent. At sixty percent adoption — which is what the AI vendor's deck assumed when the contract was signed — the multiplier rises to about three. The compounding curve, ironically, makes the per-seat economics worse, because each additional active seat pays again for value the underlying architecture supplies once.

The architecture-priced alternative does the opposite. The fixed annual fee is sized against the workflow inventory and the corpus depth, not against the seat count. As adoption grows, the per-output cost falls because the architecture investment is amortized across more output. This is the alignment buyers are increasingly asking for, and the vendors who can offer it are the ones winning the procurement conversations we are sitting in on.
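The adoption-multiplier pattern from the audits reduces to arithmetic. The seat price, architecture fee, pass-through rate, and output volume below are assumptions tuned so the ratios land near the observed roughly-2x and roughly-3x; they are not the audit data itself.

```python
# Assumed deployment: 500 addressable seats, $60/seat/month per-seat
# pricing, each active seat producing 500 outputs/month; architecture
# pricing at $18,000/year fixed plus $0.03/output passed through at cost.
SEATS, SEAT_PRICE = 500, 60.00
OUTPUTS_PER_ACTIVE_SEAT = 500
ARCH_FIXED_MONTHLY = 18_000 / 12
PASSTHROUGH = 0.03

def per_seat_bill(adoption):
    """Per-seat bill: grows linearly with active seats."""
    return SEATS * adoption * SEAT_PRICE

def architecture_bill(adoption):
    """Architecture bill: fixed fee plus usage at cost."""
    outputs = SEATS * adoption * OUTPUTS_PER_ACTIVE_SEAT
    return ARCH_FIXED_MONTHLY + outputs * PASSTHROUGH

for adoption in (0.20, 0.60):
    ps, arch = per_seat_bill(adoption), architecture_bill(adoption)
    print(f"{adoption:.0%} adoption: per-seat ${ps:,.0f}/mo vs "
          f"architecture ${arch:,.0f}/mo -> {ps / arch:.1f}x")
```

Under these assumptions the same run also shows the amortization claim: the architecture deployment's per-output cost falls from six cents at twenty percent adoption to four cents at sixty, while the per-seat bill simply triples.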

Three questions for the next pricing conversation.

When the vendor walks through their pricing model, three questions surface whether the model is aligned with the value the buyer is actually receiving.

  1. What does our cost look like at twenty percent adoption versus sixty percent? If the answer scales linearly with seat count, the pricing is structurally taxing the compounding curve. The question is whether the buyer is willing to pay that tax.
  2. What asset do we still own if we cancel in twelve months? If the answer is "none of the model, none of the corpus, none of the workflow definitions," the buyer is renting a capability rather than building one. The pricing model should reflect that distinction; usually it does not.
  3. Is the price per workflow available, separately from the price per seat? Most vendors will produce this number on request. Most buyers do not request it. The number is informative.

The pricing model is downstream of the value model. If the value model is genuinely compounding — if the system actually gets better as more of the buyer's team contributes editorial signal — then the pricing model needs to allow that compounding to accrue to the buyer rather than being clawed back by the vendor at every renewal. Per-seat pricing was the right answer in a world where the value did not compound. AI is, by its own pitch, not that world. The pricing has to catch up.

A. Vasquez · Principal Thesis · Knyte

Former CFO at three growth-stage SaaS companies. Writes the replacement-math frame the Knyte team uses on every architecture call. Stanford GSB; CPA.
