
Building AI features that survive the next regulatory change.

AI regulation is accelerating in every major jurisdiction. The features that survive the next regulatory cycle have specific architectural properties. Here is the engineering brief.

By J. Reichert · PRINCIPAL ENGINEER · KNYTE
PUBLISHED: APRIL 09, 2026
READ TIME: 12 MIN
CATEGORY: ENGINEERING

AI regulation is accelerating in every major jurisdiction we operate across. The EU AI Act has phased into application across 2025 and 2026. The US is patching together state-level frameworks and federal-agency guidance that, while less formally codified than the EU's, is operationally consequential for any deployment touching healthcare, finance, employment, or public-facing decisioning. The UK has published its principles-based framework and is starting to enforce against it. Singapore, Canada, Japan, and Australia have all published or will shortly publish frameworks that overlap meaningfully on the substantive requirements.

Across jurisdictions, the substantive requirements are converging on a small number of architectural properties. Features built without these properties will need remediation under whichever framework first becomes binding for the deployment; features built with them tend to be portable across frameworks with bounded incremental work. What follows is the engineering brief: the specific properties that show up in every framework worth tracking, and what implementing them looks like in production.

The five properties that recur across frameworks.

01. Decision provenance.

Every consequential output must be traceable to the model version, the corpus version, the policy in effect, and the editor (or absence thereof) who reviewed it. This is the audit-trail observability layer we covered separately. The frameworks vary in what counts as "consequential" but the underlying requirement is the same.
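A minimal sketch of what that traceability requirement implies in code. The `ProvenanceRecord` shape and `stampProvenance` helper below are illustrative names, not an API from the article; the point is that the record captures the absence of a reviewer explicitly rather than leaving it implicit.

```typescript
// A provenance record: every consequential output carries enough
// identifiers to reconstruct the decision months later.
interface ProvenanceRecord {
  decisionId: string;
  modelVersion: string;     // exact model build that produced the output
  corpusVersion: string;    // corpus snapshot retrieved against
  policyId: string;         // policy in effect at decision time
  editorId: string | null;  // reviewer, or null if no review occurred
  decidedAt: string;        // ISO-8601 timestamp, captured at decision time
}

// Stamp the record when the decision is made, not reconstructed after.
function stampProvenance(
  decisionId: string,
  env: { modelVersion: string; corpusVersion: string; policyId: string },
  editorId: string | null,
): ProvenanceRecord {
  return { decisionId, ...env, editorId, decidedAt: new Date().toISOString() };
}
```

The `editorId: string | null` field is the load-bearing detail: "no editor reviewed this" is itself a fact the audit trail has to assert.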

02. Editor accountability for high-risk decisions.

Decisions classified as high-risk under the operative framework must have a human accountable for the decision, with the human's review captured at the time of the decision. This is the editor-in-the-loop pattern, expressed as a regulatory requirement. The frameworks differ on what counts as high-risk; the implementation is the same.
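One way to express that gate, assuming hypothetical `Review` and `finalizeDecision` names: a high-risk decision without a captured review never reaches a final state, regardless of what the model produced.

```typescript
// A captured human review, recorded at the time of the decision.
interface Review {
  editorId: string;
  verdict: "approved" | "rejected";
  reviewedAt: string;
}

// A high-risk decision without an accountable editor is blocked,
// not finalized with a missing field.
function finalizeDecision(
  risk: "high-risk" | "standard",
  review: Review | null,
): { status: "final" | "blocked"; review: Review | null } {
  if (risk === "high-risk" && review === null) {
    return { status: "blocked", review: null };
  }
  return { status: "final", review };
}
```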

03. Bias and quality monitoring with documented procedures.

Deployments must demonstrate that they monitor for quality regression and disparate impact across protected characteristics. The monitoring must be documented, the procedures must be written, the results must be retained. The eval suite we wrote about here is the engineering substrate; the documentation is the regulatory deliverable.
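As one concrete monitoring check, a disparate-impact screen along the lines of the four-fifths rule can be sketched as below. The function name and threshold are illustrative assumptions, not prescribed by any framework cited here; real deployments would pick the metric their counsel and framework require.

```typescript
// Flag any group whose selection rate falls below a fraction
// (default 80%) of the best-performing group's rate.
function disparateImpactFlags(
  rates: Record<string, number>, // group -> selection rate in [0, 1]
  threshold = 0.8,
): string[] {
  const max = Math.max(...Object.values(rates));
  return Object.entries(rates)
    .filter(([, rate]) => rate < threshold * max)
    .map(([group]) => group);
}
```

The output of a check like this is exactly the kind of retained, dated result the documentation requirement is asking for.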

04. Data minimization and purpose limitation.

The data the deployment processes must be limited to what is necessary for the declared purpose. Excess data ingest is structurally not permitted. The corpus governance that supports this is closer to the data-classification audit than to the model architecture, but it shows up in engineering as constraints on what the corpus indexer is allowed to ingest.
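At the indexer, that constraint can be as simple as an allow-list keyed by declared purpose. The purpose name and data classes below are invented for illustration; the mechanism is the point: ingest is deny-by-default, and anything not declared for the purpose never enters the corpus.

```typescript
// Data classes the deployment declared it needs, per purpose.
const purposeAllowList: Record<string, Set<string>> = {
  "claims-triage": new Set(["claim-text", "policy-terms"]),
};

// Deny-by-default gate the corpus indexer calls before ingesting anything.
function admitToCorpus(purpose: string, dataClass: string): boolean {
  const allowed = purposeAllowList[purpose];
  return allowed !== undefined && allowed.has(dataClass);
}
```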

05. Right of explanation, scoped to the decision.

An individual subject to a consequential AI decision has the right to a meaningful explanation of how the decision was reached. The explanation does not require model interpretability; it requires the deployment to produce a coherent narrative grounded in the inputs, policies, and editorial actions in scope at the time. This is achievable from the audit-trail layer if the layer was designed for it.
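A sketch of that narrative generation, assuming a hypothetical `AuditRecord` shape: the explanation is rendered purely from fields captured at decision time, with no appeal to model internals.

```typescript
// Only what the audit trail captured at decision time.
interface AuditRecord {
  inputsSummary: string;      // human-readable summary of inputs in scope
  policyId: string;
  modelVersion: string;
  editorId: string | null;
}

// Render a coherent narrative from the record alone.
function explain(a: AuditRecord): string {
  const review = a.editorId
    ? `reviewed and approved by editor ${a.editorId}`
    : "issued without individual editor review";
  return (
    `This decision was based on ${a.inputsSummary}, evaluated under ` +
    `policy ${a.policyId} by model ${a.modelVersion}, and ${review}.`
  );
}
```

If the audit layer was designed with this in mind, `explain` is a template over fields you already store; if it was not, the fields do not exist to template over, which is the retrofit cost the article describes.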

What the implementation looks like in code.

The five properties are not five separate engineering projects. They are five facets of the same observability and governance architecture. Most of the work is in instrumentation; some of the work is in process documentation that the engineering team usually does not own. The engineering investment is bounded; the process investment is the part that surprises teams.

// Decision provenance carried through the workflow runtime
const decision = await workflow.execute({
  input,
  policy: policy.activeAt(now),          // policy in effect at decision time
  modelVersion: model.activeVersion,     // exact model build
  corpusVersion: corpus.activeVersion,   // corpus snapshot retrieved against
  editorRequired: classification === "high-risk", // forces captured review
  audit: {
    purpose: workflow.declaredPurpose,   // purpose limitation (04)
    dataClasses: input.classifications,  // data-minimization evidence (04)
    explanationStub: workflow.explanationTemplate, // seeds explanations (05)
  },
});

Three regulatory failure modes we have seen.

Provenance gaps. The deployment can produce decisions but cannot reconstruct how a specific decision was reached six months later. This is the most common failure mode. It fails decision-provenance and right-of-explanation simultaneously.

Documentation drift. The eval and monitoring procedures exist in code but were never written down in a form a regulator can read. The deployment is doing the right things; it cannot defend that it is doing them. This is fixable cheaply if caught before an audit and expensive otherwise.

Data-minimization overshoot. The corpus contains data the deployment does not need, ingested under a previous broader purpose, retained because nobody removed it. The data is technically irrelevant to current operation; it is regulatorily exposed because it exists. The remediation is corpus governance discipline, not engineering.
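The first failure mode, provenance gaps, is the one that is cheap to detect mechanically. A scheduled audit along these lines (names are illustrative) scans stored decisions for missing provenance fields before a regulator asks the question:

```typescript
// What the decision store retains; fields optional because gaps happen.
interface StoredDecision {
  id: string;
  modelVersion?: string;
  corpusVersion?: string;
  policyId?: string;
}

// Return the ids of decisions that cannot be reconstructed.
function findProvenanceGaps(decisions: StoredDecision[]): string[] {
  const required = ["modelVersion", "corpusVersion", "policyId"] as const;
  return decisions
    .filter((d) => required.some((k) => d[k] === undefined))
    .map((d) => d.id);
}
```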

What this means for the next feature you ship.

The features that survive the next regulatory cycle are not the features that anticipated the specific cycle. They are the features built around architectural properties that recur across frameworks. The cost of building those properties from the start is bounded; the cost of retrofitting them under regulatory pressure is the cost of a re-platforming.

The five properties enumerated above are also the architectural properties that produce a deployment that survives the eighteen-month half-life we wrote about in the agentic-SaaS dispatch. Regulatory durability and architectural durability are converging on the same set of requirements. Building toward the architectural pattern produces the regulatory posture as a byproduct.

If you are shipping AI features today and the five properties are not present at the architectural layer, the right framing is not "are we at risk under the current framework." It is "would we want to be at the start of the next regulatory cycle without these properties in place." The cost of acting now is bounded; the cost of acting under regulatory pressure historically has not been.

The teams that have built toward the five properties also report a lateral benefit that is hard to anticipate before adopting the discipline: their procurement conversations get noticeably easier. Buyers in 2026 are asking the regulatory questions early in evaluation, often in the first technical conversation. Vendors that can answer with reference to specific architectural mechanisms — "here is the audit trail you would have access to, here is the editor accountability surface, here is the data minimization control" — close deals that vendors who answer with general assurances do not. Regulatory durability and procurement velocity are correlated. The same investment buys both.

J. Reichert · PRINCIPAL ENGINEER · KNYTE

Twelve years on production retrieval and inference systems. Previously at Stripe (risk infra) and Anthropic (eval tooling). Writes about the boring parts of agentic infra.
