Every incumbent enterprise SaaS vendor has shipped an AI integration in the last two years. The CRM vendors did it. The marketing automation vendors did it. The HRIS vendors did it. The customer support vendors did it. In nearly every case, the integration was delivered as a sidebar — a chat affordance, a generate-this button, a summarize-this menu item — bolted onto the existing product. The integrations shipped on quarterly roadmap cycles. They were celebrated in earnings calls. And, if our landscape tracker is any guide, they have a median lifespan of about fourteen months before they are quietly rewritten as architecture-first features that look nothing like what shipped originally.
The pattern is consistent enough that it is worth naming. The incumbent ships an integration. The integration gets uneven adoption. The customer support team starts seeing tickets about hallucinations and context loss. The product team realizes the integration cannot reach the data it would need to be useful, because the underlying product was not architected to make that data accessible to a model. The integration gets rewritten — sometimes after a leadership change, often after a board-level conversation about why the AI bet has not paid off — into something that looks more like a redesigned product than an integration.
The lesson is one incumbent vendors keep relearning, at considerable expense, and we think it is worth stating directly: integration and architecture are different categories of work. Treating one as the other produces a product that ships on time, looks good in the demo, and falls apart in enterprise deployments around month nine.
What the integration frame assumes.
An integration is, by definition, a wrapper. The integration frame assumes the underlying product is the durable asset. The AI capability is something added on top — useful, differentiating, but ultimately accessory to the workflow the buyer is paying for. This is the right frame for many product additions. Spell-check is an integration. SSO is an integration. Most analytics dashboards are integrations.
It is the wrong frame for AI. The reason is structural: an AI capability that does anything useful needs to read across the product's data model in ways the product was not designed to permit. It needs to retrieve from records that were stored under one access pattern and surface them under another. It needs to maintain a queryable memory that does not exist in the underlying product. It needs to reason about workflow state that is currently encoded as UI affordances rather than as a workflow primitive. The integration cannot reach what it needs to reach without modifying the product, at which point it is no longer an integration.
The path of least resistance for the incumbent product team is to ship the integration anyway, with the data access constraints it can deliver in the time available, and call the gap a future roadmap item. The gap is rarely closed on the roadmap. It is closed in the rewrite.
What the architecture frame requires.
Architecture starts from the AI capability and works backwards into the product. The corpus is a first-class object. The retrieval pipeline is a first-class object. The model is a first-class object with its own version, its own deployment lifecycle, and its own monitoring surface. The workflow runtime is a first-class object that the product's UI sits on top of, not the other way around. The product is the surface; the architecture is the engine.
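The first-class-object framing can be made concrete with a type sketch. Everything below is illustrative, not drawn from any real vendor's API: the interface names (`Corpus`, `RetrievalPipeline`, `Model`, `WorkflowRuntime`) and their fields are our shorthand for the components named above. The structural point is that each primitive carries its own version and lifecycle, and the UI depends on the runtime rather than the reverse.

```typescript
// Hypothetical type sketch: AI primitives as first-class objects.
// All names are illustrative, not any vendor's actual API.

interface Versioned {
  version: string; // each primitive has its own release cadence
}

interface Corpus extends Versioned {
  records: string[]; // the data the model is permitted to read
}

interface RetrievalPipeline extends Versioned {
  retrieve(corpus: Corpus, query: string): string[];
}

interface Model extends Versioned {
  deploymentStage: "staging" | "production"; // its own deployment lifecycle
  monitoringEndpoint: string;                // its own monitoring surface
  answer(context: string[], query: string): string;
}

// The workflow runtime owns the primitives; the product UI sits on top.
interface WorkflowRuntime extends Versioned {
  corpus: Corpus;
  pipeline: RetrievalPipeline;
  model: Model;
}

// A crude structural check: does the offering expose all the primitives,
// or only some of them (the integration case)?
function isArchitectureFirst(x: Partial<WorkflowRuntime>): boolean {
  return Boolean(x.version && x.corpus && x.pipeline && x.model);
}
```

A sidebar integration, in this framing, would hold only hooks into the existing UI surface; it has no `Corpus` or `WorkflowRuntime` of its own to expose, which is exactly why it cannot reach what it needs to reach.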
This is harder. It is more expensive in the short term. It cannot be shipped on a quarterly roadmap. It requires the product team to admit that the existing product is not the durable asset. It requires the engineering team to rebuild data access patterns. It requires the sales team to stop selling the existing product's affordances and start selling something the customer has not seen before.
The reason architecture wins, in the deployments we see, is that it survives the eighteen-month inflection. By month eighteen, the integration approach has produced workflows that hit a ceiling and a customer base that has started to grumble. The architecture approach has produced workflows that compound and a customer base that has started to expand the deployment voluntarily.
Why this matters for buyers.
If you are a CIO buying an AI capability from an incumbent SaaS vendor whose primary product was not designed around the capability, you are very likely buying an integration that will be rewritten before your deployment matures. This is not a moral judgment about the vendor. It is the predictable consequence of treating AI as something that can be shipped on top of an existing product surface.
Three signals suggest you are looking at an integration rather than an architecture. The first is that the AI capability lives in a sidebar, a chat panel, or a generate button — physically separate, in the UI, from the workflow the buyer is paying for. The second is that the capability cannot reach data the buyer expects it to reach, with the explanation "that's on our roadmap." The third is that the vendor cannot describe the data flow from corpus to model to output without wandering into general statements about "our AI platform." When the vendor's own engineers cannot describe the architecture, the architecture does not yet exist.
What survives, in the deployments we audit, is the architecture choice. The vendors that built AI-first products are eating the integration vendors' deployments — not because the AI is meaningfully better in any objective sense, but because the deployment compounds in a way the integration cannot.
What this means for your next vendor evaluation.
On the next vendor evaluation, separate the AI capability from the existing product surface. Ask the vendor to describe, in writing, the architecture that supports the AI capability — model, corpus, retrieval pipeline, workflow runtime, eval suite. If the architecture description is shorter than the product feature list, you are looking at an integration. If the vendor cannot produce one at all, you are looking at a sidebar.
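That evaluation rule is mechanical enough to write down. The sketch below is a hypothetical scoring function, assuming a crude proxy: the length of the vendor's written architecture description versus the length of its feature list. The field names mirror the five components named above; the thresholds and types are ours, not a standard.

```typescript
// Hypothetical classifier for the evaluation rule in the text.
// Field names mirror the components the vendor is asked to document.

interface ArchitectureDoc {
  model?: string;
  corpus?: string;
  retrievalPipeline?: string;
  workflowRuntime?: string;
  evalSuite?: string;
}

function classifyVendor(
  doc: ArchitectureDoc | null,
  featureListLength: number // length of the product feature list, in characters
): "architecture" | "integration" | "sidebar" {
  // No written description at all: a sidebar.
  if (doc === null) return "sidebar";

  const sections = [
    doc.model,
    doc.corpus,
    doc.retrievalPipeline,
    doc.workflowRuntime,
    doc.evalSuite,
  ].filter((s): s is string => Boolean(s));

  const docLength = sections.join("\n").length;

  // Description shorter than the feature list: an integration.
  return docLength < featureListLength ? "integration" : "architecture";
}
```

The proxy is deliberately blunt; the point is that the question has a falsifiable answer, and a vendor who cannot fill in those five fields has answered it.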
We wrote about the case against agents-as-features precisely because the integration-versus-architecture distinction is the most consequential one in enterprise AI procurement right now. The vendors that get it right will eat the deployments. The vendors that get it wrong are the ones whose AI integrations are now showing up in our tracker as scheduled rewrites.
Architecture beats integration. It is not a slogan. It is a structural property of the category. The incumbent vendors are relearning it, slowly, through integrations that survive a median of fourteen months before the rewrite. The buyers who acted on the lesson early are the ones whose deployments are now compounding without a re-platforming on the calendar.