Anchal Desai is VP Editorial Ops at a legacy media holding company. The company runs three news verticals, two B2B publications, and a custom-content business that produces work for enterprise clients. Editorial Ops at a company that size means coordinating production workflows across roughly four hundred editorial staff, six legacy CMSes, and a quarterly board that has spent the last eighteen months asking some version of "so what is the AI doing for us." Anchal had been answering with a productivity-uplift number that the CFO had stopped finding satisfying. She rebuilt the answer around replaced line items. The CFO stopped asking. We sat down with her for twenty-eight minutes.
On the productivity-uplift conversation.
Knyte: Walk me back to the version of the QBR that was not working.
Anchal Desai: It was a slide that said "AI productivity uplift across editorial: thirty-one percent." The methodology was a survey. We had asked editors how much time they were saving on three categories of task, multiplied that time by their hourly rate, and divided the result by the AI tooling cost. The number was real, in the sense that the survey responses produced it. It was not satisfying because the CFO would ask "so where is the thirty-one percent in the operating budget," and I would not have an answer.
Knyte: The savings did not show up.
Anchal Desai: The savings could not have shown up. We had not reduced headcount. We had not reduced the number of contractors we hired. We had not consolidated tooling. The thirty-one percent was a productivity number, and I was asking the CFO to believe it without showing it on a line. After about three QBRs of that, he asked me to stop bringing it.
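The methodology Desai describes reduces to a few lines of arithmetic. A minimal sketch in Python, with illustrative figures standing in for her survey data, rates, and tooling costs, none of which appear in the interview:

```python
# Hypothetical reconstruction of the survey methodology described above.
# Every figure here is illustrative; none come from the interview.

HOURLY_RATE = 65.0               # assumed blended editorial rate, USD/hour
MONTHLY_TOOLING_COST = 40_000.0  # assumed AI tooling spend, USD/month

# Self-reported hours saved per month, by task category (survey responses).
survey_hours_saved = {
    "copy editing": 310,
    "translation prep": 240,
    "summarization": 180,
}

value_of_time_saved = sum(survey_hours_saved.values()) * HOURLY_RATE
uplift_ratio = value_of_time_saved / MONTHLY_TOOLING_COST

print(f"Claimed value of time saved: ${value_of_time_saved:,.0f}/month")
print(f"Ratio against tooling cost: {uplift_ratio:.2f}x")
# The CFO's objection: nothing in this calculation corresponds to a line
# in the operating budget, so the output cannot be verified against one.
```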
On the rebuild.
Knyte: What did you replace it with.
Anchal Desai: I replaced it with a table. The table has four columns: workflow, prior cost, current cost, delta. Every row is a specific workflow. The prior cost is the line item or items in the operating budget that the workflow was previously consuming — contractor hours, third-party translation fees, freelance copy editing, that kind of thing. The current cost is what those same line items cost today. The delta is the difference. There is no productivity number anywhere on the slide.
Knyte: And the CFO finds this useful.
Anchal Desai: The CFO loves this. The first time I showed it, he looked at the slide for about a minute and then said "this is the first AI slide I have ever been able to verify." That was the moment. He pulled up the operating budget on his laptop, found two of the line items, and confirmed they had moved by approximately the amount the table was claiming. He has not asked about productivity since.
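The table Desai describes is simple enough to sketch. A minimal version in Python, with hypothetical workflows and dollar figures, since the interview does not disclose actual amounts:

```python
# Minimal sketch of the four-column table: workflow, prior cost, current
# cost, delta. Workflow names and dollar figures are hypothetical.

rows = [
    # (workflow, prior annual cost, current annual cost), both drawn from
    # tagged operating-budget line items
    ("Third-party translation", 420_000, 150_000),
    ("Freelance copy editing", 260_000, 190_000),
    ("Custom-content research support", 180_000, 205_000),  # not yet broken even
]

print(f"{'Workflow':<34}{'Prior':>12}{'Current':>12}{'Delta':>12}")
for workflow, prior, current in rows:
    delta = prior - current
    print(f"{workflow:<34}{prior:>12,}{current:>12,}{delta:>12,}")
```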
On building the table.
Knyte: How did you build it.
Anchal Desai: I had to do the work that the productivity-uplift methodology had let me skip. I sat down with the head of finance and her team for two days. We pulled the operating budget for the prior twelve months and tagged every line item that touched the workflows my AI tooling was supposed to be supporting. Some of the tagging was easy — translation fees were a single budget line. Some of it was hard — copy-editing labor was buried inside contractor categories that did not break out by workflow. We made educated guesses where we had to and flagged them on the slide as estimates rather than facts.
Knyte: How long did the tagging take.
Anchal Desai: Two days for the first pass. About two weeks of cleanup as edge cases surfaced. I would call it a one-month project, with the second half being the conversation with the heads of each business unit about whether the tagging was fair. Some of those conversations were uncomfortable because the tagging implied that the AI deployment had taken work away from people who were sitting in the meeting. I think those conversations are why most AI ROI presentations stop at productivity uplift. The honest version forces a different kind of conversation.
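The tagging exercise she describes can also be sketched: each budget line is allocated to the workflows it funds, and any allocation that rests on an educated guess carries an estimate flag through to the table. The line names, splits, and amounts below are hypothetical:

```python
# Sketch of the tagging exercise: each operating-budget line is allocated to
# the workflow(s) it funds, and any split that was an educated guess rather
# than a clean budget line carries an estimate flag. All names, splits, and
# amounts are hypothetical.

from collections import defaultdict

budget_lines = [
    # (budget line, annual cost, share allocated to each workflow, is_estimate)
    ("Translation fees", 420_000, {"translation": 1.0}, False),
    ("Contractor pool A", 510_000, {"copy editing": 0.4, "research": 0.2}, True),
    ("Contractor pool B", 330_000, {"copy editing": 0.3}, True),
]

prior_cost = defaultdict(float)
is_estimate = defaultdict(bool)
for line, cost, allocation, estimated in budget_lines:
    for workflow, share in allocation.items():
        prior_cost[workflow] += cost * share
        is_estimate[workflow] = is_estimate[workflow] or estimated

for workflow in sorted(prior_cost):
    flag = " (estimate)" if is_estimate[workflow] else ""
    print(f"{workflow}: ${prior_cost[workflow]:,.0f}{flag}")
```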
On what changed in the rest of the org.
Knyte: Did the table change anything beyond the CFO's reaction.
Anchal Desai: Yes, in two ways. First, it changed how we plan AI deployments. Every new workflow we propose now has to be tagged to specific budget lines that the deployment is going to affect. We are not allowed to propose a workflow whose value is "productivity uplift in editorial." We have to say "this workflow will reduce contractor spend on category X by approximately Y over twelve months, with these editorial-quality safeguards." The discipline forces us to be specific before the deployment, not after.
Anchal Desai: Second, it changed the procurement conversation. We had been buying tools on the basis of feature lists and demos. We are now buying on the basis of which line items the tool can credibly affect and over what timeline. Several vendors fell out of contention immediately because they could not answer the question. The vendors who could — which included Knyte, the architecture provider we ended up consolidating onto — were the ones who had been thinking about the problem in this frame already.
On the methodology being defensible.
Knyte: What do you do when the table shows a delta that is smaller than the AI tooling cost.
Anchal Desai: I show it. The whole point of the methodology is that it has to be honest. Two of the workflows we deployed in the first wave have not yet broken even. The slide says so. The CFO and I have a conversation about why — usually because the workflow is too shallow, or because we have not yet decommissioned the legacy tooling that the new workflow was supposed to replace. The conversation produces a remediation plan and a deadline. If a workflow does not break even within nine months, it gets killed.
Knyte: Have you killed any.
Anchal Desai: One. A summarization workflow we built for the leadership team. The workflow worked technically but the leadership team did not change their behavior — they kept reading the full source documents anyway, and the summarization output was decoration. We killed it after seven months. The cost line went away. The honesty made the kill easy. If I had been defending it on productivity-uplift grounds, the conversation would have been impossible.
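The break-even discipline Desai describes, the table's delta weighed against the tooling cost with a nine-month deadline, can be expressed as a small check. A sketch with hypothetical workflows and figures; the labels follow the rule as she states it, not any actual report:

```python
# Sketch of the break-even check: net of tooling cost, with a nine-month
# deadline before a workflow becomes a kill candidate. Figures hypothetical.

from dataclasses import dataclass

@dataclass
class WorkflowReport:
    name: str
    months_live: int
    monthly_delta: float         # monthly reduction in tagged budget lines
    monthly_tooling_cost: float  # AI tooling cost attributed to the workflow

    def net(self) -> float:
        return (self.monthly_delta - self.monthly_tooling_cost) * self.months_live

    def status(self) -> str:
        if self.net() >= 0:
            return "breaking even"
        return "kill candidate" if self.months_live >= 9 else "remediation plan"

reports = [
    WorkflowReport("Translation", 12, 22_500, 6_000),
    WorkflowReport("Research summaries", 10, 1_000, 2_500),
    WorkflowReport("Copy editing", 5, 4_000, 5_500),
]

for r in reports:
    print(f"{r.name}: net {r.net():,.0f} over {r.months_live} months -> {r.status()}")
```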
On what she would do differently.
Knyte: If you were starting over.
Anchal Desai: I would have started with the table on day one. The eighteen months I spent presenting productivity-uplift slides was eighteen months of watching the CFO get more skeptical, which made the AI program harder to defend internally even when it was working. If I had started with the line-item frame, the first six months might have looked worse on paper — we had not yet decommissioned anything — but the conversation would have been productive instead of defensive. I would not have lost the trust I had to rebuild.
Anchal Desai: I would also have made the finance team my partners earlier. The tagging exercise was uncomfortable because it was the first time the finance team had been brought into the AI program. They felt like they were being asked to validate something they had not been consulted on. If I had brought them in at the procurement stage of the first AI tool, the relationship would have been collaborative from the start. The eighteen-month delay was a self-inflicted cost.
On what is next.
Knyte: What does the next year look like.
Anchal Desai: We are extending the table to cover three more workflow categories that we have not yet measured this way. Each one requires the same one-month tagging exercise with the relevant business unit. The board has asked for the table to be the standing AI report at every QBR going forward. The productivity-uplift slide is gone permanently. I do not expect to bring it back.
Anchal Desai: And we are doing more architecture-style consolidation. The first wave proved the model. The next wave is going to consolidate four more legacy tools into the same architecture. The line-item table will tell us, by month nine, whether the consolidation worked. If it did, we keep going. If it did not, we kill what is not paying back. That is the discipline. The CFO loves it because it is finally a discipline he can verify.