AI-Era Design Principles, Part 2 of 4
This is the second in a four-part series exploring the AI-era design principles identified by McKinsey in their landmark report, “Building Next-Horizon AI Experiences.” Each article examines one principle through the lens of what it takes to build AI that does more than answer questions. At Datafi, we believe AI must be enabled to solve problems, and that distinction changes everything about how you design for it.
True continuity in enterprise AI is a data problem first. Without a governed contextual layer that carries full operational context across every interaction, AI remains a fast lookup tool rather than a genuine business partner that builds understanding over time.
The Amnesia Problem at the Heart of Enterprise AI
Imagine hiring a brilliant analyst. Every morning they show up for work with no memory of yesterday. They don’t recall the strategic context you discussed last week, the data sources you explained last month, or the decisions your team has been wrestling with for the last quarter. Every conversation starts from zero. Every task requires full re-explanation. Every output lands without the weight of accumulated understanding.
That is, essentially, how most enterprise AI tools work today.
McKinsey’s research on next-horizon AI experiences identifies “Design for Continuity” as one of four foundational principles required to build AI that people will actually trust and use. The principle is straightforward: sustain context and memory across interactions to create coherent, personalized, and seamless experiences over time.
It sounds obvious. But the gap between the principle and current reality is enormous, and the cost of that gap is measured not just in user frustration but in the failure of AI to generate meaningful, measurable outcomes across organizations of any size.
At Datafi, this is not an abstract design consideration. It is the core architectural challenge we solve for our customers every day.
Why Continuity Is a Data Problem First

Most discussions about AI continuity focus on the user interface: does the chat session remember what was said earlier? Can the assistant recall that you prefer certain output formats? Those things matter, but they are surface expressions of a much deeper requirement.
True continuity in an enterprise AI context is a data problem first, and a design problem second.
McKinsey’s framework identifies “context gaps” as one of the critical breakdowns preventing AI from becoming a trusted partner. Systems are not designed to identify, request, or retrieve the information required to perform a task thoroughly and accurately. Users trust the system to “know what it needs,” but the AI often proceeds with only a partial understanding of the context.
For non-technical business users, this breakdown is particularly acute. A sales operations manager asking why pipeline coverage dropped last quarter does not want to write a SQL query. They do not want to explain what CRM system the company uses, what the fiscal calendar looks like, or how their organization defines “qualified opportunity.” They expect the AI to already know, the same way a seasoned internal analyst would.
That expectation is not unreasonable. It is, in fact, the bar that AI must clear to earn a seat at the table in critical business workflows.
Meeting that bar requires AI to have access not just to data, but to the full operational context of the business: the data ecosystem, the business definitions, the governance policies, the organizational structure, the decision-making history. Without that foundation, continuity is cosmetic. With it, continuity becomes transformative.
The Contextual Layer: Building the Memory That Matters
At Datafi, we use the term “contextual layer” to describe the foundational intelligence that enables AI to function as a true business partner rather than a sophisticated search interface.
The contextual layer is not a single feature. It is an architectural commitment. It means that when an employee asks a question or initiates a workflow, the AI does not merely retrieve data; it retrieves data within the full context of what that employee is trying to accomplish, who they are in the organization, what decisions are currently in flight, what constraints govern how data can be used, and what has already been learned from prior interactions.
This is what distinguishes AI that answers questions from AI that solves problems.
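The idea of a contextual layer can be made concrete with a small sketch. This is an illustrative toy, not Datafi's implementation: the class names, fields, and five-interaction memory window are all assumptions chosen for clarity. The point is that every query is enriched with role, governance constraints, in-flight decisions, and prior exchanges before the model ever sees it.

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    """Everything the AI should know before touching the data."""
    role: str                                             # who the user is in the organization
    active_decisions: list = field(default_factory=list)  # decisions currently in flight
    policies: list = field(default_factory=list)          # governance constraints on data use
    history: list = field(default_factory=list)           # prior (question, answer) exchanges

@dataclass
class ContextualLayer:
    """Wraps every raw question in governed context instead of
    passing it straight to the model."""
    contexts: dict  # user_id -> UserContext

    def enrich(self, user_id: str, question: str) -> dict:
        # The model receives the question plus accumulated context.
        ctx = self.contexts[user_id]
        return {
            "question": question,
            "role": ctx.role,
            "constraints": ctx.policies,
            "open_decisions": ctx.active_decisions,
            "prior_interactions": ctx.history[-5:],  # recent memory window (arbitrary size)
        }

    def remember(self, user_id: str, question: str, answer: str) -> None:
        # Each exchange becomes context for the next one.
        self.contexts[user_id].history.append((question, answer))
```

In this sketch, continuity is simply the fact that `remember` feeds `enrich`: the second question a user asks is answered with the first exchange already in hand.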
Consider a category manager at a retail company working on a promotional planning cycle. In a world without continuity, she asks the AI a question, gets an answer, reformulates the question, gets another answer, and eventually assembles a picture of what she needs over a series of disconnected exchanges. The burden of context management falls entirely on her. The AI is a fast lookup tool, not a thinking partner.
In a world designed for continuity, the AI knows her role, her current planning horizon, the promotional calendar, the historical performance of similar campaigns, the cost structure of each promotion type, and the margin constraints set by finance. It carries that context across every interaction. When she asks about promotional lift for a specific product category, the response is not generic. It is grounded in accumulated understanding and institutional memory, calibrated to her specific decision environment.
McKinsey illustrates this with a powerful example from marketing: when the continuity principle was applied to a campaign analytics workflow, the AI tool did not simply summarize new survey data, but automatically connected it to prior rounds of insight, highlighting what was working, what was not, and what should change, delivering holistic recommendations grounded in cumulative learning rather than single-point inputs.
That cumulative learning is what continuity actually looks like in practice. And it cannot happen without the right data architecture underneath it.
Unified Data Experience: The Enterprise Imperative
One of the most persistent barriers to continuity in enterprise AI is fragmentation. Data lives in dozens of systems. Definitions vary across departments. Access policies differ by role, region, and regulation. The average knowledge worker switches between multiple tools to complete a single workflow, and each tool operates in its own context-free bubble.
This is the environment Datafi was built to change.
A vertically integrated data and AI technology stack is not just a technical convenience. It is a prerequisite for continuity at enterprise scale. When the AI layer has governed, policy-aware access to the full data ecosystem, it can carry the right context across every interaction, for every user, within the compliance guardrails that enterprise operations require.
This matters for organizations of every size. A regional logistics company and a global pharmaceutical manufacturer face the same fundamental problem: their people need to make faster, better decisions using operational data that is scattered, siloed, and inconsistently defined. The difference is the scale of complexity, not the nature of the need.
What unified data experience enables, when paired with an AI layer designed for continuity, is the ability for any employee to work with data in their natural language, within the context of their specific role and workflow, without needing technical intermediaries. The frontline manager, the regional analyst, the operations lead: each one gets AI that knows their context, remembers their priorities, and builds understanding over time.
That is not a nice-to-have. It is the foundation of a competitive AI strategy.
Agents, Workflows, and the Continuity Advantage

The stakes for continuity rise significantly when AI moves from answering questions to executing workflows. Agentic AI, meaning systems that do not merely respond but plan, act, and iterate across multi-step processes, is only as effective as the context it carries.
A workflow agent that forgets its prior steps, loses track of its constraints, or fails to connect its current task to the broader operational objective is not just inefficient. It is dangerous. In environments where AI is being asked to take on critical thinking and decision-support roles, context collapse is a failure mode with real consequences.
This is why we see our customers increasingly wanting to use AI in more complex, autonomous roles: not to automate simple lookups, but to participate meaningfully in analytical workflows, planning cycles, exception management, and operational decision-making. The precondition for those roles is continuity, not just in a single session, but across interactions, across time, and across the full scope of the business context the agent needs to function effectively.
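What it means for an agent to carry context across steps can be sketched in a few lines. This is a hypothetical illustration, not a real agent framework: the class, its fields, and the plan format are assumptions. The essential behavior is that the agent resumes from its completed steps rather than starting from zero, and every action it records is stamped with the objective and constraints it was taken under.

```python
from typing import Optional

class WorkflowAgent:
    """Toy agent that carries objective, constraints, and completed
    steps across a multi-step workflow instead of treating each
    action as an isolated, context-free request."""

    def __init__(self, objective: str, constraints: list):
        self.objective = objective
        self.constraints = list(constraints)
        self.completed: list = []  # steps already executed

    def next_step(self, plan: list) -> Optional[str]:
        # The agent picks up where it left off, not from zero.
        for step in plan:
            if step not in self.completed:
                return step
        return None  # workflow finished

    def execute(self, step: str) -> dict:
        # Every action is recorded with the context it was taken under,
        # so later steps (and auditors) can see the full trail.
        record = {
            "step": step,
            "objective": self.objective,
            "constraints": self.constraints,
        }
        self.completed.append(step)
        return record
```

An agent without the `completed` list is the dangerous one described above: it loses track of where it is in the plan and can repeat or skip steps with real consequences.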
McKinsey describes the future of work as depending on how effectively people and AI systems share responsibility: the goal is not for people to correct the system after the fact, but to design human-AI interactions that simplify, reimagine, and refine the work itself, in a way that improves with every interaction to drive real outcomes.
For that improvement loop to function, the AI must remember. It must build knowledge across interactions the same way an experienced employee does. It must develop not just data access but situational intelligence, the kind that comes from sustained engagement with a specific operational environment.
LLMs that are given full context of the business, access to the complete data ecosystem, and the ability to function in genuinely autonomous roles will develop this situational intelligence. Those operating in isolated, context-free sessions will not, regardless of how powerful the underlying model is.
Governed Continuity: The Compliance Dimension
Continuity without governance is not enterprise-ready AI. It is a liability.
As AI systems accumulate more context and operate with greater autonomy, the question of what information they can access, how they can use it, and who can audit those decisions becomes critically important. This is especially true in regulated industries, but the governance imperative applies to any organization managing sensitive operational or customer data.
Datafi’s approach to continuity is inseparable from its approach to governance. The same architecture that enables AI to carry rich context across interactions also enforces the data access policies, role-based permissions, and compliance controls that responsible enterprise AI requires. The contextual layer is not just a memory system; it is a governed memory system.
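A governed memory system can be sketched as a store that refuses to recall anything without a policy check, and that logs every access attempt for audit. Again, this is a simplified illustration under assumed names and a deliberately crude topic-to-roles policy model, not a description of Datafi's architecture.

```python
class GovernedMemory:
    """Memory store that checks a role-based policy before returning
    any remembered context, and logs every access for audit."""

    def __init__(self, policy: dict):
        self.policy = policy      # topic -> set of roles allowed to recall it
        self.store: dict = {}     # topic -> remembered facts
        self.audit_log: list = []

    def remember(self, topic: str, fact: str) -> None:
        self.store.setdefault(topic, []).append(fact)

    def recall(self, topic: str, role: str):
        allowed = role in self.policy.get(topic, set())
        # Every access attempt is auditable, whether or not it succeeds.
        self.audit_log.append((role, topic, allowed))
        if not allowed:
            return None  # continuity stops at the policy boundary
        return self.store.get(topic, [])
```

The design point is that governance and memory share one code path: there is no way to retrieve accumulated context that bypasses the policy check or the audit trail.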
This design choice reflects a fundamental belief: for AI to earn trust inside an organization, it must be transparent about what it knows, clear about how it uses what it knows, and accountable for the decisions it supports. Continuity that operates outside governance frameworks creates shadow AI. Continuity that operates within them creates institutional advantage.
The practical implication for enterprise leaders is this: the AI that will scale across your organization is not the one with the most impressive demo. It is the one that can carry context intelligently, within the boundaries your organization has defined, while remaining auditable and explainable to the people responsible for outcomes.
From Conversation to Comprehension
There is a meaningful difference between an AI that can hold a conversation and one that can build comprehension. Conversational AI, however fluid and capable, is still fundamentally reactive. It responds to what is asked. Comprehension-oriented AI anticipates what is needed, connects new inputs to existing knowledge, and grows more valuable the longer it is engaged with a specific operational environment.
This is the transformation that Design for Continuity makes possible, and it is the transformation that Datafi is building toward with every customer engagement.
The shift requires accepting that AI capability is not fixed at deployment. The real value of an AI system emerges over time, as it accumulates context, refines its understanding of the business, and develops the kind of institutional memory that enables it to participate meaningfully in complex decisions rather than just field one-off queries.
Organizations that break through will not be the ones that chase better models. They will be those that fundamentally rethink the way work happens, designing experiences that people trust, rely on, and choose to use.
Continuity is what makes that trust possible. It is what separates AI that employees abandon after a few sessions from AI that becomes genuinely indispensable to how work gets done.
What This Means for Your AI Strategy
If you are evaluating AI tools or building an AI program for your organization, continuity is a first-order design requirement. Ask not just what the AI can do in a single interaction, but what it can build across many. Ask whether the system has access to your full data ecosystem or only to a narrow slice. Ask whether the context it carries is governed or free-floating. Ask whether it becomes more useful over time or simply faster at doing the same limited things.
At Datafi, these questions are the starting point, not the finish line. We believe the organizations that will realize the most transformative outcomes from AI are those that invest in the contextual layer, building AI that knows the business the way a great employee does, and that uses that knowledge to do more than answer questions.
The next article in this series explores the third McKinsey design principle: Build for Depth, enabling AI to automate entire workflows rather than simply responding to individual prompts. It is in depth that the real productivity gains live, and it is in depth that the contextual layer we build for continuity pays its biggest dividends.
Datafi is an applied AI company building the vertically integrated data and AI technology stack that enables every employee to work with data in natural language, within governed, compliance-ready environments. Learn more about how Datafi helps organizations move from AI experimentation to AI transformation.