We're in the middle of conference season, and it’s a firehose of announcements: feature releases, partnership reveals, integration demos, and new claims around enterprise intelligence. Across nearly every keynote and product launch, two themes keep surfacing: trust and context.
Three months ago, I wrote a piece on data trust convergence, tracking how observability, quality, and governance were collapsing into a single category. AI agents moving into production have pushed the convergence beyond what I expected: context has emerged as its own category, and a second convergence is underway from the workflow and CRM side.
The original story
The February piece made a bottom-up case. Observability, quality, and governance converging because they share infrastructure (lineage, metadata, compute). Cloud platforms (Databricks, Snowflake) absorbing those capabilities at the data layer. Independents facing an "expand or specialize" choice.
That story is intact. Databricks Unity Catalog and Snowflake Horizon continue to deepen. Ataccama has pushed furthest toward “data trust” as a platform category, with Monte Carlo, Collibra, and Acceldata staking their own variants.
Data trust is becoming foundational infrastructure for enterprise AI systems: the assurance layer that establishes whether data is governed, explainable, observable, and reliable enough for autonomous consumption.
The story I missed
There's a convergence coming from above the data layer, from the workflow and CRM side, and I gave it too little weight in February. ServiceNow and Salesforce are assembling end-to-end stacks that go from raw data to AI agent action, with the trust layer in the middle.
ServiceNow announced its acquisition of data.world in May 2025 and folded it into the Workflow Data Fabric. Then, on February 12, they announced Pyramid Analytics, adding a semantic layer, decision intelligence, and BI capabilities. Salesforce closed its Informatica acquisition in November 2025, and combined with Tableau and Agentforce, they're running the same playbook. Both companies are publicly framing their strategy around grounding AI agents in trustworthy enterprise data.
They aren't trying to own where data lives. They're trying to own where decisions happen, and action follows. The trust layer is a stop on that path.
The context layer
The original piece tracked the foundational layer of trust: lineage, quality, observability, governance, and the operational pieces around them. Semantics has now moved down from BI (itself in flux as AI agents reshape analytics) and become infrastructure for AI agents. In February, semantics was a feature humans used to make sense of dashboards. Three months later, it's infrastructure AI agents query to make decisions.
Context has emerged as the positioning battleground, and vendors across the data stack are now describing themselves as a context layer. Context sits above the trust layer, operationalizing business meaning and semantics so AI agents can reason and act safely against enterprise information. The context layer is the new infrastructure tier between trusted data and agentic action.
Atlan is the most visible mover. They've repositioned from "modern data catalog" to "the context layer for AI," with a Context Lakehouse, a Context Engineering Studio, and Context Agents. Their argument: enterprise AI agents fail because they lack context, not because they lack model capability.
They're not alone. Google launched the Agentic Data Cloud at Cloud Next 2026 in April, anchored by Knowledge Catalog (the renamed Dataplex Universal Catalog) as the "universal context engine." Snowflake's agent context layer, built on Horizon Catalog, stakes its claim. Salesforce repositioned Tableau as an agentic analytics platform with a new Knowledge Engine and an MCP Server that exposes customer-built semantic models to LLMs. ServiceNow formalized its Workflow Data Fabric into a Context Engine at Knowledge 2026: a graph of graphs feeding every AI decision, with Autonomous Data Analytics on Pyramid and a Data Catalog on data.world underneath. Knowledge graph vendors, semantic layer vendors, vector database vendors, and metadata management vendors are all making the same move.
Some of this is real, some is rebranding, and the difference will become obvious. "Context" has become what "data trust" was in February: a new category position vendors are racing to claim before the language settles. The Open Semantic Interchange v0.1 spec (released January 2026) exists because vendors recognized that semantic interoperability is the precondition for AI agents working across platforms. The spread of MCP servers as a connection standard solves the related runtime problem of how agents reach those semantic models in the first place.
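To make the interoperability problem concrete, here is a minimal sketch of the pattern specs like OSI target: a metric defined once, serialized to a neutral format, and validated by a second platform before an agent is allowed to query it. Every field name here is illustrative, not the actual OSI v0.1 schema.

```python
import json

# Hypothetical metric definition. Field names are illustrative only,
# not drawn from the OSI v0.1 spec.
metric = {
    "name": "net_revenue",
    "description": "Gross revenue minus refunds, monthly grain",
    "expression": "SUM(gross_revenue) - SUM(refunds)",
    "grain": "month",
    "certified": True,
}

# The neutral payload that would travel between platforms.
payload = json.dumps(metric)

def accept_for_agents(raw: str) -> bool:
    """Consumer-side check: only certified, fully specified metrics
    get exposed to autonomous agents."""
    model = json.loads(raw)
    required = {"name", "expression", "grain", "certified"}
    return required <= model.keys() and model["certified"] is True

print(accept_for_agents(payload))  # True
```

The point of the sketch is the handshake, not the schema: without an agreed serialization and a consumer-side acceptance check, every platform pair needs a bespoke integration, which is exactly what an interchange spec exists to avoid.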
For practitioners, the practical question isn't "who has the best context layer." It's "what does my AI agent need to know to be trustworthy, and which of my existing investments already provides most of it?" Most enterprises will discover they have several partial context layers, and the hard part is operationalizing and connecting them, not simply buying another platform that claims to unify them.
The categories in motion
The February piece used a framework with five core capabilities: Quality/Observability, Lineage, Governance/Catalog, FinOps, and Orchestration. The argument was that those categories were converging.
Quality and observability had already merged when I wrote the original piece. Lineage and catalog have collapsed into context. Standalone lineage stopped being a buying category years ago; standalone catalog just joined it. Atlan led the move from "modern data catalog" to "context layer." Google positioned Dataplex Universal Catalog as the "universal context engine." Snowflake Horizon Catalog is part of the Cortex agent context layer. The catalog as a buying category is gone.
Governance is on the same path, but isn't all the way there. Compliance-driven governance (DLP, audit, regulatory reporting) is still its own discipline with its own buyers and budgets. AI-era governance (policy enforcement at agent query time, role-based context filtering, certified definitions for autonomous consumption) is converging into context. Collibra and Ataccama are both repositioning around it. Expect the bifurcation to play out over the next year.
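What "policy enforcement at agent query time" means in practice can be sketched in a few lines: before an agent sees a dataset's context, filter the available columns by the agent's role and require certified definitions. All names here (roles, tags, columns) are hypothetical, and real products implement this with far richer policy engines.

```python
# Hypothetical role-to-tag policies and column metadata. Illustrative only.
POLICIES = {
    "support_agent": {"allowed_tags": {"public", "internal"}},
    "finance_agent": {"allowed_tags": {"public", "internal", "financial"}},
}

COLUMNS = [
    {"name": "order_id", "tag": "public",    "certified": True},
    {"name": "amount",   "tag": "financial", "certified": True},
    {"name": "notes",    "tag": "internal",  "certified": False},
]

def context_for(role: str) -> list[str]:
    """Return only the columns this role's agent may reason over:
    tag must be allowed for the role, and the definition certified."""
    allowed = POLICIES[role]["allowed_tags"]
    return [c["name"] for c in COLUMNS
            if c["tag"] in allowed and c["certified"]]

print(context_for("support_agent"))  # ['order_id']
print(context_for("finance_agent"))  # ['order_id', 'amount']
```

Note that the filter runs at query time, not at pipeline build time: the same dataset yields different context depending on which agent is asking, which is the shift that separates AI-era governance from compliance reporting.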
Orchestration didn't disappear; it evolved. The category was always about coordinating work across data systems. Agentic orchestration is the same job with autonomy and decision-making added. Orchestration vendors are now agentic vendors, or partners to them. FinOps was never a category; it was a battleground vendors fought on briefly and moved past.
Trust and context are where vendors are competing for category ownership today. Agentic observability is where the next twelve months of differentiation will be fought, and it sits one level higher than agentic data quality or shipping an agent. It's the control plane that discovers, monitors, evaluates, and governs AI agents and their decisions across the enterprise. Three vendor groups are converging on it: data observability players moving up from the data layer (Monte Carlo, Bigeye), workflow platforms extending into the agent layer (ServiceNow's AI Control Tower), and IT and application observability vendors moving in from infrastructure (Datadog, Splunk, New Relic, Dynatrace, Grafana). None of them owns it yet.
The story today
The question is no longer "which trust vendor matches my platform?" It's closer to "which combination of trust, context, and agentic observability capabilities best serves my AI agents and the people consuming their outputs?" Most enterprises will end up running combinations across all three fronts, because each front has legitimate claims and none is fully sufficient on its own.
The original piece closed on lineage as the foundation of the trust layer. That logic still holds: you can't run a trust platform or a context layer without deep, accurate lineage. Context absorbed lineage, the same way trust absorbed data quality and observability. The differentiator is the depth of that lineage, and whether the answer is automated, governed, and explainable, especially now that the agents have come calling.

