IBM’s $11B acquisition of Confluent is the clearest signal yet that agentic AI will depend on the ability to harness real-time data.
In addition to IBM, other industry giants like Google and Salesforce have taken note, with major acquisitions in recent years that aim to better connect enterprise data and systems.
The direction is clear. Now, the key question for effective enterprise architecture design is how to plan and build to deliver on the promise of agentic AI. In my view, the enterprise is moving towards multi-agent orchestration at scale, and real-time data will be essential to drive real value.
Agentic AI promises autonomous systems that can respond and reason in real time. But in production environments, that promise quickly collapses if the system responds too late or lacks real-time context.
Consider a global financial services firm, where thousands of continuously changing market inputs must be considered and responded to the instant they occur. In this kind of environment, AI-driven processes cannot afford to poll source systems periodically for changes. A delay of minutes isn’t an inconvenience; it’s a risk. The system has to respond to what just changed, right now, not minutes from now.
This is where many agentic AI platforms fall short. Their request-response architectures were designed for a slower world, one where applications could operate in batch mode, periodically querying source systems for changes while burning through compute and LLM resources.
Responsive agentic systems operate differently. They need to respond to changes occurring across the enterprise – orders being placed, service delivery updates, customer sales activities – in real time, not minutes or hours after they happen.
An AI agent that has to poll a database to understand the current state isn’t real-time; it’s operating on hindsight. Responding in real time to business events is what gives agents true situational awareness. It provides the responsiveness and up-to-date context they need to act decisively, coordinate with other agents, and operate reliably.
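The difference between polling and event-driven awareness can be sketched in a few lines. This is a minimal, illustrative in-memory event bus, not any specific vendor's API; all names here are hypothetical.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Pushes each published event to every subscriber the moment it occurs."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Subscribers see the event now, not at the next polling interval.
        for handler in self._subscribers[topic]:
            handler(event)

# An agent reacts to the event that just happened, with up-to-date context.
seen: list[str] = []
bus = EventBus()
bus.subscribe("orders.placed", lambda e: seen.append(e["order_id"]))
bus.publish("orders.placed", {"order_id": "A-1001", "amount": 250.0})
print(seen)  # ['A-1001']
```

A polling agent would instead wake on a timer and query for changes, seeing "A-1001" only at its next scheduled check; here the subscriber is invoked the instant the event is published.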
To support this at enterprise scale, the underlying architecture must shift from static data integration to dynamic orchestration of specialized agents that operate in real time. Larger problems should be broken down into smaller tasks and dispatched, in real time, to the AI agents with the right skills. Asynchronous communication among agents, enterprise applications, and data sources, paired with lean context that doesn’t overwhelm LLMs into hallucination, is the only way to achieve the scalability, reliability, and accuracy that high-performing enterprises require.
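The decompose-and-dispatch pattern described above can be sketched with standard asynchronous primitives. This is a simplified illustration, not a production orchestrator; the agent names and skills are invented for the example.

```python
import asyncio

# Two hypothetical specialized agents, each handling one kind of task.
async def pricing_agent(task: str) -> str:
    return f"pricing done: {task}"

async def inventory_agent(task: str) -> str:
    return f"inventory done: {task}"

AGENTS = {"pricing": pricing_agent, "inventory": inventory_agent}

async def orchestrate(tasks: list[tuple[str, str]]) -> list[str]:
    # Dispatch each task to the agent with the matching skill, concurrently:
    # no agent blocks waiting on another.
    return await asyncio.gather(*(AGENTS[skill](task) for skill, task in tasks))

results = asyncio.run(orchestrate([
    ("pricing", "quote order A-1001"),
    ("inventory", "reserve 3 units of SKU-42"),
]))
print(results)
```

The key design point is that the orchestrator routes small, well-scoped tasks rather than handing one LLM the entire problem and all of its context at once.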
The market is rapidly maturing to support this movement. We are seeing major cloud providers acknowledge this necessity by creating dedicated spaces for these technologies. For example, AWS Marketplace recently introduced a new AI Agents and Tools category to serve as a centralized catalog for these solutions.
This maturation of the ecosystem is critical. It simplifies the discovery and procurement process, allowing enterprises to focus on innovation rather than vendor negotiations. Solutions like our newly launched Solace Agent Mesh, now available in this new AWS category, are examples of how the industry is trying to bridge the gap, providing the framework needed to govern and orchestrate agents without rebuilding the entire stack.
The IBM–Confluent deal confirms what many enterprise architects already understand: real-time data is no longer optional. It is the non-negotiable foundation for enterprise AI at scale.
Effective agentic systems cannot reason, plan, or act in isolation from the present moment. They must respond in real time as business events happen. Without real-time responsiveness, AI is confined to hindsight.
The “Agentic Age” has arrived. And it will be defined not by models alone, but by the intelligence of those models being applied in real time.


