Even as overall global AI spending is forecast to top $2 trillion in 2026, talk of an AI bubble bursting abounds. One thing remains clear amid the uncertainty: organisations are under mounting pressure to crack the code on AI investments and demonstrate ROI in order to justify the ambitious budgets set aside for the technology.
The rapid development of the technology means that businesses, customers, and regulators are all playing catch-up when it comes to AI. In fact, a recent IDC survey found that AI readiness within the next two years is a top priority for 82 percent of organisations with mature IT environments. This sense of urgency is driven by the realisation that being “AI-capable” is no longer a competitive edge, but a baseline requirement for survival in a digital-first economy. However, readiness requires a wholesale cultural and technical shift. It is no longer enough to simply deploy a model. Instead, enterprises must ensure their entire operational fabric is resilient enough to support continuous technological change.
Nothing demonstrates the pace of this evolution better than the speed at which agentic AI has moved from a distant possibility to a viable option, with use cases already in place. Unlike traditional chatbots that simply respond to prompts, agentic AI possesses the ability to reason, plan, and execute multi-step tasks autonomously. By operating with a degree of independence, these agents can handle complex workflows, from supply chain logistics to customer service resolution, without requiring constant human oversight. While early challenges remain around accuracy, data security, and effective tool use, enterprises are moving full steam ahead to resolve them.
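To make the pattern concrete, the sketch below shows the basic agentic loop: the model decides on an action, a tool executes it, and the result feeds the next decision until the task is done or a step limit is reached. The model call and both tools are hypothetical placeholders rather than any particular vendor's API.

```python
import json
from typing import Callable

def check_inventory(sku: str) -> str:
    """Hypothetical tool: look up stock for a product."""
    return json.dumps({"sku": sku, "units_in_stock": 42})

def create_purchase_order(sku: str, quantity: int) -> str:
    """Hypothetical tool: raise a purchase order in an ERP system."""
    return json.dumps({"sku": sku, "quantity": quantity, "status": "created"})

TOOLS: dict[str, Callable[..., str]] = {
    "check_inventory": check_inventory,
    "create_purchase_order": create_purchase_order,
}

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for a chat-completion call. A real agent would send the
    conversation to a model; canned decisions here keep the sketch runnable."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "check_inventory", "args": {"sku": "SKU-123"}}
    return {"answer": "SKU-123 has 42 units in stock; no reorder is needed."}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):                     # bound autonomy with a step limit
        decision = call_llm(messages)
        if "answer" in decision:                   # the model judges the task complete
            return decision["answer"]
        observation = TOOLS[decision["tool"]](**decision["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": observation})  # feed the result back
    return "Stopped: step limit reached, escalating to a human."

print(run_agent("Do we need to reorder SKU-123?"))
```

The step limit and the human escalation path are the important design choices here: they keep an otherwise autonomous loop bounded and auditable.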
Gartner predicts that 40 percent of enterprise apps will feature task-specific AI agents by 2026. As adoption becomes the norm, the focus will shift to differentiation: the question will no longer be whether a company uses AI agents, but how well it integrates agentic capabilities into production environments.
Fragmented systems and poor data visibility are among the leading causes of failed AI initiatives. When data is trapped in silos or buried under layers of incompatible legacy infrastructure, the high-quality inputs required for machine learning models become nearly impossible to extract efficiently. Without a clear view of the data landscape, organisations risk training their models on redundant or conflicting information, which inevitably leads to skewed results and poor decision-making.
To get AI projects off the ground, organisations must rethink their data architectures with the technology in mind. To keep pace with AI-driven demands, enterprises will be looking at how they can reduce the number of vendors and consolidate data platforms into more unified, cohesive ecosystems.
This consolidation can seem like merely a cost-saving measure, but in reality it is a structural necessity. Reducing the complexity of the tech stack allows for faster data processing and more reliable governance. AI-enabled tools will help streamline architectures, automating discovery and eliminating redundant systems, which in turn will minimise the “moving parts” in enterprise data environments. The goal is a “frictionless” data flow where information moves seamlessly from ingestion to insight, unencumbered by the manual hand-offs that traditionally slow down IT modernisation. A simplified architecture acts as a force multiplier, allowing small teams to manage vast amounts of data with the precision required for AI applications.
Effective AI models must have access to valid, clean data made contextually available through retrieval-augmented generation (RAG) and related reasoning techniques. RAG, in particular, has emerged as a critical bridge, allowing LLMs to draw on private, real-time company data without the need for constant, expensive retraining.
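A minimal illustration of that bridge is sketched below, assuming a toy document store and a deliberately simple keyword-overlap ranking in place of a real vector index: relevant company data is retrieved at query time and placed in the prompt, so the model stays grounded without retraining.

```python
# Minimal RAG sketch: retrieve internal documents at query time and put them
# in the model's context. The documents and the scoring are illustrative only.
DOCUMENTS = [
    "Q3 refund policy: enterprise customers may cancel within 30 days.",
    "Warehouse B handles all EU shipments as of January.",
    "Support SLA: priority-1 tickets receive a response within one hour.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model in retrieved company data instead of retraining it."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the refund policy for enterprise customers?"))
```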
However, this also presents new challenges for data protection. When an AI model can pull from a wide range of internal sources, the risk of unauthorised data exposure rises significantly. It is now more important than ever that data is consistent, access is secure and automated, and requirements for data privacy, security, and compliance are addressed at every step of the way.
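One common way to enforce that discipline, sketched below with purely illustrative roles and labels, is to attach access controls to each document and filter retrieval results against the requesting user's entitlements before anything reaches the model's context.

```python
# Permission-aware retrieval sketch: the access check happens at retrieval
# time, before prompt assembly, so out-of-scope data never reaches the model.
# Roles and labels are illustrative, not drawn from any specific framework.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: frozenset[str]   # who may see this content

CORPUS = [
    Document("FY25 salary bands by level.", frozenset({"hr"})),
    Document("Public product roadmap for 2026.", frozenset({"hr", "support", "sales"})),
    Document("Customer escalation playbook.", frozenset({"support"})),
]

def retrieve_for_user(query: str, user_roles: set[str]) -> list[str]:
    """Filter by entitlements first; rank the survivors afterwards."""
    visible = [d for d in CORPUS if user_roles & d.allowed_roles]
    # A real system would rank `visible` by relevance to `query`; the key point
    # is that the access filter runs before ranking and prompt construction.
    return [d.text for d in visible]

print(retrieve_for_user("what is on the roadmap?", {"sales"}))
```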
As organisations bridge unstructured operational data with analytics and AI initiatives, governance and integration requirements will take centre stage. Businesses will need clear frameworks to ensure secure, compliant, and high-quality data pipelines that support automation and decision-making. Without these frameworks, AI becomes a liability, potentially exposing sensitive intellectual property or hallucinating based on outdated information.
Data sovereignty is also reshaping architecture decisions. Increasingly complex geo-compliance and regional data sovereignty regulations will push enterprises to rethink how and where their data is stored, processed, and analysed, fuelling demand for flexible, hybrid cloud architectures that balance performance and compliance.
Ultimately, the path to AI readiness, and then to implementation, will be paved with cleaner foundations and sharper governance. As the novelty of agentic AI gives way to the practical demands of automated workflows, the underlying IT infrastructure and data governance strategy will be among the key determinants of long-term success. Organisations that continue to struggle with fragmented systems will find their AI ambitions stalled by the very complexity they failed to address.
Global and regional compliance frameworks will continue to evolve, further shaping data architecture decisions. As regulations become more granular and their data protection demands more stringent, getting data governance right will become the foundation on which all successful AI initiatives stand.


