
The 95%: Why Most AI Projects in Banks Fail at the Pilot Stage

2026/02/27 01:33
4 min read

Across every industry, AI has left its mark. Smaller, nimbler firms are leveraging it to compete at levels previously reserved for industry giants. Yet at larger firms, the story is strikingly different. 

Recent MIT research suggests that 95% of generative AI pilots fail to scale. So the question arises: when AI promises efficiency and speed, why can’t the majority of banking projects make it past the starting line? 

The answer shouldn’t be too surprising. Banks face a unique, complex mix of regulatory, legal and operational constraints, where the cost of a single failure is materially higher than in most other sectors. One error can trigger regulatory enforcement, litigation, remediation obligations and significant reputational harm. As a result, AI projects can become bogged down in extensive risk assessments, model governance reviews and compliance processes long before they approach deployment. 

Stuck between innovation and regulation 

Banks are under pressure to modernise and adopt AI-driven solutions at speed, while operating in one of the most tightly regulated environments. The introduction of any technological system must withstand rigorous scrutiny. 

The first barrier is data. Banks possess vast amounts of it, but much of it sits in legacy systems and inconsistent formats, and requires significant work before it can be used in AI models. At the same time, financial institutions are bound by strict rules around data accuracy, completeness and reliability in any decision-making. 

Feeding fractured or inconsistent data into AI models puts them at risk of breaching duties under consumer protection laws, anti-discrimination rules, anti-money laundering (AML) and fraud risk requirements, as well as record-keeping and audit standards. Tasks such as data cleansing, lineage tracking, metadata management and preparing data for model ingestion are not merely operational hygiene; they are legal safeguards. 
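To make the point concrete, a data-quality gate before model ingestion might look like the following minimal Python sketch. The field names and checks are illustrative assumptions, not a real bank schema; the key idea is that every record is checked for completeness and format, and every outcome is logged so the decision is auditable later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record format; field names are invented for the example.
REQUIRED_FIELDS = {"customer_id", "transaction_amount", "currency"}

@dataclass
class QualityReport:
    passed: list = field(default_factory=list)
    rejected: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)  # lineage: what was checked, when, why

def validate_for_ingestion(records):
    """Gate records before they reach a model: completeness and format
    checks, with every outcome logged so it is defensible later."""
    report = QualityReport()
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            reason = f"missing fields: {sorted(missing)}"
        elif not isinstance(rec["transaction_amount"], (int, float)):
            reason = "transaction_amount is not numeric"
        else:
            reason = None
        report.audit_log.append({
            "row": i,
            "checked_at": datetime.now(timezone.utc).isoformat(),
            "outcome": "pass" if reason is None else reason,
        })
        (report.passed if reason is None else report.rejected).append(rec)
    return report

report = validate_for_ingestion([
    {"customer_id": "c1", "transaction_amount": 120.0, "currency": "GBP"},
    {"customer_id": "c2", "transaction_amount": "12O.0", "currency": "GBP"},  # non-numeric
    {"customer_id": "c3", "currency": "EUR"},                                 # missing amount
])
```

Only the first record passes; the other two are rejected with logged reasons, which is exactly the kind of audit trail record-keeping standards demand.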

All in all, this slows implementation, as banks must ensure defensibility at every stage. 

The challenge of explainability 

The second barrier is explainability. Financial regulators require firms to understand and demonstrate how a model arrives at a particular outcome. This is not simply best practice; it is essential for meeting obligations under consumer credit rules, anti-bias safeguards, prudential modelling standards, and the broader legal principle that firms must treat customers fairly and avoid opaque decision-making. 

This creates tension, as AI systems may produce highly accurate outputs, but their decision-making logic is often opaque. That opacity translates directly into legal risk: the risk of enforcement action, consumer redress, litigation, or findings of unfair or discriminatory treatment. Many projects flounder when they encounter this hurdle. 
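One common pattern for keeping decisions explainable is a scorecard-style model that exposes each feature’s contribution as a “reason code” alongside the decision. The weights, features and threshold below are invented for illustration; the point is that every outcome carries a human-readable explanation a supervisor or customer could be given.

```python
# Illustrative scorecard: weights, features and threshold are invented.
WEIGHTS = {"debt_to_income": -3.0, "years_at_address": 0.5, "missed_payments": -2.0}
BIAS = 1.0
THRESHOLD = 0.0

def score_with_reasons(applicant):
    """Return a decision plus per-feature contributions, so the outcome
    can be explained rather than emerging from a black box."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = BIAS + sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "decline"
    # Reason codes: the features pushing hardest against the applicant.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return decision, total, reasons

decision, total, reasons = score_with_reasons(
    {"debt_to_income": 0.6, "years_at_address": 4, "missed_payments": 1})
# decision == "decline"; reasons name the two most negative contributors.
```

A highly accurate but opaque model could be paired with a surrogate like this for explanation, but the tension the article describes is precisely that the two can disagree.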

Outsourcing doesn’t remove accountability 

The final barrier is governance. Large banks operate across multiple jurisdictions, each with evolving and fragmented AI regulatory positions. This regulatory divergence creates uncertainty, leading some institutions to delay deployment until expectations become more harmonised or supervisory guidance becomes clearer. 

At the same time, banks rely on external vendors such as cloud providers, data aggregators and specialist AI firms to supply infrastructure or sophisticated models. However, outsourcing does not transfer accountability. 

Regulators require banks to maintain stringent oversight of third-party arrangements, including due diligence, contractual controls, audit and access rights, contingency planning, and ongoing monitoring. If an external system produces unlawful, discriminatory or erroneous outcomes, the bank remains fully accountable. 

As a result, institutions often cannot onboard AI vendors at the pace they would like, simply because the legal and governance requirements are so demanding. 

How can banks break the deadlock? 

Despite these challenges, AI adoption can still deliver the returns predicted, but only for institutions willing to take a different approach from the outset. 

  • Implement ensemble approaches with human oversight. As AI adoption accelerates, financial institutions must prioritise accuracy and precision alongside speed. The most effective institutions will be those that understand that AI alone is not enough. Ensemble AI models combined with careful model validation help strike this balance: the combination allows institutions to harness automation while maintaining accountability. 
  • Empower subject matter experts, not just central AI labs. Successful AI adoption in any sector requires subject matter experts to work alongside data scientists to maintain accuracy. This is especially true in industries where mistakes have materially larger consequences and accuracy must be exceptionally high. 
  • Invest in line managers. Firms need to invest in mentoring and training to ensure line managers who understand specific business processes and regulatory requirements drive deployment decisions, with central teams providing governance frameworks and technical support. 
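The first recommendation above can be sketched as a simple voting ensemble that escalates to a person whenever the models disagree. The threshold and the “human review” routing are illustrative design choices, not a standard:

```python
def ensemble_decide(votes, agreement_threshold=1.0):
    """Combine independent model votes; escalate to a human when they
    disagree, so automation never outruns accountability."""
    approvals = sum(1 for v in votes if v == "approve")
    agreement = max(approvals, len(votes) - approvals) / len(votes)
    if agreement < agreement_threshold:
        return "human_review"  # models disagree: a person makes the call
    return "approve" if approvals == len(votes) else "decline"

ensemble_decide(["approve", "approve", "approve"])  # unanimous -> "approve"
ensemble_decide(["approve", "decline", "approve"])  # split -> "human_review"
```

With the threshold at 1.0, any disagreement routes to review; lowering it trades human workload against automation, which is exactly the accuracy-versus-speed balance the bullet describes.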

Joining the 5% 

AI may represent the future of financial services. However, for banks, the journey to deployment is less about technological capability and more about navigating a complex matrix of legal obligations, supervisory expectations and cross-border regulatory uncertainty. Until those tensions ease or frameworks become clearer, many AI projects will remain stuck in pilot mode, waiting for the regulatory green light required to move ahead. 
