As organisations implement AI at breakneck speed to stay competitive, adoption continues to outpace oversight to a dangerous degree. Incidents like the UK Department for Work and Pensions’ algorithm wrongly flagging 200,000 people for fraud, or the ICO finding that some AI recruitment tools unfairly filtered candidates with certain protected characteristics, show how quickly ‘black box’ systems can cause harm.
Regulation is now starting to catch up. The recently passed Data (Use and Access) Act (DUAA) introduces changes to the regulation of automated decision-making, particularly in the context of data protection and privacy. Promising more control and accountability, the new Act sends a dual message to UK organisations: AI can’t be a black box, and data protection can’t be a box-ticking exercise.
The Data (Use and Access) Act is designed to provide a modern, more streamlined framework for the UK’s data protection regulations. While it addresses a range of issues, its impact on automated decision-making and AI is particularly notable.
Automated decision-making describes processes in which outcomes are determined without human intervention. These can include basic tasks such as sorting emails, as well as more complex areas like recruitment, credit scoring and even judicial sentencing. AI systems are now foundational to many of these processes due to their ability to examine large datasets and make accurate predictions or recommendations far faster than humans can.
However, alongside their many clear advantages, AI systems also raise concerns around transparency, fairness and accountability. The Data (Use and Access) Act introduces new governance standards to curb these risks and enable safer innovation.
The Act has four overarching goals:
Although the Act marks an important step towards ensuring the safe and governed use of AI, it is not a complete overhaul. Privacy-conscious businesses that have implemented policies and procedures in compliance with the UK GDPR will find that the Act builds on that framework, extending its provisions on automated decision-making and AI.
For example, Article 22 of the UK GDPR already limits organisations’ ability to make solely automated decisions in some circumstances. The DUAA extends this provision, clarifying when and how automated decisions can be made and strengthening the safeguards that must accompany them. Similarly, both the UK GDPR and the Act emphasise transparency and accountability, with the DUAA going further to strengthen these requirements, particularly around explaining how AI systems are designed, tested and monitored.
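To make the idea of a safeguard concrete, the sketch below shows one way an organisation might keep meaningful human involvement in an otherwise automated decision. It is a minimal illustration only: the scoring rule, thresholds and field names are invented for the example, not drawn from the Act or from any particular system.

```python
# Illustrative sketch only: a hypothetical credit-decision flow showing one way
# to keep a route for human intervention in an otherwise automated decision.
# The scoring rule, thresholds and field names are invented for illustration.
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str          # "approve", "decline" or "refer"
    automated: bool       # was the outcome decided without human input?
    reasons: list[str]    # plain-language reasons to show the individual


def score_applicant(income: float, existing_debt: float) -> float:
    """Toy scoring rule standing in for a real model."""
    return max(0.0, min(1.0, (income - existing_debt) / max(income, 1.0)))


def decide(income: float, existing_debt: float) -> Decision:
    score = score_applicant(income, existing_debt)
    if score >= 0.7:
        return Decision("approve", automated=True,
                        reasons=["Score comfortably above approval threshold"])
    # Adverse or borderline outcomes are referred to a person rather than
    # issued automatically, preserving a route for human review.
    if score <= 0.3:
        return Decision("refer", automated=False,
                        reasons=["Low score: routed to an underwriter for review"])
    return Decision("refer", automated=False,
                    reasons=["Borderline score: routed for human review"])


if __name__ == "__main__":
    print(decide(income=42_000, existing_debt=35_000))
```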
While the DUAA strengthens protections for individuals, regulation only sets the framework – it does not dictate the pace of innovation. In practice, businesses must approach AI with caution and build robust foundations before scaling its use.
New research from IBM shows that 97% of AI-related security breaches involved AI systems that lacked proper access controls, and 63% of victims reported having no governance policies in place to manage AI or prevent the unauthorised use of AI tools, known as ‘shadow AI’.
Employees inputting sensitive personal data or proprietary business information into AI tools can leave organisations exposed to data protection infringements and confidentiality risks. At the same time, AI hallucinations can skew decisions and result in reputational or legal consequences, lost revenue and damaged stakeholder trust.
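One practical control here is to scrub obvious identifiers from text before it leaves the organisation. The sketch below is a minimal illustration rather than a complete data loss prevention solution: the regular expressions and the placeholder call_ai_tool() function are assumptions made for the example.

```python
# A minimal sketch, not a complete DLP solution: strip obvious identifiers from
# text before it is sent to an external AI tool. The patterns and the
# placeholder call_ai_tool() are assumptions for illustration only.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "PHONE": re.compile(r"\b0\d{9,10}\b"),
}


def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


def call_ai_tool(prompt: str) -> str:
    # Placeholder for whichever approved AI service the organisation uses.
    return f"(response to: {prompt!r})"


if __name__ == "__main__":
    raw = "Customer jane.doe@example.com (NI AB123456C) called on 07700900123."
    print(call_ai_tool(redact(raw)))
```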
Leading organisations are now getting on the front foot by establishing internal AI policies. An AI policy sets out guidelines for how employees can use AI tools while emphasising ethical, responsible and secure best practices. These policies not only help ensure compliance with rules such as UK GDPR, the Data (Use and Access) Act and the EU AI Act, but also provide wider benefits for compliance, ethics and operations.
A robust AI policy demonstrates leadership in data privacy and a commitment to accountability. In procurement processes, it can serve as a key differentiator that sets one business apart from another. Day-to-day, an AI policy provides a structured approach for technology use, and ensures that teams understand their roles in overseeing AI outputs – thus reducing the risk of bias, misuse or misinformation.
An AI policy can even be the driving force of innovation. It can help map out AI deployments, uncover areas for expansion and streamline decision-making by clarifying which tools are approved under what conditions. This way, it can accelerate adoption and support effective implementation.
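In practice, the ‘which tools, under what conditions’ element of a policy can be captured in something as simple as an internal register that staff and systems can query. The sketch below is purely hypothetical: the tools, data categories and conditions are invented examples, not recommendations.

```python
# Hypothetical sketch of an internal AI tool register: the tools, data
# categories and conditions below are invented examples, not recommendations.
from dataclasses import dataclass, field


@dataclass
class ToolRule:
    approved: bool
    allowed_data: set[str] = field(default_factory=set)  # permitted data categories
    conditions: str = ""                                  # usage conditions from the policy


AI_TOOL_REGISTER = {
    "general-purpose-chatbot": ToolRule(
        approved=True,
        allowed_data={"public", "internal"},
        conditions="No personal or client-confidential data; human review of outputs.",
    ),
    "cv-screening-model": ToolRule(
        approved=False,
        conditions="Pending bias testing and a data protection impact assessment.",
    ),
}


def check_usage(tool: str, data_category: str) -> str:
    """Return a plain-language answer to 'may I use this tool with this data?'."""
    rule = AI_TOOL_REGISTER.get(tool)
    if rule is None or not rule.approved:
        return f"'{tool}' is not an approved tool; raise a request with the AI governance lead."
    if data_category not in rule.allowed_data:
        return f"'{data_category}' data may not be used with '{tool}'. Conditions: {rule.conditions}"
    return f"Permitted. Conditions: {rule.conditions}"


if __name__ == "__main__":
    print(check_usage("general-purpose-chatbot", "personal"))
    print(check_usage("cv-screening-model", "internal"))
```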
Just as data protection is not a simple compliance formality, there is no one-size-fits-all AI policy either. Each organisation should tailor and continually reassess its approach, and where in-house expertise falls short, seek professional guidance. However, a few key actions can help organisations cover all critical bases:
Whether motivated by compliance or innovation, businesses must now build a case for a robust AI strategy that promotes the responsible use of technology and automated decision-making. Ensuring compliance with the Data (Use and Access) Act provides a great opportunity for UK businesses to build more secure, transparent and responsible ecosystems, while creating an AI policy promotes visibility, streamlines adoption and fosters long-term trust.
Businesses should take this opportunity to align their policies and practices, ensuring that AI works for them – efficiently and ethically.


