Artificial intelligence (AI) tools are spreading rapidly across workplaces, reshaping how everyday tasks get done. From marketing teams drafting campaigns in ChatGPT to software engineers experimenting with code generators, AI is quietly creeping into every corner of business operations. The problem? Much of this adoption is happening under the radar, without oversight or governance.
As a result, shadow AI has emerged as a new security blind spot. Instances of unmanaged, unauthorised AI use will continue to rise until organisations rethink their approach to AI policy.
For CIOs, the answer isn’t to prohibit AI tools outright, but to implement flexible guardrails that strike a balance between innovation and risk management. The urgency is undeniable: 93% of organisations have experienced at least one incident of unauthorised shadow AI use, with 36% reporting multiple instances. These figures reveal a stark disconnect between formal AI policies and the way employees actually engage with AI tools in their day-to-day work.
To get ahead of AI risks, organisations need AI policies that encourage AI usage within reason – and in line with their risk appetite. However, they can’t do that with outdated governance models and tools that aren’t purpose-built to detect and monitor AI usage across the business.
A number of frameworks and resources already exist – including guidance from the Department for Science, Innovation and Technology (DSIT), the AI Playbook for Government, the Information Commissioner’s Office (ICO), and the AI Standards Hub (led by BSI, NPL and The Alan Turing Institute). These can help organisations build a responsible and robust framework for AI adoption, and they complement international standards from bodies such as the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC) and the Organisation for Economic Co-operation and Development (OECD).
As a business establishes its roadmap for AI risk management, it’s crucial that the security leadership team starts by assessing what AI usage really looks like across the organisation. That means investing in visibility tools that analyse access and behavioural patterns to surface generative AI usage in every nook and cranny of the business, as the sketch below illustrates.
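To make that discovery step concrete, here is a minimal sketch that scans web proxy logs for traffic to domains associated with generative AI services. The log columns, file name and domain watchlist are hypothetical assumptions for illustration, not a reference to any particular product or vendor feed; a real deployment would draw on DNS, proxy, CASB or endpoint telemetry.

```python
# Minimal sketch: surface potential generative AI usage from web proxy logs.
# The log format and domain watchlist are illustrative assumptions only.

import csv
from collections import Counter

# Hypothetical watchlist of domains linked to generative AI services.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def find_ai_usage(log_path: str) -> Counter:
    """Count requests to watchlisted AI domains, grouped by user and host."""
    usage = Counter()
    with open(log_path, newline="") as f:
        # Assumed CSV columns: timestamp, user, destination_host, bytes_sent
        for row in csv.DictReader(f):
            host = row["destination_host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[(row["user"], host)] += 1
    return usage

if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder path for the example.
    for (user, host), hits in find_ai_usage("proxy_log.csv").most_common(10):
        print(f"{user:20} {host:30} {hits:5} requests")
```

Even a simple report like this gives the AI council a factual starting point: which tools are actually in use, by whom, and how heavily, before any policy decisions are made.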
With that information in hand, the CISO should consider establishing an AI council made up of stakeholders from across the organisation – including IT, security, legal and the C-suite – to assess the risks, compliance issues and benefits of both authorised and unauthorised tools already permeating the business environment. This council can start to mould policies that meet business needs in a risk-managed way.
For example, the council may identify a shadow AI tool that has taken off but isn’t safe to use, and for which a safer alternative exists. A policy may then explicitly ban the unsafe tool while pointing employees to the approved one, as sketched below. Often these policies will need to be paired with investment not only in security controls, but also in those alternative AI tools. The council can also create a route for employees to submit new AI tooling for vetting and approval as new products come to market.
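One way to operationalise those decisions is to encode the policy as data: each banned tool maps to its approved alternative, and anything unrecognised is routed to the vetting process. The sketch below assumes this structure; all tool names are hypothetical placeholders, not real products.

```python
# Minimal sketch: the AI council's tool policy expressed as data.
# Tool names are hypothetical placeholders for illustration.

# Banned tool -> approved alternative suggested by the council.
BANNED_WITH_ALTERNATIVE = {
    "FreeSummariserX": "Enterprise Copilot (approved tenant)",
}
APPROVED = {"Enterprise Copilot (approved tenant)"}

def check_tool(name: str) -> str:
    """Return policy guidance for a given AI tool."""
    if name in APPROVED:
        return f"'{name}' is approved for use."
    if name in BANNED_WITH_ALTERNATIVE:
        alt = BANNED_WITH_ALTERNATIVE[name]
        return f"'{name}' is banned; please use '{alt}' instead."
    # Unknown tools go through the council's vetting process.
    return f"'{name}' has not been vetted; submit it to the AI council for review."

print(check_tool("FreeSummariserX"))
print(check_tool("NewCodeAssistant"))
```

Keeping the policy in a machine-readable form like this means the same source of truth can drive employee-facing guidance, proxy block lists and the vetting queue, rather than each living in a separate document.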
By creating this direct, transparent line of communication, employees can feel reassured that they are adhering to company AI policies and empowered to ask questions, while also being encouraged to explore new tools and methods that could support growth down the line.
Engaging and training employees will play a crucial role in securing organisational buy-in to keep shadow AI at bay. With better policies in place, employees will need guidance on the nuances of responsible AI use, the rationale behind specific policies, and data-handling risks. This training can help them become active partners in innovating safely.
In some sectors, the use of AI in the workplace has often been a taboo topic. Clearly outlining best practice for responsible AI usage and the rationale behind an organisation’s policies and processes can eliminate uncertainty and mitigate risk.
Shadow AI isn’t going away. As generative tools become more deeply embedded in everyday work, the challenge will only grow. Leaders must decide whether to see shadow AI as an uncontrollable threat or as an opportunity to rethink governance for the AI era. The organisations that thrive will be those that embrace innovation with clear guardrails, making AI both safe and transformative.


