Data privacy isn’t just a back-office obligation or a once-a-year compliance check. Strong data governance should be the beating heart of the organisation now that business teams rely on free-flowing, secure data to do their jobs.
Critically, it is the power source behind AI. As AI becomes embedded across sales, commerce, IT operations, and everyday work, organisations are grappling with growing complexity, tightening regulation, and rising customer expectations around how their data is used.
With so much at stake around data privacy, your data brand is entwined with your organisation’s entire brand. From hybrid IT estates and agentic AI to evolving global frameworks, leaders agree on one thing: privacy and security must be calibrated carefully from day one if businesses want to innovate with confidence.
“When you’re dealing with large-scale, always-on IT environments, data privacy stops being a box-ticking exercise – it needs to be factored into every checkpoint. We see that complexity daily, across hybrid estates, legacy platforms and global cloud environments. In these settings, data is fluid, and risk grows with that flow wherever trust isn’t built in from the start.
“Mitigating data risk needs to be at the top of every corporate agenda. Our Readiness Report found that only three in ten (31%) organisations feel ready to manage external risks, while the same proportion say IT complexity itself is holding them back. On top of this, nearly two-thirds (65%) have already changed their cloud strategies in response to regulatory and geopolitical pressures, underscoring the need for robust, compliant governance structures for IT estates.
“On Data Privacy Day 2026, organisations can only innovate with confidence when they work with trusted partners who understand this complexity, embed privacy and governance by design, and help turn regulatory pressure into resilience and competitive advantage. It’s key to remember that data privacy and security is a living, continuous process.”
“Protecting customer data is a real practice, not a compliance exercise. For teams handling personal information, particularly in sales and marketing, the risk of misuse or mishandling remains high with third-party AI tools embedded in daily work. Used properly, AI strengthens protections by detecting anomalies, reinforcing compliance, and reducing human error. For example, if a sales rep tries to send information to an unintended contact, integrated AI can alert them or block the action.
“While technology has a vital role to play, it must be implemented thoughtfully. Businesses need clarity on how data flows through their systems and supply chains, and they must choose partners who can prove their commitment to privacy. Certifications such as ISO 27001 and SOC 2 are evidence of robust, audited practices built for accountability and trust. As AI and data management rapidly advance, the ability to commit to and prove compliance will become a critical requirement for earning customer trust at the start of the sales process.”
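The unintended-recipient safeguard described above can be illustrated in miniature. This is a hypothetical, rule-based stand-in for the AI-driven check the quote envisages (a real system would score recipient anomalies statistically rather than use a hard-coded contact list); the function name and contact data are invented for the example:

```python
def check_outbound(deal_contacts: set[str], recipients: list[str]) -> list[str]:
    """Return any recipients not associated with the deal, so the
    sending client can warn the rep or block the message."""
    # Normalise case so Buyer@acme.example matches buyer@acme.example.
    known = {c.lower() for c in deal_contacts}
    return [r for r in recipients if r.lower() not in known]

# Hypothetical deal contacts and an outbound message with a stray address.
flagged = check_outbound(
    {"buyer@acme.example", "legal@acme.example"},
    ["buyer@acme.example", "personal.friend@mail.example"],
)
# flagged -> ["personal.friend@mail.example"]: prompt the rep before sending
```

In practice this kind of check sits inline in the email or CRM client, which is what lets it alert or block at the moment of sending rather than after the data has left the organisation.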
“Data Privacy Day 2026 is a moment to reinforce trust with customers. We’re operating in a landscape of tighter regulatory controls, many of them sector-specific – including DORA, which has now reached the first anniversary of its full application. Alongside GDPR, the EU AI Act, and other major frameworks, this should be a reminder that compliance is always evolving.
“We’re in an environment where innovation and regulation grow side by side, and businesses need to move with both. If you’re a bank or a retailer adopting new AI tools, whether that’s chatbots or personalised shopping experiences, those innovations have to be matched with robust data management and the right controls. You can’t simply add functionality without building in compliance. At the same time, customer expectations around security are rising. Each week brings fresh headlines about data leaks, breaches, and IT failures, often followed by significant reputational backlash.
“In a digital world, your IT brand is your brand. If you can’t reliably protect data and meet expectations for how it’s handled, you’re not dependable: you’re a risk. Trust, like reliability, has to be engineered into your systems and your culture from day one.”
Anthony Salcito, General Manager, Enterprise, Coursera:
“This Data Privacy Day, organisations should scrutinise the way they manage data security in relation to AI. The rapid deployment of AI has created an urgent need for robust governance frameworks. Fortunately, we see that individuals are beginning to build the skills needed to cultivate these frameworks. In our 2026 Job Skills Report, Information Privacy has emerged among the top 10 fastest-growing skills, indicating a shift from governance as a niche legal concern to a core operational requirement for data teams.
“In addition, as AI automates technical tasks, human skills are becoming indispensable for data workers. For instance, ‘human-in-the-loop’ skills like Debugging rank in the top ten for IT learners, and our Data learner cohort shows over 100% year-over-year growth in Data Quality (+108%) and Data Cleansing (+103%).
“This demand is sparked by a new, complex regulatory landscape. With over 120 countries having enacted data privacy laws and governments around the world now adopting new AI regulations, professionals who understand, and are capable of responding to, these frameworks are in high demand. This is particularly the case as the cost of noncompliance is steep: organisations in breach of the EU AI Act potentially face fines of up to €35 million or 7% of their global turnover.
“Having the right data specialists is not just a compliance issue, it’s also a productivity one. AI tools are only as strong as the data foundations that sit beneath them, meaning that AI innovation is contingent on strong data skills. These foundations are reliant on nurturing skilled knowledge workers, who must be upskilled and reskilled as well as hired. Good data management will make the difference between success and failure when it comes to companies seeking to leverage AI for productivity and prosperity.”
“The theme of this year’s Data Privacy Week, “Take Control of Your Data,” couldn’t be timelier. As organisations race to adopt AI, the question isn’t whether innovation will move fast – it will. The real question is whether companies can keep pace with the trust expectations that come with it: knowing what data they have, where it lives, how it moves, and how it’s being reused in new AI-driven workflows. Control is no longer a back-office compliance exercise; it’s the condition for responsible innovation.
“At Box, this matters because trust is fundamental to our relationship with customers. We handle customer data every day, and strong privacy practices are essential – not only to meet regulatory obligations, but to exceed customer expectations. In an AI-first era, where state-of-the-art models can process enormous volumes of information, privacy controls and responsible AI use become even more critical to prevent misuse, preserve confidentiality, and maintain customer confidence.
“That’s why our approach is grounded in clear principles: transparency about how data is collected, used, and stored; security through strong technical and operational safeguards; and responsible practices that reflect best-in-class governance and compliance. These aren’t static commitments – they require operational discipline as privacy and AI governance frameworks mature globally, from GDPR enforcement to the EU AI Act.
“Box AI is built to help customers take control of their data in practical, scalable ways, so teams can use AI without losing sight of governance. When organisations can see and manage data flows, apply the right controls, and demonstrate accountability, they don’t just reduce risk, they unlock sustainable innovation. In this next chapter of AI adoption, trust isn’t a byproduct of progress; it’s the foundation that makes progress possible.”
“As AI and autonomous agents become embedded into how people discover, buy and manage commerce, they will undoubtedly accelerate growth – but also introduce new challenges. On Data Privacy Day, it’s important to recognise that in this new era of agentic commerce, identity and trust are the new perimeter, and both must be integrated into the full commerce tech stack.
“Transactions no longer begin and end on a single website or device; they happen across conversations, platforms and automated workflows. As AI and autonomous agents become more commonplace, there is a risk that merchants lose clarity over who they are transacting with, and that actions are taken without clear authority. This may reduce friction in the short term, but it can increase exposure to fraud and misuse. This new commerce environment demands a security model where identity is verifiable across surfaces, agent actions are permissioned, and data access is controlled by default, not by exception.
“Security in commerce is also inherently shared. Platforms, merchants, partners and app developers all play a role, but the job of the commerce platform is to ensure merchants have the tools to thrive in this AI-first environment. When done effectively, the outcomes are tangible: lower fraud exposure, fewer misconfigurations, faster incident response and clearer assurance for vendor risk teams.
“As AI reshapes commerce, trust will be the deciding factor. Data Privacy Day offers us a valuable reminder that trust is built on security that evolves at the same pace as AI itself.”
“Agentic AI is opening up a new era of productivity. Systems that can reason, act, and collaborate autonomously have the potential to transform how work gets done, freeing people up to focus on higher-value tasks and driving better outcomes across the business – especially in activities such as document generation, where slow, repetitive, and error-prone manual processes continue to frustrate.
“But for agentic AI to really be adopted at scale, users need confidence that their data won’t be misused. That’s why organisations that want to truly excel with agentic AI must make data privacy a core priority. Strong data governance and security don’t slow innovation, they enable it. When data is protected and well-managed, agentic AI can operate responsibly and at scale, delivering value while maintaining trust.
“At Templafy, we know that the organisations that will lead in this next phase of AI adoption are those that embed privacy, security, and compliance into their AI strategies early on. This is why Templafy’s document agent adopts a security-first approach. When data protection is treated as a competitive advantage, it creates a foundation that allows the value of agentic AI to really flourish.”
“Businesses are rushing to adopt agentic AI, exposing a large gap between deployment and control. While AI agents can increase efficiency by acting and making decisions, without proper oversight, shadow agents pose a serious threat to data privacy.
“According to Vanta’s recent State of Trust report, 65% of respondents believe that their current use of agentic AI outpaces their understanding of it. That gap in understanding means sensitive data can leak. It’s critical for businesses to demonstrate transparent practices, enforce clear rules, and deliver auditable outcomes.”
Across every sector, the message is consistent. Trust is engineered, not assumed. Whether managing complex IT environments, deploying AI-driven tools, or enabling agentic commerce, organisations that treat data privacy as a living, continuous discipline will be the ones that succeed. Strong governance, skilled people, and trusted partners turn regulatory pressure into resilience and growth. On Data Privacy Day 2026, the organisations that lead will be those that understand privacy is not a brake on progress, but an accelerator for confident, sustainable innovation.

