
AI’s Data Dilemma: DevOps Innovation Outpacing Privacy

2025/12/19 23:03

DevOps leaders know the risks of mishandling sensitive data. Yet according to Perforce’s 2025 State of Data Compliance and Privacy Report, most organizations are still taking that gamble, especially when it comes to AI. While there is near-universal awareness of data risk among the DevOps community, sensitive data like PII continues to be used in risk-prone non-production environments such as AI.

95% of the survey’s respondents use sensitive data (such as social security numbers and customers’ financial and health information) in AI environments. These results become even more worrying when considering that 100% of the survey’s respondents (the majority of whom are decision-makers and approximately half are director-level or above) work for organizations that are subject to data privacy regulations such as GDPR, HIPAA, or CCPA.

And the growth of AI is amplifying the privacy paradox for senior DevOps executives, who must weigh the pressure to innovate quickly against the need to safeguard sensitive data and meet compliance requirements.

The Privacy Paradox

On the one hand, the survey underscored the acceptance of risky behaviour around the use of sensitive data in environments like AI. For example:

• 91% of organizations surveyed believe that sensitive data should be allowed in AI model training, fine-tuning, and retrieval-augmented generation (RAG).

• 82% believe it is safe to use sensitive data in AI model training and fine-tuning; only 7% said it is not safe.

• 84% of organizations still allow compliance exceptions in non-production environments like AI, further exacerbating the risk of exposing sensitive data.

On the other hand, these same organizations are anxious about the consequences, which many are already experiencing firsthand:

• 78% are highly concerned about theft or breaches of model training data.

• 68% worry about privacy and compliance audits.

• 60% have experienced data breaches or data theft in software development, testing, AI, and analytics environments, an 11% increase since last year.

• 32% have faced audit issues, and 22% reported regulatory non-compliance or fines.

Why DevOps Teams Choose Risk Over Governance

There are multiple contributing factors to why sensitive data continues to be used in risk-prone non-production environments, but according to 76% of the survey’s respondents, the biggest reason is to support data-driven decision-making. This is understandable, given that teams involved in training AI models or in software development and testing need realistic data to train or test scenarios that mimic real-world environments. Traditionally, the further data drifts from production-like values, the less valuable it becomes.

At the same time, protecting data is perceived to be difficult, disruptive, and a hindrance to innovation. DevOps teams want access to data quickly, further exacerbating their temptation to use real and sensitive data.

A Shift Towards More Responsible Practices

However, there is light on the horizon. While organizations today may struggle to balance ambition and accountability, the majority have tangible intentions to address the challenge. For example, the survey also found that 54% know they need to protect sensitive data in AI model training and tuning, and 86% say they plan to invest in AI-specific data privacy solutions over the next couple of years. That said, only 34% believe there are sufficient approaches and tools for tackling data privacy in AI environments.

Regardless, DevOps leaders have already taken steps to help privacy and security keep pace with operational demands to innovate, especially with AI. 95% of organizations have a masking mandate or policy in place for non-production environments, and 95% have turned to static data masking, a technique that hides sensitive data while preserving its integrity for testing, AI model training, and other purposes. Furthermore, with modern data masking tools, users can be served realistic data in a fraction of the time it would previously have taken.
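To make the technique concrete, here is a minimal sketch of static data masking in Python. The field names and masking rules are illustrative assumptions, not taken from any specific masking product; the key idea is that masked values stay format-valid and deterministic, so referential integrity survives for testing and model training.

```python
import hashlib

def mask_ssn(ssn: str) -> str:
    """Replace an SSN with a deterministic, format-preserving fake."""
    digest = hashlib.sha256(ssn.encode()).hexdigest()
    digits = "".join(c for c in digest if c.isdigit())[:9].ljust(9, "0")
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:]}"

def mask_email(email: str) -> str:
    """Keep the domain (useful for routing tests) but hide the user part."""
    user, _, domain = email.partition("@")
    token = hashlib.sha256(user.encode()).hexdigest()[:8]
    return f"user_{token}@{domain}"

record = {"ssn": "123-45-6789", "email": "jane.doe@example.com", "plan": "gold"}
masked = {
    "ssn": mask_ssn(record["ssn"]),
    "email": mask_email(record["email"]),
    "plan": record["plan"],  # non-sensitive fields pass through unchanged
}
print(masked)
```

Because the hash is deterministic, the same real SSN always maps to the same fake one, so joins across masked tables still line up, which is what keeps the data useful downstream.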

How Masked and Synthetic Data Can Help Break the Barrier

In addition, nearly half (49%) also use synthetic data in AI development. Synthetic data is artificially generated information that mimics the properties of real data, avoiding the need to use real data at all. That said, digging deeper into the survey’s results, only 36% have used synthetic data at small scale and in experimentation mode.
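The principle behind synthetic data can be sketched in a few lines: fit simple statistics to real values, then sample fresh records from that fit so no real value is ever copied. The schema and distribution below are illustrative assumptions, assuming a stand-in list for production data.

```python
import random
import statistics

# Stand-in for a real, sensitive column (e.g. customer ages in production).
real_ages = [34, 41, 29, 52, 38, 45, 31, 60]

mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)

def synthetic_record(rng: random.Random) -> dict:
    return {
        # Draw age from a normal fit to the real distribution, clamped
        # to a plausible range so outliers stay realistic.
        "age": max(18, min(99, round(rng.gauss(mu, sigma)))),
        # Identifiers are generated, never sampled from real data.
        "customer_id": f"CUST-{rng.randrange(10**6):06d}",
    }

rng = random.Random(42)  # seeded so the synthetic set is reproducible
synthetic = [synthetic_record(rng) for _ in range(5)]
print(synthetic)
```

Production-grade synthetic data tools model joint distributions and correlations rather than single columns, but the trade-off is the same: statistical realism without exposure of any individual’s record.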

Looking ahead, most DevOps teams are likely to use a blend of both masked and synthetic data, with each applied depending on the situation. For example, static data masking might be used for compliance, dynamic masking for select use cases, and synthetic data for new applications. However, it is essential to note that tools can only do so much: a culture of governance and consistent enforcement is equally necessary; dipping in and out of data privacy is not an option.

With the acceleration of AI, getting the right framework of tools, techniques, and best practices cannot come fast enough for today’s DevOps professionals. In the race to innovate, the protection and security of sensitive data must not be traded off against speed. Fortunately, by adopting the right culture, techniques, and tools, there is no need to make a sacrifice. As organizations deepen their commitment to AI, now is the time to invest in strategies that robustly protect sensitive data while maintaining velocity, scale, and innovation. No longer is there a need to tread the AI data privacy tightrope.

