
AI’s Data Dilemma: DevOps Innovation Outpacing Privacy

2025/12/19 23:03
5 min read

DevOps leaders know the risks of mishandling sensitive data. Yet according to Perforce’s 2025 State of Data Compliance and Privacy Report, most organizations are still taking that gamble, especially when it comes to AI. While there is near-universal awareness of data risk among the DevOps community, sensitive data like PII continues to be used in risk-prone non-production environments such as AI.

95% of the survey’s respondents use sensitive data (such as social security numbers and customers’ financial and health information) in AI environments. These results become even more worrying when considering that 100% of the survey’s respondents (the majority of whom are decision-makers and approximately half are director-level or above) work for organizations that are subject to data privacy regulations such as GDPR, HIPAA, or CCPA.

And the growth of AI is amplifying the privacy paradox for DevOps senior executives who are weighing the balance between pressure to innovate quickly against the need to safeguard sensitive data and meet compliance requirements.

The Privacy Paradox

On the one hand, the survey underscored the acceptance of risky behavior around the use of sensitive data in environments like AI. For example:

• 91% of organizations surveyed believe that sensitive data should be allowed in AI model training, fine-tuning, and retrieval augmented generation (RAG)

• 82% believe it is safe to use sensitive data in AI model training and fine-tuning. Only 7% said that it is not safe.

• 84% of organizations still allow compliance exceptions in non-production environments like AI, further exacerbating the risk of exposing sensitive data.

On the other hand, these same organizations are anxious about the consequences, which many are already experiencing at first hand:

• 78% are highly concerned about theft or breaches of model training data.

• 68% worry about privacy and compliance audits.

• 60% have experienced data breaches or data theft in software development, testing, AI, and analytics environments, representing an 11% increase since last year.

• 32% have faced audit issues, and 22% reported regulatory non-compliance or fines.

Why DevOps Teams Choose Risk Over Governance

There are multiple contributing factors to why sensitive data continues to be used in risk-prone non-production environments, but according to 76% of the survey’s respondents, the biggest reason is to support data-driven decision-making. This is understandable, given that teams involved in training AI models or in software development and testing need realistic data to train or test scenarios that mimic real-world environments. Traditionally, the further data strays from production-like values, the less valuable it becomes.

At the same time, protecting data is perceived to be difficult, disruptive, and a hindrance to innovation. DevOps teams want access to data quickly, further exacerbating their temptation to use real and sensitive data.

A Shift Towards More Responsible Practices

However, there is light on the horizon. While organizations today may struggle to balance ambition with accountability, the majority have tangible intentions to address the challenge. For example, the survey also found that 54% know they need to protect sensitive data in AI model training and tuning, and 86% say they plan to invest in AI-specific data privacy solutions over the next couple of years. That said, only 34% believe there are sufficient approaches and tools for tackling data privacy in AI environments.

Regardless, DevOps leaders have already taken steps to help privacy and security keep pace with operational demands to innovate, especially with AI. 95% of organizations have a masking mandate or policy in place for non-production environments, and 95% have turned to static data masking, a technique that hides sensitive data while preserving its integrity for testing, AI model training, and other purposes. Furthermore, with modern data masking tools, users can be served realistic data in a fraction of the time it would previously have taken.
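The core idea of static data masking can be illustrated with a short sketch. The helper below is a hypothetical example, not taken from the report or from any particular masking product: it deterministically replaces a social security number with a format-preserving fake value, so the same real value always masks to the same fake one and joins across tables still line up.

```python
import hashlib

def mask_ssn(ssn: str, salt: str = "demo-salt") -> str:
    """Deterministically replace an SSN with a fake, format-preserving value.

    The same input always maps to the same masked output, so referential
    integrity across tables is preserved -- a key property of static masking.
    """
    digest = hashlib.sha256((salt + ssn).encode()).hexdigest()
    # Fold the first nine hex characters down to decimal digits.
    digits = "".join(str(int(c, 16) % 10) for c in digest[:9])
    return f"{digits[:3]}-{digits[3:5]}-{digits[5:9]}"

# Masked records keep the XXX-XX-XXXX shape, so downstream test or
# training code that expects real-looking SSNs keeps working.
record = {"name": "Jane Doe", "ssn": "123-45-6789"}
masked = {**record, "ssn": mask_ssn(record["ssn"])}
```

Production masking tools handle many more data types and properties than this; the sketch only shows the principle of consistent, format-preserving substitution.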

How Masked and Synthetic Data Can Help Break the Barrier

In addition, nearly half (49%) also use synthetic data in AI development. Synthetic data is artificially generated information that mimics the properties of real data, thereby avoiding the need to use real data at all. That said, digging deeper into the survey’s results, in fact only 36% have used synthetic data at small scale and in experimentation mode.
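The idea behind synthetic data can likewise be sketched in a few lines. This is a toy example under assumed parameters (a clipped normal age distribution with a chosen mean and spread), not a method from the report: it generates a column of values that mimics the shape of real data while containing no real records.

```python
import random
import statistics

def synth_ages(n: int, mean: float = 42.0, stdev: float = 12.0,
               seed: int = 0) -> list[int]:
    """Generate synthetic customer ages that mimic a real column's
    distribution without tracing back to any real person."""
    rng = random.Random(seed)  # seeded, so the dataset is reproducible
    return [max(18, min(90, round(rng.gauss(mean, stdev))))
            for _ in range(n)]

ages = synth_ages(1000)
# The sample mean tracks the target distribution closely enough for
# development and testing purposes.
mean_age = statistics.mean(ages)
```

Real synthetic-data tools go much further, preserving correlations across columns and referential integrity across tables, but the privacy property is the same: no generated value corresponds to a real record.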

Looking ahead, most DevOps teams are likely to use a blend of both masked and synthetic data, with each applied depending on the situation. For example, static data masking might be used for compliance, dynamic masking for select use cases, and synthetic data for new applications. However, it is essential to note that tools can only do so much: a culture of governance and consistent enforcement is equally necessary; dipping in and out of data privacy is not an option.

With the acceleration of AI, getting the right framework of tools, techniques, and best practices cannot come fast enough for today’s DevOps professionals. In the race to innovate, the protection and security of sensitive data must not be traded off against speed. Fortunately, by adopting the right culture, techniques, and tools, there is no need to make a sacrifice. As organizations deepen their commitment to AI, now is the time to invest in strategies that robustly protect sensitive data while maintaining velocity, scale, and innovation. No longer is there a need to tread the AI data privacy tightrope.

