
AI Is Here. But Is Security? What the Data Says About the State of Embedded Software

2026/02/10 16:33
6 min read

Embedded software has long moved at a pace dictated by careful engineering and long development cycles. That world is gone. AI has accelerated everything, and embedded teams now generate and ship code at speeds that would have been unimaginable a decade ago. The remarkable part is not just how fast AI arrived, but how quickly it has become essential to developing the software at the heart of critical infrastructure. 

RunSafe Security’s 2025 AI in Embedded Systems Report found that 83.5% of organizations have already deployed AI-generated code into production. That includes medical devices, industrial control systems, vehicles, and energy systems. AI is now helping to run the machines that keep society functioning. 

The Change Inside Embedded Teams 

If you talk to embedded engineers, they’ll tell you a similar story. Someone on the team tried an AI tool because they were behind on test coverage. Someone else used it to speed up documentation. Before long, AI became part of the development workflow. 

The survey confirms it: 80.5% of teams already use AI in development, and almost none plan to avoid it. AI is writing code paths that interact with sensors, hardware timers, and control loops. It is drafting logic that influences physical behavior in the real world. 

The adoption happened quickly, and so, naturally, there is a lag in understanding what this means for security. 

When Confidence and Risk Don’t Match 

When asked whether they could detect vulnerabilities in AI-generated code, 96% of teams said they felt confident. On its own, that might sound reassuring. But in the same survey, 73% said AI-generated code poses a moderate or high cybersecurity risk. 

And one in three organizations experienced a cyber incident involving embedded software within the past year. 

The contradiction indicates that teams trust the tools they’ve used for years, such as static analysis, code reviews, and manual testing. But those tools were built for human-written code, written at human speed. AI doesn’t follow the same rules, and neither do the vulnerabilities it introduces. 

Confidence in familiar tools is not the same as readiness for unfamiliar risks. 

A Different Kind of Code Requires a Different Kind of Security 

One thing AI has changed is the shape of the code itself. AI rarely writes code the same way twice, which erodes one of the hidden pillars of embedded security: predictability. Threat models lose clarity. Vulnerabilities find their way into software not because teams are careless, but because the patterns they rely on no longer exist. 

The report highlights this shift in another way. Security is the number-one concern with AI-generated code, cited by 53% of respondents. That worry emerges not from one catastrophic failure but from dozens of small inconsistencies. In embedded systems, small inconsistencies accumulate into big consequences. 

If you want to understand the urgency, memory safety is one example. Buffer overflows and use-after-free errors have plagued embedded systems for decades. If AI models are trained on C/C++ codebases containing these vulnerabilities, the code they generate can reproduce and amplify old mistakes. With only 49% of organizations using memory-safe languages, the industry remains anchored to the same conditions that keep memory-safety flaws recurring. 

Why Embedded Teams Are Looking to Runtime Security 

When you look closely at how organizations are responding, a theme emerges. 60% of teams now rely on runtime protections for memory safety, and runtime exploit mitigation is one of the top three security priorities. The shift reflects a hard-earned lesson that you cannot test your way out of unknown vulnerabilities when the code volume and variability explode. 

Embedded systems used to depend on getting things right before deployment. Now the reality is different. These devices must remain safe even when a vulnerability is present in software. 

AI is forcing security to adopt the perspective of designing for resilience rather than perfection. With runtime protections in place, organizations can rest assured that certain vulnerabilities are mitigated even before patches become available. 

A Security Model That Matches the Moment 

Embedded systems are entering a phase where the amount of code, the sources of code, and the nature of code are all changing faster than traditional security practices can adapt. AI's impact is that teams ship more software, and less predictable software. 

This does not have to be cause for alarm if teams take note of the risk, assume imperfections, and design for resilience. In fact, the report found that organizations are preparing to significantly increase their investment in embedded software security in the next two years, with 93.5% planning to increase investment and more than one-third expecting significant growth. 

Respondents cited code analysis automation, AI-assisted threat modeling, runtime exploit mitigation, and secure-coding training as the most helpful cybersecurity improvements for embedded software development. 

AI is writing the code that runs our critical systems. The question is whether our security frameworks can evolve quickly enough to match the reality we now live in. 

About Joseph M. Saunders 

Joe Saunders is Founder & CEO of RunSafe Security. He leads a team of former national security cyber experts on a mission to make critical infrastructure safe. Working with companies such as Lockheed Martin, GE Vernova, and Vertiv as well as the US Army, US Navy, US Air Force, and dozens of other organizations, RunSafe Security identifies risk in your software supply chain, prevents exploitation of embedded systems, and monitors software for indicators of compromise and bugs.   

Joe is also Chairman of Ask Sage, a cloud-agnostic and large-language-model-agnostic platform that is transforming how government and business operate. He previously served as a management consultant for PricewaterhouseCoopers, a director at Thomson Reuters Special Services, and a member of the management team of TARGUSinfo (sold to Neustar for $800M). 

Joe is a frequently sought-after speaker and panelist and is regularly asked to author articles on cybersecurity, artificial intelligence, and geopolitics. He is particularly interested in the implications of technology competition, economic coercion, and international security on the transformation of the international world order. He is the founder of the International Resilience Institute, a 501(c)(3) non-profit that is building the Global Resilience Index to quantify power and coercion among nation states. 
