AdaCore: Build Software that Matters
Apr 28, 2026

The Compliance Trap: Why More Code, More Connectivity, and More Regulation Are Colliding (and What to Do About It)

Software development teams in embedded systems are caught between three forces moving simultaneously and in opposing directions. Regulation is tightening. Customer expectations are rising. And the AI productivity miracle hasn't arrived, at least not yet, and not in the form most teams anticipated.

Understanding how those forces interact is the first step. Finding a path through them is the more important conversation.

The Regulatory Floor Just Rose, Permanently

The EU Cyber Resilience Act (CRA) and guidance from the NSA and CISA are not advisory suggestions. They represent a structural shift in accountability. Device manufacturers are now expected to build security in from the start, maintain products across long service lifecycles, and respond to serious vulnerabilities with timely updates.

For industries like automotive, aerospace, and defense, where a product's operational life can span decades, that commitment is significant. It adds cost, adds process, and adds liability that must be reflected in your risk register, if it isn't already.

The regulatory baseline is moving. The question is whether your development process is moving with it.

More Capability Means More Code and a Much Larger Attack Surface

At the same time, customers are demanding more from every device: more connectivity, more personalization, more integration with adjacent systems. That demand is legitimate and commercially necessary. But it has a direct technical consequence.

More features mean more code. More connectivity means more networked exposure. More integration means more interfaces, each of which is a potential vulnerability. The attack surface on a modern embedded system is an order of magnitude larger than it was ten years ago.

Here is the problem those two forces create together: regulations require you to secure what you build, while market demands require you to build more. That is not a temporary tension. It is the permanent condition of embedded software development going forward.

The AI Productivity Promise Isn't Delivering Yet

Large Language Models were supposed to resolve this. The headline figure, a single engineer operating at 10x efficiency, circulates widely. In practice, most teams aren't seeing that. Productivity gains are real but modest, and they come with a catch that is especially acute in safety-critical domains.

Every line of AI-generated code still requires review, assessment, and testing. In a certified environment, that obligation doesn't shrink because a machine wrote the code. In some cases, the review burden increases because engineers must verify output they didn't author and may not immediately trust. The efficiency gain and the compliance overhead largely cancel each other out, unless you change the underlying approach.

The Language You Write In Determines How Much of This Problem You Inherit

Approximately 70% of serious software security vulnerabilities trace back to memory safety issues, a figure corroborated by Microsoft, Google, and multiple government cybersecurity bodies. C and C++ do not manage memory safely by design. That is not a criticism; it reflects the era in which they were built. But continuing to write new embedded software in those languages means inheriting that risk category indefinitely.

Memory-safe languages eliminate this class of vulnerability at the source. They don't merely reduce the likelihood of memory errors; they make them structurally impossible. Go one step further, and the case becomes compelling.

Languages with formal methods, such as SPARK, a subset of Ada designed for high-assurance development, can statically prove the absence of runtime errors before a line of code reaches a test bench. Evidence from fielded programs suggests this approach can reduce defects in deployed software by up to 75%. In an environment where post-release vulnerabilities carry regulatory consequences, that is not a quality metric. It is a risk reduction number.
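To make that concrete, here is a minimal, hypothetical SPARK sketch (names invented for illustration; spec and body shown together for brevity). The `Percent` subtype and the `Post` contract give GNATprove what it needs to prove, before any test runs, that no overflow, range violation, or other run-time error is possible in this unit:

```ada
--  Hypothetical example: clamping an integer into 0 .. 100.
--  The constrained subtype plus the postcondition let GNATprove
--  discharge every run-time check statically.
package Clamp_Pkg with SPARK_Mode is

   subtype Percent is Integer range 0 .. 100;

   function Clamp (X : Integer) return Percent with
     Post => Clamp'Result = Integer'Max (0, Integer'Min (100, X));

end Clamp_Pkg;

package body Clamp_Pkg with SPARK_Mode is

   function Clamp (X : Integer) return Percent is
   begin
      if X < 0 then
         return 0;
      elsif X > 100 then
         return 100;
      else
         return X;  --  Here X is provably in 0 .. 100.
      end if;
   end Clamp;

end Clamp_Pkg;
```

Running `gnatprove` over a unit like this discharges the checks mechanically; a failed proof points at the exact expression that could fail at run time, which is precisely the evidence a regulator-facing process needs.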

LLMs and Formal Methods Together Solve a Problem Neither Solves Alone

Here is where the two threads converge into something actionable.

SPARK integrates cleanly with existing C and C++ codebases. Legacy code doesn't need to be discarded; it can coexist. And critically, LLMs can take existing C or C++ code and convert it to SPARK, with the formal proofs generated as part of that process.

That changes the AI productivity equation. Instead of using an LLM to generate unverified code faster, you use it to migrate toward a language where the absence of run-time failures can be demonstrated mathematically. The AI does the laborious translation work. The formal methods toolchain provides the assurance. The combination delivers both efficiency and verifiability, which is exactly what the regulatory environment now demands.
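As an illustration of the migration pattern (hypothetical code, not drawn from any real program), a small C routine and a SPARK counterpart, whether hand-written or LLM-produced, might look like this. The C version relies on the caller passing a valid pointer/length pair and can silently overflow its accumulator; the SPARK translation turns both obligations into explicit, checkable contracts:

```ada
--  Original C, for comparison:
--
--     int sum_buf(const int *buf, size_t n) {
--         int s = 0;
--         for (size_t i = 0; i < n; i++) s += buf[i];
--         return s;
--     }
--
--  SPARK translation: the unconstrained array carries its own
--  bounds, and the precondition bounds each element and the
--  length, which is what allows GNATprove to show the
--  accumulation cannot overflow (the body would also carry a
--  loop invariant to complete the proof).
package Bufs with SPARK_Mode is

   type Int_Array is array (Positive range <>) of Integer;

   function Sum (Buf : Int_Array) return Integer with
     Pre => Buf'Length <= 1_000
            and then (for all I in Buf'Range =>
                        Buf (I) in -1_000 .. 1_000);

end Bufs;
```

The point of the exercise is not the arithmetic; it is that the assumptions the C code left implicit are now part of the interface, where an LLM can draft them and a prover can check them.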

This is not a theoretical combination. The tools exist. The methodology is proven.

You Can Test This in Two Sprints, at Zero Additional Budget

The entire toolchain (SPARK, the Ada compiler, and the supporting formal verification environment) is available as open source. That means a small, motivated team can be given a focused mandate: take a bounded piece of existing code, convert it to SPARK, and add one new feature. Two development sprints. No budget request. No enterprise commitment.

If that prototype succeeds (and the probability is high), you have internal evidence, not vendor claims. You have engineers who understand the approach. And you have a credible basis for a broader conversation.

If you decide to scale, experienced organizations exist that specialize in exactly this transition: enterprise tooling, certification support, and the long-term product support your regulatory obligations now require.

There is no credible argument for not running the experiment. The cost is two sprints. The upside is a development approach built for the environment your teams are already operating in.

The question worth bringing to your team this week: which piece of your existing codebase would benefit most from provable correctness, and who do you have that could lead a two-sprint proof of concept?

Author

Mark Hermeling

Head of Technical Marketing, AdaCore

Mark has over 25 years' experience in software development tools for high-integrity, secure, embedded, and real-time systems across the automotive, aerospace, defense, and industrial domains. As Head of Technical Marketing at AdaCore, he links technical capabilities to business value and is a regular author and speaker on topics ranging from the software development lifecycle and DevSecOps to formal methods and software verification.
