
MISRA for Memory Safety
MISRA is the most popular coding standard for the C and C++ languages in the embedded space. There is a flavor for C and one for C++, and each has gone through several versions as the languages, our understanding of them, and the complexity of applications evolved over time.
MISRA is a fully-fledged coding standard, and I have yet to run into a software development team that uses every single rule in it. As a matter of fact, one of the ideas behind MISRA is that you can create a subset of the standard and subscribe to the rules that you want to adhere to, or that your customer wants you to adhere to.
Many software developers have a love-hate relationship with MISRA as a coding standard. I chalk that up to two reasons. First, the older versions of the standard are quite dated. There is a massive progression between MISRA C 2004, MISRA C 2012, and MISRA C 2023 and 2025, which makes sense: the software that teams were developing in 2004 was massively different, in both size and complexity, from the software that teams are working on now.
Second, and as a consequence, some of the rules can feel archaic or overly pedantic for applications that do not have a deep need for functional safety or security.
MISRA offers the concept of deviations: documented exceptions to rules. A project can request an exception to a rule for a specific violation, or an exception to a rule for the entire project.
Take, for example, MISRA C 2025 Rule 2.5 - A project should not contain unused macro definitions. In some modern software development projects, where the goal is to write code that can run on multiple platforms (say, RISC-V and Arm) as well as under multiple operating systems (say, VxWorks, QNX, and Zephyr), there will be macros that are essential for some configurations but unused in others. In such cases, strict enforcement of that rule solely for the sake of adhering to MISRA seems silly.
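As a minimal sketch of what I mean (all macro names here are hypothetical), consider a shared configuration header for such a multi-platform project:

    /* common_config.h - hypothetical shared header for a multi-platform
     * project. Every macro is essential for at least one configuration,
     * but any single build leaves some of them unused, which makes that
     * build violate Rule 2.5. */
    #define BOARD_ARM_TIMER_BASE    0x40001000u  /* used only on Arm    */
    #define BOARD_RISCV_TIMER_BASE  0x02004000u  /* used only on RISC-V */
    #define OS_QNX_PULSE_PRIORITY   10           /* used only on QNX    */
    #define OS_ZEPHYR_STACK_WORDS   512u         /* used only on Zephyr */

A project-wide deviation for Rule 2.5 is a much better fit here than deleting or conditionally compiling every definition.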
When you take a step back and look at the MISRA rules and directives, you quickly realize that you can separate them into roughly three buckets:
Bucket 1 has to do with code quality and understandability.
Bucket 2 has to do with actual bugs and often memory safety and type safety related to undefined behavior.
Then there is bucket 3, covering types of code that can often result in run-time errors. This last bucket is of special interest. It is a bit of a twilight zone: the presence of, for example, a type narrowing does not always imply a run-time error. There is a lot of code out there in the C and C++ world that has been written with narrowing in mind, where the developer knows that no problem will arise because of the higher-level design or other domain-specific limitations.
Let’s have a look at some examples. The following rules and directives are from MISRA C 2025:
Rule 2.5 (A project should not contain unused macro definitions), mentioned before, is clearly a quality and understandability issue (bucket 1), and in my humble opinion, a weak one at that.
Directive 4.1 (Run-time failures shall be minimized) and Rule 1.3 (There shall be no occurrence of undefined or critical unspecified behaviour) are clearly in the bugs arena (bucket 2). This is where null pointer dereferences, buffer overruns, tainted data, and the like fall, and all such violations should be corrected.
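To make bucket 2 concrete, here is a small sketch of my own (not taken from the standard) that contains both defect types:

    #include <stddef.h>

    void bucket2_examples(int *p, size_t n)
    {
        int buf[8];

        /* Null pointer dereference: p is dereferenced before it is
         * checked, which is undefined behaviour when p == NULL. */
        int first = *p;
        if (p == NULL) {
            return;
        }

        /* Buffer overrun: n comes from outside (tainted data) and is
         * used as an index without validation; any n >= 8 writes out
         * of bounds, which is again undefined behaviour. */
        buf[n] = first;
    }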
Rule 10.3 (The value of an expression shall not be assigned to an object with a narrower essential type or of a different essential type category) is in bucket 3: problems may occur in this area, but the code may have been designed with this in mind.
If you are writing safety-critical code, then you absolutely want to correct code in this third bucket. However, if your code is less safety-critical, it is okay to tolerate the violation, unless there is an actual path through the source code that demonstrates a problem in this category.
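A hypothetical sketch of the difference (the sensor scenario is my own, not from the standard): both functions below violate Rule 10.3 by assigning a uint32_t expression to a uint16_t object, but only the second can actually lose data:

    #include <stdint.h>

    /* Safe by design: a 12-bit ADC reading can never exceed 4095, so
     * the narrowing assignment cannot lose information, even though a
     * checker will still flag it under Rule 10.3. */
    uint16_t read_sensor(uint32_t raw)
    {
        uint16_t value = raw & 0x0FFFu;  /* narrower essential type */
        return value;
    }

    /* The same pattern without the domain guarantee is a real bug:
     * any count above 65535 silently wraps. */
    uint16_t total_bytes(uint32_t count)
    {
        uint16_t total = count;          /* possible loss of data   */
        return total;
    }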
Static analysis tools like CodeSonar, by AdaCore, can be configured for the functional safety use case: be extremely strict and flag problems in all three buckets. Or they can be configured to be more easy-going and only generate warnings if there is an actual path through the source code that can trigger a run-time error.
To find such a path, the tool performs deep static analysis, combining a number of technologies such as data flow analysis, control flow analysis, and abstract interpretation, in which the tool tracks the values variables can take and how those values influence the path, or the problem at hand. This last technique is really good at finding instances of undefined behavior, such as buffer overruns and underruns.
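Here is an illustrative sketch (my own code, not actual tool output) of the kind of path such an analysis finds: the overrun exists only on the mode == 2 path, so a checker that does not track values along paths will either miss it or flag every indexing operation:

    static int buf[10];    /* valid indices are 0..9 */

    int select_value(int mode)
    {
        int idx = 0;

        if (mode == 1) {
            idx = 5;       /* in bounds on this path      */
        } else if (mode == 2) {
            idx = 10;      /* one past the end of buf     */
        }

        return buf[idx];   /* overrun only when mode == 2 */
    }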
Where does this more relaxed usage of MISRA (focusing only on buckets 2 and 3) take us? Well, when carefully applied, this subset of MISRA defines a more memory-safe version of C and C++. It is in no way, shape, or form as safe as truly memory- and type-safe languages like Ada, SPARK, or Rust, but it is closer.
That is exactly what CodeSonar was designed for: to find the memory and type problems in your C/C++ code that you should really fix. The default configuration of CodeSonar focuses on code that can cause run-time errors.
At the same time, if you want to be fully MISRA compliant, CodeSonar offers configurations that can get you there, too.
We have great examples of how to use CodeSonar on ffmpeg in the default configuration and on FreeRTOS in the MISRA configuration. Subscribe to our CodeSonar Trial Evaluation to learn more.
Author
Mark Hermeling

Mark has over 25 years’ experience in software development tools for high-integrity, secure, embedded, and real-time systems across the automotive, aerospace, defence, and industrial domains. As Head of Technical Marketing at AdaCore, he links technical capabilities to business value and is a regular author and speaker on topics ranging from the software development lifecycle and DevSecOps to formal methods and software verification.





