False Positives in Static Code Analysis
"Too many false positives" is probably the most common excuses for avoiding static analysis. But static analysis doesn't have to be so noisy.
Years ago, the biggest challenge in static code analysis was trying to find more and more interesting things to check. In Parasoft's original CodeWizard product back in the early 90s, we had 30-some rules based on items from Scott Meyers' book, Effective C++. It was what I like to think of as "Scared Straight for programmers." I mentioned this once to Scott, and while he hadn't thought of it that way... it did give him a pretty good laugh.
Since then, static analysis researchers have constantly pushed the envelope of what can be detected, expanding what static analysis can do and identifying actual defects rather than just weak code. But it still suffers from false positives. Static analysis has shifted the user's focus from hardening the code to searching for bugs, which is great, but now one of the most common hurdles people run into with static code analysis is trying to make sense of the results they get.
Although people do say "I wish static analysis would catch ____" (name your favorite unfindable bug), it’s far more common to hear, "Wow, I have way too many results!" or "Static analysis is noisy!" or "Static analysis false positives are overwhelming!" So as a software testing organization, it's our job to continue to solve that problem for our customers -- to continue to provide tools and features that help you sort through the results you're getting and understand which issues represent the most risk.
What is a False Positive in Static Analysis?
In the context of static analysis, a "false positive" occurs when a static analysis tool incorrectly reports that a static analysis rule was violated. Of course, this can be subjective. Sometimes developers fall into the trap of labeling any error message they don’t like as a "false positive," but this isn’t really correct. In many cases, they simply don’t agree with the rule, they don’t understand how it applies in this situation, or they don’t think it’s important in general (or in this particular case). I would call this noise, rather than a false positive. The funny thing I've found here is that the more clever the tool is, the more likely it is to produce a finding that a developer might not understand at first glance.
False Positives in Pattern-Based Analysis
Pattern-based static analysis doesn’t actually have false positives. If the tool reports that a static analysis rule was violated when it actually was not, this indicates a bug in the rule (because the rule should not be ambiguous). If the rule doesn’t have a clear pattern to look for, it’s a bad rule.
I'm not saying that every reported rule violation indicates the presence of a defect. A violation simply means that the pattern was found, indicating a weakness in the code: a susceptibility to defects. Consider the sketch below.
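To make that concrete, here's a minimal, hypothetical example of the kind of pattern such a rule looks for; the function and the code are invented for illustration, not taken from any particular rule set:

```cpp
#include <cstdio>

// A classic pattern-based check: assignment inside a condition.
// The pattern is unambiguous, so the tool flags every occurrence.
// Here it is almost certainly a typo for '==', so the violation
// points at a real weakness even before anyone proves it is a bug.
void process(int status) {
    if (status = 1) {  // violation: '=' where '==' was likely intended
        std::printf("processing\n");
    }
}
```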
When I look at a violation, I ask myself whether or not this rule applies to my code. If it applies, I fix the code. If it doesn't, I suppress the violation. It's best to suppress static analysis violations directly in the code, so that the suppression is visible to team members and you won't end up reviewing the same violation over and over again; otherwise it's like running a spell checker but never adding your "special" words to its dictionary. The beauty of in-code suppression is that it's independent of the static analysis engine: anyone can look at the code and see that it has been reviewed and that this pattern is deemed acceptable here. This is particularly useful if you need to prove compliance with a coding standard, and if you do need compliance, it's easy to use an existing configuration for standards such as CWE, MISRA, IEC 62304, DO-178B/C, and more. A sketch of what in-code suppression can look like follows.
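Parasoft's tools accept in-line suppression comments along these lines; the exact syntax varies by tool and version, and the rule ID below is a placeholder, so treat this as a sketch rather than a recipe:

```cpp
#include <cstdio>

int drain() {
    int c;
    // Reviewed: the assignment inside the condition is intentional here.
    // "CODSTA-EX-42" is a made-up rule ID; use the ID your tool reports.
    while ((c = std::getchar()) != EOF) { // parasoft-suppress CODSTA-EX-42 "intentional assignment in condition"
        std::printf("%c", static_cast<char>(c));
    }
    return 0;
}
```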
False Positives in Flow-Based Analysis
With flow-based analysis, false positives are inherent to the method, and they are genuinely relevant: they need to be addressed. Flow analysis cannot avoid false positives for the same reason that unit testing cannot generate perfect unit test cases: the analysis has to make assumptions about the expected behavior of the code. Sometimes there are too many options to know what is realistic; sometimes you simply don't have enough information about what is happening in other parts of the system.
The important thing here is that a true false positive is something that is just completely wrong. For example, assume that the static analysis tool you're using says you're dereferencing a null pointer. If you look at the code and see that this is actually impossible, then you have a false positive.
On the other hand, if you simply aren’t worried about nulls in this piece of code because they’re handled elsewhere, then the message (while not important to you) is not a false positive. It's true and happens to be unimportant. The messages from a flow analysis tool range from "true and important" through "true and unimportant" and "true and improbable" to "untrue". There is a lot of variation here, and each should be handled differently.
There is a common trap here as well. As in the null example above, you may believe that a null value cannot make it to this point, but the tool found a way to make it happen. If it's important to your application, be certain to check it and, if necessary, protect against it.
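Here's a minimal sketch of how such a finding can arise; the names and the lookup table are invented for illustration:

```cpp
#include <string>
#include <unordered_map>

struct Config { std::string name; };

// Hypothetical lookup; returns nullptr when the id is unknown.
static std::unordered_map<int, Config> table = {{1, {"alpha"}}};

Config* lookup(int id) {
    auto it = table.find(id);
    return it == table.end() ? nullptr : &it->second;
}

// Flow analysis can trace the path where lookup() returns nullptr and
// flag the dereference below. If callers only ever pass valid ids, the
// finding is "true and unimportant" rather than a false positive; but
// if the tool found a realistic way for an unknown id to get here, the
// "impossible" path deserves a second look.
std::string configName(int id) {
    Config* cfg = lookup(id);
    return cfg->name;  // flagged: 'cfg' may be null on the not-found path
}
```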
It's critical to understand that there is both power and weakness in flow analysis. The power is that it traces paths through the code, finds hot spots, and looks for problems around them. The weakness is that it has to make assumptions to traverse the code, and the further it traverses, the more likely it is to produce an improbable path.
The real problem is that if you start thinking you've cleaned all the code because your flow analysis is clean, you are fooling yourself. Really, you've found some errors, and you should be grateful for that. The absence of flow analysis errors just means that you haven't found anything, not that the code is clean. If you're building safety-critical software, it's best to use a tool like C/C++test, dotTEST, or Jtest that provides both types of static analysis.
Runtime Error Detection
One great, but commonly overlooked, way to complement flow analysis is runtime error detection. Runtime error detection helps you find much more complicated problems than flow analysis can detect, and you have the confidence that the condition actually occurred. Runtime error detection doesn’t have false positives in the way that static analysis does. When it finds a defect, it's because it actually observed it happening during execution — there are no assumptions involved.
Your runtime rule set should closely match your static analysis rule set. The rules can find the same kinds of problems, but runtime analysis has a massive number of execution paths available to it, because at runtime, stubs, setup, initialization, etc. are not a problem the way they are for flow analysis. The main limitation is that runtime detection is only as good as your test suite, because it can only check the paths your tests actually execute. If you're programming in C or C++, especially for embedded and IoT devices, take a look at Insure++ -- it can find more bugs at runtime than any other tool. Instead of getting bogged down by tricky issues like thread problems, memory leaks, and race conditions, you can find them accurately at runtime.
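Here's a small, contrived example of the kind of defect runtime error detection reports with certainty the moment a test exercises the path; whether a static tool flags it depends on how far its analysis can see:

```cpp
#include <cstdlib>
#include <cstring>

// A one-past-the-end heap read. An instrumented runtime (Insure++ is
// one example; the mechanism is general) reports this the moment a
// test executes the path, because the bad read actually happens; no
// assumptions about feasible paths are involved.
int main() {
    char* buf = static_cast<char*>(std::malloc(8));
    if (!buf) return 1;
    std::memset(buf, 'x', 8);
    char last = buf[8];  // runtime error: read past the end of the block
    std::free(buf);
    return last == 'x' ? 0 : 1;
}
```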
Is it Worth the Time?
My approach to false positives is this: if it takes 3 days to fix a bug, it's better to spend 20 minutes looking at a false positive... as long as I can tag it and never have to look at it again. It's a matter of viewing it in the right context. For example, say you have a problem with threads. Thread problems are notoriously difficult to discover; tracking one down might take you weeks. I'd prefer to write the code in such a way that the problem cannot occur in the first place. In other words, I try to shift my process from detection to prevention, as in the sketch below.
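A minimal sketch of what that shift can look like in C++, using RAII so one whole class of thread problem simply can't happen:

```cpp
#include <mutex>

std::mutex m;
int counter = 0;

// Prevention instead of detection: std::lock_guard releases the mutex
// on every exit path (early return, exception), so the "forgot to
// unlock" family of thread problems cannot occur in the first place,
// and there is nothing left for a tool, or a week of debugging, to chase.
void increment() {
    std::lock_guard<std::mutex> lock(m);  // unlocked automatically
    ++counter;
}
```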
Static analysis, when deployed properly, doesn’t have to be a noisy unpleasant experience. Take a look at how we do things differently at Parasoft, especially using the full power of Parasoft DTP to manage results with intelligent analytics that keep you focused on the risk in your software rather than chasing unimportant issues.
Arthur has been involved in software security and test automation at Parasoft for over 25 years, helping research new methods and techniques (including 5 patents) while helping clients improve their software practices.