Static Analysis & Development Testing for Embedded Devices
By Jason Schadewald, Product Manager at Parasoft
You know those conversations that you have more times than you can count? Well, I recently had one of those at Design West with a very bright software engineer. This poor guy had a number of experiences with static analysis tools that left him with the "compiler warning equivalence" impression: the belief that static analysis findings are no more actionable than compiler warnings. If your static analysis experience is largely with freeware and your training is limited to Internet forums, then I certainly understand how that impression can form. On top of that, he said that the static analysis tools he tried reported "over 20,000 messages."
It’s easy to see why he and many developers like him would find the effort insurmountable. What we’re dealing with here is a question of validity and quantity of results, and a mature Development Testing platform will help you manage both with minimal human intervention.
Validity of Static Analysis Results
Medical, automotive, aerospace, railway, and indeed all software developers wrestle with the validity of results every day, and the objects of scrutiny range from customers to QA to management to the very tools on the desktop. For developers, it can sometimes seem like everyone and everything is a source of new work, and it all has to be done yesterday. (Is it any wonder that the average developer stays with a company for only 2-3 years?)
While validity is a multifaceted topic, this guy was focused specifically on the “warning” nature of the messages. If it’s just a warning, then it’s not an error, and it’s not worth his time.
What's really going on is that many development testing tools fail to justify their results in an easily accessible manner. Some even hide that lack of justification behind a glorified "triage" process! In reality, different static analysis rules serve different purposes: preventing defects, enforcing consistency for the sake of productivity, actual bug-finding, and so on. A mature tool will provide full justification for its results as well as a means to organize them by priority, severity, and risk.
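To make the idea concrete, here is a minimal sketch of what "organizing by priority, severity, and risk" might look like. The `Finding` record, rule IDs, and numeric scales are all hypothetical, invented for illustration; they are not the output format of any particular tool.

```python
from dataclasses import dataclass

# Hypothetical finding record; a real tool would expose rule IDs,
# severity levels, and risk scores in its own format.
@dataclass
class Finding:
    rule: str      # illustrative rule ID
    severity: int  # 1 = most severe
    risk: int      # 1 = highest risk
    message: str

findings = [
    Finding("NAMING-02", severity=4, risk=4, message="Inconsistent identifier style"),
    Finding("MEM-01", severity=1, risk=1, message="Possible NULL dereference"),
    Finding("PORT-11", severity=3, risk=2, message="Implementation-defined cast"),
]

# Order the backlog so the most severe, highest-risk items surface first.
triaged = sorted(findings, key=lambda f: (f.severity, f.risk))
for f in triaged:
    print(f.rule, "-", f.message)
```

The point is not the sorting itself but that every finding carries enough metadata to be ranked, so a 20,000-message report becomes an ordered queue rather than an undifferentiated wall of warnings.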
This brings us to the next point...
Quantity of Static Analysis Results
No self-respecting manager would drop 20,000 tasks in his employee’s lap, so why do we accept static analysis tools that do? A well-implemented development testing strategy will allow for the prioritization of tasks by severity and risk. Additionally, it will account for distribution of workload across the team and take advantage of individual strengths.
As I confirmed with my new friend from the convention, he would have no problem fixing 5 static analysis violations per day. If he has a team of 10, and they each fix 5 static analysis violations per day, that's 50 fixes per day against a 20,000-message backlog, so he's looking at sparkling clean code in about 400 days.
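The back-of-the-envelope math from the paragraph above, using only the figures already given in the article:

```python
# Numbers from the article: 20,000 reported messages,
# a team of 10, each fixing 5 violations per day.
total_violations = 20_000
team_size = 10
fixes_per_dev_per_day = 5

days_to_clear = total_violations / (team_size * fixes_per_dev_per_day)
print(days_to_clear)  # 400.0 working days
```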
Let’s take this a step further, though. Not all static analysis violations are created equal. If we frontload that effort with the highest risk, most severe issues, then quality, safety, and security can be reasonably addressed within 1-2 months. That’s less than one release cycle for many shops – and with negligible impact to schedule at 5 minutes per task.
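The frontloading claim can be sanity-checked the same way. The 10% figure below is my own hypothetical assumption (the article does not say what fraction of findings is critical); the team size, fix rate, and 5-minutes-per-task figure come from the article.

```python
# Assumption (not from the article): critical, high-risk findings
# are roughly 10% of the 20,000-message backlog.
total = 20_000
critical = int(total * 0.10)          # 2,000 highest-priority findings

team_fixes_per_day = 10 * 5           # 10 devs, 5 fixes each per day
days = critical / team_fixes_per_day  # 40 working days, roughly 2 months

# Article's effort estimate: 5 tasks/day at ~5 minutes per task.
minutes_per_dev_per_day = 5 * 5       # about 25 minutes of each dev's day
print(days, minutes_per_dev_per_day)
```

Under that assumption the critical backlog clears in about 40 working days, which is consistent with the 1-2 month window claimed above.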
The icing on the cake, of course, is that a mature development testing platform will automatically manage the prioritization and distribution of those static analysis tasks for you. You, the manager, set a few policies at the outset, and those policies ensure that each member of your team gets a reasonable number of the highest priority tasks each day. Everybody wins.
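As a sketch of what automated distribution might look like under the hood, here is a toy round-robin assignment capped per developer. The function name, the `(priority, id)` tuples, and the policy itself are all hypothetical illustrations, not the workings of any specific product.

```python
from itertools import cycle

def assign_daily_tasks(violations, team, per_dev_limit=5):
    """Deal the highest-priority open violations round-robin across
    the team, capped at per_dev_limit tasks per person per day.

    violations: list of (priority, task_id); lower priority number = more urgent.
    """
    batch = sorted(violations)[: per_dev_limit * len(team)]
    assignments = {dev: [] for dev in team}
    for dev, task in zip(cycle(team), batch):
        assignments[dev].append(task)
    return assignments

# Toy backlog: 20 violations with priorities 1..20.
team = ["alice", "bob"]
violations = [(p, f"V{p:03}") for p in range(1, 21)]
tasks = assign_daily_tasks(violations, team, per_dev_limit=5)
```

With two developers and a limit of 5, only the 10 most urgent violations are handed out today; the rest wait, which is exactly the "reasonable number of the highest priority tasks each day" policy described above.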
For Development Testing white papers, articles, and videos/webinars, visit our Development Testing Resource Center.
For Static Analysis white papers, articles, and videos/webinars, visit our Static Analysis Resource Center.