Recently, I was reading a post on LinkedIn in which someone asked about the differences between several static analysis security vendors. One person, unsurprisingly a vendor, replied that their solution was better because while other companies focus on quality and security, their company strictly does security.
Of course, that was a ridiculous statement. And perhaps this kind of thinking is indicative of a rampant problem with application security in the industry: organizations attempting to run their security group completely separate from the rest of the SDLC (both development and testing efforts). In this model, the security team runs its own tests, mostly attempting to break the software, then feeds security bugs back to the dev team. In other words, they are attempting to test security into their code. I can assure you this is no more effective than testing quality into your code.
Sure, this kind of security testing is necessary, but it simply is not enough. While breaking the software is indeed useful, relying on it as the method of improving security means errors are found too late, and they end up being suppressed. In particular, root-cause issues such as improper frameworks and algorithms get swept under the rug, as schedule wins the conflict between rewriting the code and getting the release out the door.
In the LinkedIn comment I mentioned above, the vendor is dangerously misleading an unsuspecting prospective customer by claiming their software is somehow better, without actually saying anything useful about how or why it’s better. I don’t mean to pick on any particular tool vendor, especially since I work for one. However, I’m frustrated by straw-man arguments like this, which give the appearance of hawking snake oil. In this case, the vendor’s product may indeed have interesting unique features, but we’re left with the impression that security is somehow magically different from quality, which muddies our understanding of application security and makes us all a little less safe.
Security has to be treated like quality, and quality has to be based on mature engineering practices, because the truth is that if you have a quality problem, you have a security problem. Studies show that anywhere from 50-70% of security defects are quality defects. In other words, good old-fashioned quality bugs are turning out to be the vulnerabilities that intruders/hackers/bad actors use to penetrate your application (the ones nobody knew about yet are the dreaded “zero-days”).
“The consensus of researchers is that at least half, and maybe as many as 70% of common software vulnerabilities are fundamental code quality problems that could be prevented by writing better software. Sloppy coding.”
- Jim Bird “Building Real Software”
If you still aren’t sure how quality and security overlap, take a look at a couple of examples from the CWE Top 25. The following possible security outcomes are drawn from the CWE Technical Impact work:
- #3 CWE-120 – Buffer Copy without Checking Size of Input (“Classic Buffer Overflow”) – can lead to execution of unauthorized code or commands, possible unauthorized data access, possible Denial-of-Service (DoS)
- #20 CWE-131 – Incorrect Calculation of Buffer Size (leading to buffer overflow) – possible DoS, execution of unauthorized code or commands, possible unauthorized read/modify of memory
- #25 CWE-190 – Integer Overflow or Wraparound (leading to undefined behavior and therefore crashes) – possible DoS, possible memory modification, possible execution of unauthorized code or commands, possible arbitrary code execution
If you go further into the full CWE list (over 800 items), you find many others, e.g. all forms of overflow/underflow, improper initialization, uncontrolled recursion, etc. These are all common security attack vectors, as well as obvious quality issues.
Build it in
The complexity of software systems grows very rapidly. Trying to test every possible variation quickly becomes nearly impossible. As Richard Bender puts it, “The number of potential tests exceeds the number of molecules in the universe,” which is just a more fun way of saying you’ll never get it done. Or from Jim Bird, “for a big system, you would need an infinite number of pen testers on an infinite number of keyboards working for an infinite number of hours to maybe find all of the bugs.”
So both security and reliability have to be designed and engineered in. You can’t test them in. As long as security is something "extra" it will suffer.
What can be done?
Here are a few things you can do to start improving your software quality and security at the same time.
- Train developers in secure development. Adequately training your developers in secure development practices means they can prevent – or at least find and fix – security problems.
- Design and build your system with a deliberate focus on quality and security. Avoid code that “works” but isn’t really a good choice because it has potential security problems. (Or safety problems, for that matter.) Static analysis will help you do this by checking your code for not only bugs, but also for compliance with known best-practices.
- Stop relying on edge tools. Recognize your actual exposure and attack surfaces. Firewall and anti-virus won’t make up for insecure code – you must harden your application.
- Collect/measure defect data and use it to assess and improve your development practices. What code or components produce the most problems? What code is the best? How were they tested? Repeat the good ideas and flush the bad ones.
- Use strict static analysis. Don’t simply accept someone’s assessment that a reported defect isn’t an important issue or is a false positive. Get a good set of rules covering both detection and prevention, and live by them. The best way to do this is with an engineering approach built around best practices (the role of standards like CWE, CERT, and OWASP). Static analysis is the way to be sure those best practices are actually being followed.
- Use runtime analysis. It will find real problems (especially nasty memory problems), and it shows you exactly where and what went wrong without any false positives.
So we need to start building security into our code. This is the best way to really harden it, rather than just patching known holes. Having all your software development results – from coding, building, and testing – integrated into a central repository provides control, measurement, and traceability. That is the basis for future improvement.
And remember, the cost of solid prevention is less than the cost of dealing with bad or insecure software. So there's really no excuse.