
Unit Testing: How We Eased the Burden of Our Regression Test Process

Years ago, when Parasoft started examining our internal unit testing process, we found that developers would write functional unit tests to verify requirements as code was implemented. However, when test assertions later failed as the application evolved, the regression test failures weren't being addressed promptly.

Why It Didn't Work: Lack of Accountability

In each case, someone needed to review the failure and decide whether it was an intended change in behavior or an unwanted regression. Our open source unit test execution tools couldn't tell us which tests were failing, who was responsible, which change caused the failure, or when it started. Instead, our nightly test run often produced a report that simply said something like 150 of our 3,000+ tests had failed.

A developer couldn't tell whether his own tests had failed unless he reviewed the full list of failures, one at a time, to determine whether each was something he should fix or a task for someone else.

Attempts to Resolve Failures Faster

Initially, we tried asking all developers to review all unit test failures, but that took a lot of time and never became a regular habit. Then we tried designating one person to review the test failures and distribute the work. However, that person found the job tedious, and the assignments were not well received by the others.

A Sustainable Solution

Eventually, we built automated error assignment and distribution into our unit testing products and started using it internally. Now, assertion failures are automatically assigned to the responsible developers based on source control data. If a developer causes one or more assertion failures, he is notified via email, imports only the relevant results into his IDE, and resolves them. As a result, failures are now resolved in days instead of months.
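
To illustrate the general idea, here is a minimal sketch of how source-control-based failure assignment could work. It is not Parasoft's actual implementation: it assumes a JUnit-style XML report and a Git repository, and the file names, addresses, and paths (REPORT, TEST_ROOT, SMTP_HOST) are hypothetical placeholders. It simply maps each failing test to the developer who last touched the test file and emails each developer only his own failures.

```python
# Hypothetical sketch of automated failure assignment (not Parasoft's
# actual implementation): map each failing test to the developer who most
# recently touched the test file, then notify each developer of only the
# failures assigned to him.

import subprocess
import xml.etree.ElementTree as ET
from collections import defaultdict
from email.message import EmailMessage
import smtplib

REPORT = "nightly-results.xml"   # assumed JUnit-style XML report from the nightly run
TEST_ROOT = "src/test/java"      # assumed test source layout
SMTP_HOST = "smtp.example.com"   # assumed internal mail relay


def failing_tests(report_path):
    """Yield (classname, testname) for every failed or errored test case."""
    root = ET.parse(report_path).getroot()
    for case in root.iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            yield case.get("classname"), case.get("name")


def last_author(path):
    """Return the e-mail address of the last committer to touch `path`."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%ae", "--", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip() or "unassigned@example.com"


def assign(report_path):
    """Group failing tests by the developer who last changed the test file."""
    assignments = defaultdict(list)
    for classname, testname in failing_tests(report_path):
        test_file = f"{TEST_ROOT}/{classname.replace('.', '/')}.java"
        assignments[last_author(test_file)].append(f"{classname}.{testname}")
    return assignments


def notify(assignments):
    """E-mail each developer a list of only the failures assigned to him."""
    with smtplib.SMTP(SMTP_HOST) as smtp:
        for developer, failures in assignments.items():
            msg = EmailMessage()
            msg["To"] = developer
            msg["From"] = "nightly-build@example.com"
            msg["Subject"] = f"{len(failures)} unit test failure(s) assigned to you"
            msg.set_content("\n".join(failures))
            smtp.send_message(msg)


if __name__ == "__main__":
    notify(assign(REPORT))
```

A real system would go further, for example by also looking at recent changes to the production code under test rather than only the test file, and by tracking when each failure first appeared so regressions can be tied to a specific change.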

***

Photo credit: bazylek100
