With the need for speed driving continuous integration, continuous delivery, and continuous release, organizations across industries are experiencing a rising rate of regressions, integration errors, and other defects. Accelerated delivery is great—unless you end up placing the business at risk because your testing efforts just can't keep pace.
Watch the on-demand How to Avoid Continuously Delivering Faulty Software webinar (co-hosted by Perforce and Parasoft) to explore best practices for reducing the risks associated with Continuous Delivery. The webinar covers:
The top 4 best practices for Continuous Delivery.
The role that "DevTest" practices like static analysis, unit testing, functional testing, exploratory testing, performance testing, and security testing play in a Continuous Delivery process.
How to more accurately answer the core questions that different team members have (Development Manager, Architect, DevOps, Developers, Testers).
You'll learn how to:
Work to a set of business expectations, not just technical specifications
Build a well-defined, optimized workflow
Get started with automating software quality
Some great follow-up questions were asked by those watching the webinar live, including...
How do we make the best use of code analyzers for a large legacy code base?
For a legacy code base, it's important to configure code analyzers so they don't generate excessive noise. Focus the rules on what's important for your project; otherwise, developers will start ignoring the results of code analysis altogether.
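One common way to cut that noise (a generic sketch, not a specific Parasoft feature) is to analyze only the files changed since a "baseline" tag, so findings in untouched legacy code don't bury findings in new work. The throwaway repo and the `echo` below stand in for your real project and analyzer command.

```shell
#!/bin/sh
# Sketch: analyze only files changed since a "baseline" tag.
# The demo repo and "echo" are placeholders for a real project/analyzer.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name  demo

echo 'int main(void) { return 0; }' > legacy.c
git add legacy.c && git commit -qm "legacy snapshot"
git tag baseline                 # findings before this point are tolerated

echo 'int helper(void) { return 1; }' > new.c
git add new.c && git commit -qm "new feature"

# List only what changed since the baseline and analyze just that.
git diff --name-only baseline..HEAD | while read -r f; do
  echo "analyze: $f"             # replace echo with your analyzer CLI
done
```

The baseline tag simply marks the point after which your team commits to keeping the analysis clean; it can be moved forward as legacy findings are paid down.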
Versioning everything sounds good in theory, but what about repository size and performance?
This depends on the version control system being used. For example, some (e.g., Perforce) can handle files of any size and type without suffering any performance penalties. For others, you might have to split repositories into smaller instances to avoid performance issues.
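For teams on Git specifically, one mitigation short of splitting repositories is a shallow clone, which keeps the working copy fast by fetching only recent history. This is a generic illustration, not something covered in the webinar; the tiny demo repo stands in for a real one, and the `file://` URL is what lets `--depth` apply to a local path.

```shell
#!/bin/sh
# Sketch: shallow clones keep working copies small when "version
# everything" makes history large. Demo repo stands in for a real one.
set -e
work=$(mktemp -d)
git init -q "$work/full"
cd "$work/full"
git config user.email demo@example.com
git config user.name  demo
for i in 1 2 3; do
  echo "revision $i" > data.txt
  git add data.txt && git commit -qm "commit $i"
done
cd "$work"

# Fetch only the newest commit; older history stays on the remote.
git clone -q --depth 1 "file://$work/full" shallow
git -C shallow rev-list --count HEAD   # prints 1 (vs. 3 in the full repo)
```

Whether this is enough depends on your workload; repositories dominated by huge binary assets may still need a purpose-built solution or a split.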
How can I coordinate both test and environment configuration at the same time?
You can do that with a test lab management solution. With Parasoft Environment Manager, for example, you can define a library of test jobs with associated environments, then provision an environment and execute the specified tests on demand. A test scenario might use one set of test data and endpoint variables for execution in a development testing environment and another in a system integration testing environment. The job execution history stores the associated test environment settings and variables along with the results, enabling complete traceability.
Developers hate code reviews, so how will using a tool help?
Reviewers can collaborate easily, whether they're together or working asynchronously; you don't necessarily need to get everyone in a room or on a call at the same time. Comments can be posted inline with the code and answered by someone else when they have time. This helps eliminate the stress of being put on the spot and expected to answer difficult questions immediately.
Another benefit is that by including automated build and test as part of the review process, you ensure that only code that has passed code analysis and unit testing is reviewed, which avoids wasting reviewers' time on code with significant problems.
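A minimal sketch of such a gate, assuming nothing about your toolchain: run each automated check in order and stop before review is requested if any of them fails. The `true` commands are placeholders for your real analyzer and test runner.

```shell
#!/bin/sh
# Hypothetical pre-review gate: code reaches reviewers only after
# static analysis and unit tests pass. "true" stands in for real tools.
set -e

run_check() {
  name=$1; shift
  if "$@"; then
    echo "PASS: $name"
  else
    echo "FAIL: $name (fix before requesting review)" >&2
    return 1
  fi
}

run_check "static analysis" true   # e.g. your analyzer CLI
run_check "unit tests"      true   # e.g. your test runner
echo "ready for review"
```

In practice this script would be wired into your CI server or review tool so the gate runs automatically on every change submitted for review.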