StarWest 2016 is now less than a week away...and based on the program, it looks like they've packed at least a month's worth of great sessions into just a few days. When there are as many as six sessions going on simultaneously, how do you decide which ones to attend?
After perusing the schedule, we want to highlight a few events that our customers and readers won't want to miss:
Capital One’s highly integrated environment creates many interdependencies for its agile teams. Because these dependencies were not being completed until late in their sprints, Adam Auerbach (@) says that Capital One faced prolonged integration and regression testing phases and did not realize expected improvements in quality or time-to-market. As technology leaders pushed for continuous delivery (CD), testing needed to shift left and occur simultaneously with development. To shift left, the testing community needed to learn basic development skills, including Ruby and Java, to take advantage of advanced automation practices, service virtualization, and the continuous integration (CI) pipeline. Adam shares Capital One’s experience implementing continuous testing and describes its core principles. He explains service virtualization and the CI/CD pipeline and why they are important concepts for testers to understand and leverage. Since continuous testing is not easy and many companies have large organizations of manual testers, Adam provides a learning map that will help organizations plan the transition.
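At its simplest, service virtualization means standing in a simulated dependency so testing can start before (or without) the real system. The following is a minimal sketch of that idea in Python, using only the standard library; the "account service" endpoint and its response are hypothetical illustrations, not Capital One's actual stack:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AccountStub(BaseHTTPRequestHandler):
    """A virtualized 'account service': canned responses let tests run
    even when the real downstream dependency isn't finished or available."""

    def do_GET(self):
        body = json.dumps({"account": "12345", "status": "active"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def start_stub(port=0):
    """Start the stub on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), AccountStub)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_stub()
    url = f"http://127.0.0.1:{server.server_address[1]}/accounts/12345"
    # Code under test can now hit the 'account service' in every CI run.
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read())
    server.shutdown()
    print(data["status"])  # prints "active"
```

Commercial service virtualization tools go far beyond this (stateful behavior, performance profiles, protocol support), but the shift-left principle is the same: the test no longer waits on the dependency.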
Panelists include the following Parasoft customers: Ryan Papineau, Alaska Airlines | Adam Auerbach, Capital One | Mike Puntumapanitch, CareFirst | Sujal Dalia, Comcast
In this panel discussion, IT leaders from Alaska Airlines, Capital One, CareFirst, and Comcast share insights on the critical organizational changes required to transform testing for DevOps. In addition to profiling the application of key transformational technologies such as service virtualization, API testing, and development testing, the panel will explore:
- Challenges associated with transforming testing
- Process changes needed to achieve speed
- Critical infrastructure technologies required to support speed
- Crucial skill sets needed for both developers and testers
The panelists' diverse DevOps journeys have been driven by a number of different catalysts:
- Outpacing aggressive competition in the financial industry
- Achieving top ratings for customer satisfaction in the airline industry, where any glitch now makes instant headline news
- Responding to multiple waves of digital disruption—while controlling costs—in a highly regulated non-profit organization
- Accelerating time to market for highly complex (and performance-critical) distributed systems
No longer just a futuristic concept, the Internet of Things (IoT) has a strong presence in our world even today. If your business is not prepared for it, you’re already behind. With the proliferation of connected “things”—devices, appliances, cars, and even clothes—Jennifer Bonine (@) says that the stage is set and IoT apps are here to stay. Testing, product management, and development teams must address developing and testing in this paradigm. Testers, accustomed to traditional platforms, are now asked to test on more complex devices and more advanced platforms. Testers must keep up with the demand for new skills, new strategies, and an entirely new set of knowledge for testing IoT apps. IT organizations must master the new skills, tools, and architectures required by the IoT. Jennifer reviews where we are today and explores the challenges that IoT and increased complexity pose to our testing profession. See what tools are available to aid IoT testing, what to consider and plan for when testing, how other organizations are preparing, and the new skills testers need.
How did Alaska Airlines receive J.D. Power's “Highest in Customer Satisfaction Among Traditional Carriers” recognition for 9 years in a row, the “#1 on-time major North American Carrier” award for the last 5 years, and "Most Fuel-Efficient US Airline 2011-2015"? A large part of the credit belongs to their software testing team. Their industry-leading, proactive approach to disrupting the traditional software testing process ensures that testers can test—faster, earlier, and more completely.
Join this session to hear Alaska Airlines Automated Test Engineer Ryan Papineau share how they used advanced test automation in concert with service virtualization to rigorously test their complex flight operations management application, which interacts with myriad dependent systems (fuel, passenger, crew, cargo, baggage, aircraft, and more). The result: operations that run smoothly—even if they encounter a snowstorm in July.
- Get a technical account of how Alaska Airlines leverages service virtualization to address common testing challenges
- Explore the ABCs of Service Virtualization: Access, Behavior, Cost, and Speed
- Learn how to leverage service virtualization with Docker and the cloud (Azure, AWS, Google)
- Understand how technologies like service virtualization, automated API testing, and test environment management enable DevOps and Continuous Testing
Although the idea of doing performance testing throughout the software lifecycle sounds simple enough, as soon as you try to combine the concepts of “always testing” (in dev, pre-prod, and production) with “limited time and resources” and throw in the word “comprehensive,” the challenges can be monumental. Quickly the “how” of it emerges as the most important question—and one worth focusing on. Brad Stoner tackles this topic by explaining how he has been able to solve this seemingly impossible puzzle by applying various approaches such as early and often, learning when to say no, and seriously, I did say no—and more. Brad shares concrete examples of how he has successfully implemented full lifecycle performance testing at several companies. Join Brad to learn what performance tests to run at each development and delivery stage—from a simple load profile on a single server to full-scale soak tests over several days.
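The "simple load profile on a single server" end of that spectrum is small enough to sketch. Below is a minimal fixed-concurrency load test in standard-library Python; the tiny local endpoint exists only to make the example self-contained, and in practice the URL would point at the system under test (real tools like JMeter or Gatling handle ramp-up, think time, and reporting):

```python
import statistics
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Echo(BaseHTTPRequestHandler):
    """Placeholder system under test; replies 'ok' to every GET."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass

def run_user(url, samples, requests_per_user):
    # One "virtual user": sequential requests, each latency recorded.
    for _ in range(requests_per_user):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        samples.append(time.perf_counter() - start)

def load_test(url, users=5, requests_per_user=10):
    """A minimal fixed-concurrency load profile: `users` parallel threads."""
    samples = []
    threads = [threading.Thread(target=run_user,
                                args=(url, samples, requests_per_user))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return {"requests": len(samples),
            "median_s": statistics.median(samples),
            "max_s": max(samples)}

if __name__ == "__main__":
    server = ThreadingHTTPServer(("127.0.0.1", 0), Echo)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/"
    print(load_test(url))
    server.shutdown()
```

Scaling the same shape up—more virtual users, longer durations—is what turns this into the full-scale soak tests Brad describes at the far end of the lifecycle.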
Andreas Grabner maintains that most performance and scalability problems don't need a large or long-running performance test or the expertise of a performance engineering guru. Don't let anybody tell you that performance testing is too hard to practice; it isn't. You can take the initiative and find these often serious defects yourself. Last year, Andreas analyzed more than 200 applications and spotted their performance and scalability issues. He shares his performance testing approaches and explores the top problem patterns that you can learn to spot in your apps. By looking at key metrics found in log files and performance monitoring data, you will learn to identify most problems with a single functional test and a simple five-user load test. The problem patterns Andreas explains are applicable to any type of technology and platform. Try out your new skills in your current testing project and take the first step toward becoming a performance diagnostic hero.
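To make the "key metrics from log files" idea concrete, here is a sketch of spotting one classic problem pattern—the N+1 query antipattern—by counting database calls per request. The log format (`<request-id> SQL <statement>`) and the threshold are purely illustrative assumptions; real formats vary with your logging framework:

```python
import re
from collections import Counter

def sample_log():
    """Build an illustrative log: req-1 exhibits the N+1 query pattern
    (one query per order item), req-2 uses a single JOIN instead."""
    lines = ["req-1 SQL SELECT * FROM orders WHERE user_id=7"]
    lines += [f"req-1 SQL SELECT * FROM items WHERE order_id={i}"
              for i in range(40)]
    lines += ["req-2 SQL SELECT o.id, i.id FROM orders o "
              "JOIN items i ON i.order_id = o.id WHERE o.user_id=7"]
    return "\n".join(lines)

def queries_per_request(log_text):
    """Count SQL statements per request id—the key metric here."""
    counts = Counter()
    for line in log_text.splitlines():
        m = re.match(r"(\S+) SQL\b", line)
        if m:
            counts[m.group(1)] += 1
    return counts

def flag_chatty_requests(counts, threshold=25):
    """Requests issuing many queries are candidates for the N+1 problem."""
    return sorted(req for req, n in counts.items() if n >= threshold)

if __name__ == "__main__":
    counts = queries_per_request(sample_log())
    print(flag_chatty_requests(counts))  # ['req-1']
```

The point of the pattern approach is exactly this: a single functional test produces enough log data to flag req-1 as a problem, with no load-test rig in sight.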