A DevOps transformation

The journey of a system test organization in transforming itself into a continuous delivery, DevOps model

by Mario Maldari and Albert Tabachnik

In a DevOps Continuous Engineering environment, with accelerated timescales, it is ever more important to focus testing efforts on those features and platforms that are most critical to your customers. The Collaborative Lifecycle Management (CLM) system test organization challenged itself to transform to meet the demands of our business and our clients. We started off by analyzing where we were investing our time and shifting our focus to the areas that needed it most. We optimized and standardized on a set of “Golden Topologies” that represent a core set of topologies used by our customers. We invested in our automated deployments, effectively streamlining our server setup and deployment process. We moved to a “solution test” model and streamlined our resources and test scenarios. Once the foundation of our transformation was laid down, we began automating our scenarios and running them daily in the pipeline, freeing testers to focus on other areas. Overall, our transformation has allowed us to better react to how our development teams operate, and to align our testing with how our customers deploy and use our solution.

Chapter 1: Take a Step Back and Analyze Where You Are Spending Your Time. Make Changes!

We started by challenging ourselves to examine and scrutinize our process, our testing, and our pain points. The first thing we did was “take a step back” and look at the bigger picture. We asked ourselves, “What are we spending our time on, and why?” Our management and senior staff encouraged the team to shake off old habits, examine what we were doing with fresh eyes, and ask whether we still needed to do things the same way we always had. It is easy to keep doing what you have done in the past because it is familiar; however, in our dynamic environment, we must continually examine how we work and look for ways to improve. This was the cultural shift we were trying to achieve within the team. As part of this analysis, we asked the following:

• How much time am I spending on process activities, status reporting, test plans, test case and scenario writing, and so on?
• How can I consolidate?
• How can I reduce the resources I’m spending on those non-technical, low-value areas?

Change 1:

Early assessments showed that we were spending 60% of our time setting up and configuring our complex test environments, with only 40% allocated to testing. Each product team was setting up its own environments in isolation. It was obvious we could improve! As a result, we invested time in pattern automation as well as deployment automation. Four times each month, the CLM system verification test (SVT) team sets up its golden topology environment. This used to take roughly 30 hours; it now takes 3, saving us 27 hours each time. And those 3 hours are unattended and fully automated, so the SVT engineer can work on other activities while the deployment is running. This proved invaluable for us, and ultimately allowed us to consolidate down to one shared test environment that is centrally maintained. In addition, we invested in automated data generation techniques that allowed us to better simulate customer environments, both in volume and in complexity of data. We now have one shared test environment that is scalable and deployed with very little manual intervention. Time is now spent on testing, rather than on configuration and setup.
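To make the unattended deployment concrete, here is a minimal sketch of the kind of driver such automation might use: install each application in the topology, then poll until everything reports ready. The topology layout, install command, and status URLs below are hypothetical stand-ins for our internal pattern and deployment tooling, not the actual implementation.

```python
"""Minimal sketch of an unattended topology deployment driver.

All names here (topology layout, install command, status URLs) are
hypothetical stand-ins for internal deployment tooling.
"""
import subprocess
import time
import urllib.request

# A "golden topology": which applications go on which hosts (illustrative).
TOPOLOGY = {
    "jts.example.com": ["jts"],
    "ccm.example.com": ["ccm"],
    "qm.example.com": ["qm", "rm"],
}

def deploy(host: str, apps: list) -> None:
    """Kick off a silent, unattended install of each app on the host."""
    for app in apps:
        subprocess.run(  # stand-in for the real deployment command
            ["ssh", host, f"/opt/deploy/install.sh --app {app} --silent"],
            check=True,
        )

def wait_until_ready(host: str, app: str, timeout_s: int = 3 * 3600) -> None:
    """Poll the app's status page until it responds, or time out."""
    url = f"https://{host}/{app}/status"  # hypothetical health endpoint
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=10):
                return  # server answered; app is up
        except OSError:
            time.sleep(60)  # not up yet; keep polling
    raise TimeoutError(f"{app} on {host} not ready within {timeout_s}s")

if __name__ == "__main__":
    for host, apps in TOPOLOGY.items():
        deploy(host, apps)
    for host, apps in TOPOLOGY.items():
        for app in apps:
            wait_until_ready(host, app)
    print("Golden topology deployed; environment ready for testing.")
```

Because a driver like this is fire-and-forget, a tester can start it and return to other work; a failed install or an application that never comes up fails the run loudly instead of silently.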

Change 2:

To streamline our process further, we examined all of the platforms and configurations that we had tested in the past. It was clear this was taking up significant time and spreading our coverage thinly across many platforms. The value of covering all of these platforms was not clear, and the cost was plainly high. As a result, we explored standardizing on one set of “Golden Topologies” that would ultimately serve as the configurations we would propose to our customers. We would run the majority of our testing on these configurations, ensuring their quality and streamlining our testing.

Chapter 2: Changes to Our Test Approach. Add Transparency!

Another area of consolidation that became apparent early on was our product-based approach to testing. We had teams built up in silos, one per product, each with its own test lead, test plan, and test approach. We decided to reorganize ourselves around how our customers deploy and use our products. We transformed ourselves into a “solution”-based test team, focused on customer-inspired end-to-end scenarios. This allowed us to consolidate our test artifacts and approach into a unified model. We now have one centralized test plan and seven key solution scenarios. This has streamlined our reviews, our test report metrics, and our overall process.

To achieve the consolidation mentioned above, we had to look very closely at the value of the tests we were executing and put priorities in place. We looked at tests that had not found many defects, and at areas that had not changed in a long time, and weighed these against new, high-risk areas. We then took our proposals to stakeholders and executives to validate our approach and our priorities. This allowed us to focus on the important areas based on risk, rather than taking the approach of “run everything, all of the time”, which is simply not feasible in our environment, especially when priorities are constantly shifting. We continue to analyze the tests we run and base our decisions on risk and priority. The important piece is being transparent about this analysis and working with our stakeholders to validate our trade-offs and priorities. This is key to keeping our testing streamlined and as efficient as possible.
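As an illustration of how such a weighing might look, the sketch below scores test areas by historical defect yield, recent code churn, and assessed risk. The fields and weights are hypothetical; in practice this is a judgment call that we validate with stakeholders, not a formula.

```python
"""Sketch of a risk-based test prioritization heuristic (illustrative only).

The fields and weights are hypothetical; the real analysis is a judgment
call validated with stakeholders, not a single formula.
"""
from dataclasses import dataclass

@dataclass
class TestArea:
    name: str
    defects_found_last_release: int  # historical defect yield
    days_since_code_change: int      # stability of the area under test
    risk: int                        # assessed business risk, 1 (low) to 5 (high)

def priority(area: TestArea) -> float:
    """Higher score = run earlier and more often; lower = candidate to trim."""
    yield_signal = min(area.defects_found_last_release, 10) / 10
    churn_signal = 1.0 if area.days_since_code_change < 30 else 0.2
    return 0.3 * yield_signal + 0.3 * churn_signal + 0.4 * (area.risk / 5)

areas = [
    TestArea("legacy report export", defects_found_last_release=0,
             days_since_code_change=400, risk=1),
    TestArea("new plan editor", defects_found_last_release=7,
             days_since_code_change=3, risk=5),
]
for a in sorted(areas, key=priority, reverse=True):
    print(f"{a.name}: {priority(a):.2f}")
```

The output simply orders the areas; the real value is in making the inputs to that ordering visible so stakeholders can challenge the trade-offs.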

Chapter 3: Automating our Scenarios and Running them in a Continuous Delivery Pipeline

As part of driving efficiencies within the team, we have been working to increase our automation bucket in the continuous delivery pipeline. We identify tests that are manual, time-consuming, and reasonably stable, and work with the automation team to automate them. This reduces risk by finding defects sooner and more often, and ultimately lets us shift resources to other high-priority, changing areas that need manual test focus. We continue to look for candidates that can be automated so that we can build up our testing in the pipeline and gain efficiencies as a result. Our automation team has built a framework that allows us to create new tests and steadily automate more of our suite. We have formalized a process in which test architects log work item requests in Rational Team Concert (RTC); the automation team reviews these requests and prioritizes them with the help of the CLM test architects. Using RTC planning, we then schedule these tasks across the sprints of a given release.
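For a rough sense of what the daily pipeline stage does conceptually, the sketch below runs each automated scenario in the bucket and fails the stage if any scenario fails, so defects surface daily. The scenario names and the runner module are hypothetical placeholders for our automation framework’s entry points.

```python
"""Sketch of a daily pipeline stage running the automated scenario bucket.

Scenario names and the runner module are hypothetical stand-ins for the
automation framework's real entry points.
"""
import subprocess
import sys

# The automated "bucket": end-to-end solution scenarios promoted from
# manual testing (names are illustrative).
SCENARIOS = ["plan_to_deploy", "defect_roundtrip", "requirements_trace"]

def run(scenario: str) -> bool:
    """Invoke the framework's runner for one scenario; True on pass."""
    result = subprocess.run(
        [sys.executable, "-m", "svt_framework.run", scenario],  # hypothetical module
        capture_output=True, text=True,
    )
    print(f"{scenario}: {'PASS' if result.returncode == 0 else 'FAIL'}")
    return result.returncode == 0

if __name__ == "__main__":
    results = [run(s) for s in SCENARIOS]
    # A failing scenario fails the pipeline stage, so it gets attention now,
    # not at the end of the release.
    sys.exit(0 if all(results) else 1)
```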

During our initial assessment of testing pain points, we identified Media Verification Testing (MVT) as an area that was not only time consuming, but labor intensive and completely manual. Testers would need to identify and secure hardware, and then run multiple tests covering web images as well as download files. This manual testing was complicated by builds being delivered late in the week, forcing testers to work through the weekend. Oftentimes, at the end of a release, problems were found and another golden master was needed, forcing the team to run multiple, time-consuming executions of MVT. It was clear that automation would provide a significant return on investment in this case.

We invested one week of dedicated work to automate this testing. What used to take four days has been reduced to one hour: testers can kick off the tests on demand, wait an hour, and then check the results. This has provided tremendous value to the team, allowing us to focus on other high-priority areas. Equally important, the team now has their weekends back!
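For a sense of what automated media checks can involve, here is a minimal sketch that verifies download files against a build manifest by presence, size, and SHA-256 checksum. The manifest schema and paths are hypothetical, and the real automation also covers the web image tests mentioned above.

```python
"""Sketch of automated media verification (illustrative).

Checks each deliverable in a build manifest for presence, expected size,
and SHA-256 checksum. Manifest schema and paths are hypothetical.
"""
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream the file in 1 MiB chunks to compute its SHA-256 digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(media_dir: Path, manifest_path: Path) -> list:
    """Return a list of human-readable failures; an empty list means PASS."""
    failures = []
    manifest = json.loads(manifest_path.read_text())
    for entry in manifest["files"]:  # hypothetical manifest schema
        f = media_dir / entry["name"]
        if not f.is_file():
            failures.append(f"missing: {entry['name']}")
        elif f.stat().st_size != entry["size"]:
            failures.append(f"size mismatch: {entry['name']}")
        elif sha256(f) != entry["sha256"]:
            failures.append(f"checksum mismatch: {entry['name']}")
    return failures

if __name__ == "__main__":
    problems = verify(Path("/build/media"), Path("/build/manifest.json"))
    for p in problems:
        print(p)
    print("MVT PASS" if not problems else f"MVT FAIL ({len(problems)} problems)")
```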

Conclusion

The actions listed above have transformed our thinking and our organization into one that is focused on making decisions based on priority and on gaining test efficiencies. It is not easy to change a mindset and a culture that have been in place for many years. The key is constant analysis of your practices, and the evolution that follows is what allows you to grow. Challenging the team to reflect on their activities, encouraging them to suggest changes, and empowering them to speak up about areas they felt could be improved enabled us to make these changes. These types of changes must come from within.

Fostering an environment of trust and positive change in our organization has allowed us to make these improvements, and has put us in a position to continue to grow and evolve to meet our clients’ needs moving forward.