The following is the third post in my Collaborative Lifecycle Management (CLM) Continuous Deployment series of blogs on Jazz.net. The earlier two can be found at Improving throughput in the deployment pipeline and Behind the scenes of the CLM Continuous Deployment pipeline. In Behind the scenes of the CLM Continuous Deployment pipeline, I described a mechanism we developed on the Jazz Collaborative ALM project to gather progress and status data for the phases that make up our deployment and test execution pipeline. In this post, I will show you how we use that data to report on the overall state of the pipeline and to narrow down on root causes when it fails. The data also allows us to create trend reports that help us assess the overall quality of our builds, tests and product code over time.
To reiterate briefly, the CLM Continuous Deployment pipeline creates a “track build” type work item that is associated with one and only one build result; in our case, that is the result of a CALM-All build that orchestrates our product’s build, deploy and test execution activities. The track build item is created after the product compilation and packaging stages have completed successfully and install repositories have been created for the build. It holds references to each ensuing deployment and test build run by the orchestrated build, and it records the approval or rejection status of each of these execution stages.
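To make this concrete, here is a minimal conceptual sketch, in Python, of the data a track build item carries: a link to the single product build result it tracks, its install repository, and one approval record per deployment or test stage. The names are purely illustrative; this is not the actual Rational Team Concert work-item API.

```python
# Conceptual sketch only: illustrative names, not the actual RTC work-item API.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class ApprovalState(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class StageApproval:
    stage_name: str                          # e.g. a deployment or test execution stage
    state: ApprovalState = ApprovalState.PENDING
    stage_result_uri: Optional[str] = None   # link to the stage's own build/test result

@dataclass
class TrackBuildItem:
    product_build_label: str                 # the one CALM-All build result this item tracks
    install_repository_url: str              # IBM Installation Manager repository for the build
    created_on: datetime = field(default_factory=datetime.utcnow)
    stage_approvals: list[StageApproval] = field(default_factory=list)

    def is_good_build(self) -> bool:
        """A build is 'good' only when every stage approval has been approved."""
        return bool(self.stage_approvals) and all(
            a.state is ApprovalState.APPROVED for a in self.stage_approvals
        )
```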
Since both the track build item and its approval data can be warehoused to a data repository, we can use common reporting tools, such as Eclipse Business Intelligence and Reporting Tools (BIRT), IBM Rational Reporting for Development Intelligence (RRDi) and IBM Rational Insight, to create interesting reports that can be shared on a Jazz project or team dashboard. All reports referenced in this post were created using BIRT’s report designer capabilities against data warehoused from our self-hosted Jazz repository. Thanks a ton to our in-house reporting guru, Rafik Jaouani, for helping us develop these in a short period of time!
WHAT’S A GOOD BUILD?
Despite our sincere efforts to reduce barriers of communication, we often find our development and test community asking a very basic question: Which CLM build should I use? Using the CLM continuous deployment pipeline, not only can we easily provide an answer to this question, we can also do it automatically through a dashboard widget. We can also derive answers to other interesting questions, such as “How many times this week/month/milestone did we produce a good CLM build?” or “How long has it been since we produced a good CLM build?” With the help of the track build items, which continuously gather deployment and test approval data for each execution stage in the pipeline, we are able to construct the following report:
This report is also available on our team dashboard under “Pipeline Statistics – Last 30 Days”.
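Conceptually, the widget answers these questions by scanning the warehoused track build items for fully approved ones. The sketch below reuses the hypothetical TrackBuildItem from earlier to show the idea; in practice, the report is a BIRT query against the data warehouse rather than code like this.

```python
# Illustrative only: the real report is a BIRT query over the warehoused approval data.
from datetime import datetime, timedelta

def latest_good_build(track_items):
    """Which CLM build should I use? The newest fully approved track build."""
    good = [t for t in track_items if t.is_good_build()]
    return max(good, key=lambda t: t.created_on) if good else None

def good_builds_in_window(track_items, days=30):
    """How many times in the last N days did we produce a good CLM build?"""
    cutoff = datetime.utcnow() - timedelta(days=days)
    return sum(1 for t in track_items if t.created_on >= cutoff and t.is_good_build())

def days_since_good_build(track_items):
    """How long has it been since we produced a good CLM build?"""
    newest = latest_good_build(track_items)
    return (datetime.utcnow() - newest.created_on).days if newest else None
```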
Following the reference to a track build item captured by the report widget above, our development and test team members can navigate to its corresponding product build results page and, from there, to the published IBM Installation Manager repository. Using the repository link, they can install the client/server bits produced by this thoroughly tested build on a host system of their choice. Alternatively, they can choose one of the pipeline-supported mechanisms for installing it in a cloud or virtual environment against a recommended system topology.
WHO’S BRINGING IN THE DONUTS?
It would be naïve to assume that every run of the CLM pipeline produces a build that is good for consumption by the wider teams. On the contrary, there is always something or other that makes an execution stage of the pipeline fail. As I mentioned earlier, any time an execution phase in the pipeline fails, its related approval record is marked as “rejected” in the track build item. This build breakage data, gathered as rejections on the approval records, shows up on our monthly operations review reports as below:
The most recent copy of this report is available under “MOR Reports – August 2013” on our team dashboard.
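The logic behind these failure charts is a simple aggregation: count the rejected approvals per execution stage over the reporting window. Here is a hedged sketch, again building on the hypothetical TrackBuildItem and ApprovalState objects above; the real charts are produced by BIRT from the warehoused data.

```python
# Illustrative aggregation; not the pipeline's actual reporting query.
from collections import Counter
from datetime import datetime, timedelta

def rejections_by_stage(track_items, days=30):
    """Count rejected stage approvals per stage name over the last `days` days."""
    cutoff = datetime.utcnow() - timedelta(days=days)
    counts = Counter()
    for item in track_items:
        if item.created_on < cutoff:
            continue
        for approval in item.stage_approvals:
            if approval.state is ApprovalState.REJECTED:
                counts[approval.stage_name] += 1
    return counts  # maps each deployment/test stage to its rejection count
```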
The charts shown above indicate the number of times the pipeline’s deployment and test execution phases failed in the last 30 days. A further breakdown allows us to narrow down on the phases that fail consistently or periodically:
A further breakdown by time is also possible, in which we show a success/failure trend on an instance-by-instance basis. Here is an example that shows when our CLM builds passed or failed the basic build verification/smoke tests over the last 30-day period:
For the month of August 2013, a few other trend reports like this are available in two locations on our team dashboard.
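Behind such a trend chart is just one pass/fail data point per pipeline run for the stage in question. A sketch of the same idea, with the stage name and data shapes purely illustrative and reusing the hypothetical objects from earlier:

```python
# Illustrative only: one (build label, passed?) point per pipeline run for a given stage.
from datetime import datetime, timedelta

def stage_trend(track_items, stage_name, days=30):
    """Chronological pass/fail points for one execution stage over the last `days` days."""
    cutoff = datetime.utcnow() - timedelta(days=days)
    points = []
    for item in sorted(track_items, key=lambda t: t.created_on):
        if item.created_on < cutoff:
            continue
        for approval in item.stage_approvals:
            if approval.stage_name == stage_name:
                points.append((item.product_build_label,
                               approval.state is ApprovalState.APPROVED))
    return points  # e.g. [("<build label>", True), ("<build label>", False), ...]
```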
OUR NEXT STEPS
These reports give us a very accurate view of the quality of our product and automated test code. Coupled with the number of product and test defects found in the pipeline, they let us easily determine whether we are doing (or not doing) due diligence in breaking down high-risk feature items, reducing the cross-application impact of new features, communicating and quickly adopting late-breaking changes, and running sufficient pre-integration tests. These execution reports also help us identify gaps in our automated tests and give us a fair assessment of the most problematic areas of the product, areas with little or no automated test coverage, and the root causes of frequent failures (infrastructure, test or product).

Since these charts are published on a team dashboard, they are always visible to all our teams. We are working closely with our development teams to reduce the number of failures due to product issues. We are also working with our test teams to stay on top of test issues affecting our pipeline process and are collaborating with them to bring in additional automated test frameworks that validate orthogonal features of the products not covered today. Finally, we are dead serious about owning and providing a quick turnaround for every failure encountered by our continuous delivery process, thereby giving you a much better experience of using our products.