Having experience with different types of reporting and status updates, Brian Lang, the Collaborative Lifecycle Management (CLM) Product Delivery Leader, spent some time discussing with me some of the key productivity gains he’s noticed in using Rational Team Concert (RTC). The focus here is on the benefits of having a collaborative and transparent workflow, especially when it comes to meeting preparations. Brian shares with us the benefits of using RTC Dashboards so that the data is available, current, and updated as part of the natural development workflow.
Brian, what types of meetings have you needed to pull presentations together for, both in the past and today?
In the past, when I was using a chart-based status process leading up to the Senior VP’s Monthly Operations Review (MOR), there would be a review with my VP where we would go over the status of everything in my portfolio. This included key delivery dates, status (red, yellow, green), and any additional points that we wanted to raise to an executive level. These topics typically included key capabilities, beta feedback, or the achievements and accomplishments the team had that month. We also prepared risks and mitigations. Each product had its own chart.
Additionally, we covered other things like burn down charts, user stories, velocity, story points, and month-to-month trends. We also covered technical debt in the form of APARs and defect backlogs, which rounded out the reviews.
Now, in CLM, while we still have MORs, the process is different even though the types of data are the same. Rather than creating a separate set of material, we bake the materials into our workflow through regular reviews with teams. The same sorts of materials that would have been prepared and reviewed are part of the workflow, including scrum meetings, release team meetings, and daily communications and work activities. The schedule is laid out and published for the whole team. Risks and mitigations are captured and updated in plan items. If a plan item’s risk changes from green to yellow or red, the viewlet in the dashboard will pull in that plan item, with the latest updates about actions.
What kinds of preparation do you do for these meetings?
When pulling together charts, we would work our way backwards from the review with the Senior VP. We ended up with four separate meetings to review material specific to the status update. With the CLM dashboards, we can be ready for an update at any time. It’s transparent, and anyone on the team can check on the overall status because transparency is built into the product and supports our process.
How long did it take to create the dashboards?
Viewlets are part of the RTC capabilities and some are out of the box. For example, risks and issues are available out of the box. It doesn’t take long to set up the queries against project areas. Depending on how many you need and how complex they are, it typically takes less than an hour to get them put up. Once they’re there, they are automatically updated, so you don’t have to re-create them. One example is that we have a tab on our dashboard for risks and issues. When we are tracking multiple releases, like we did with 5.0.2 and 6.0, we can separate them out while keeping them on the same tab.
Going back to collecting the data for the charts, how many people are typically involved in gathering and consolidating that information?
First, the offering team pulls together the information. They usually have a release engineer, a project manager, a development lead or chief programmer, an architect, a test lead, and a support lead, six people typically. Sometimes the first-line manager joins too, so about six to seven people gather and generate the content to be reviewed. The managers were the ones who presented the data to me; they were responsible for the content being presented.
How many teams did you have for a typical monthly status review?
When I was doing chart-based reporting, I had about three key teams. This meant that 18-21 people were preparing for the meeting with me.
How long does it take to pull that information together for each team?
It takes each person approximately two to three hours to verify and answer questions like: Are we green, yellow, or red? How many test cases did we run? What kinds of things are blocking us? That time was spent running reports and following up with people for verification. If things were not looking good, we might need more activity, up to maybe four hours. That’s about 24 hours per team. Add a 60-90 minute review in the offering team meeting, making up another nine hours of people time (33 hours), plus another three hours for turnaround on any follow-ups. We’re looking at maybe 36-40 person hours to generate that first set of data for review.
Then there is the review with me. At a minimum, the room holds the three managers and maybe someone from their team if there is a key technical issue to discuss, so five to six people for another 90 minutes, making up another nine to ten hours. Then another set of questions, and we’re up to 45-50 hours. We have another review including me and another person or two, the support and development leads, for another hour. After all of that, it adds up to about 60-65 person hours to get ready for the Sr. VP MOR.
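Brian’s person-hour arithmetic can be sketched as a quick tally. The stage ranges below are the figures quoted in the interview; the `STAGES` table, the follow-up high end of 7 hours, and the `total_hours` helper are assumptions of this illustration, chosen so the sums line up with his stated running totals.

```python
# Hedged sketch tallying the chart-era person-hour figures Brian quotes.
# Each entry is (description, low, high) in person hours; ranges are from
# the interview, with the follow-up high end assumed for illustration.

STAGES = [
    ("per-team data gathering and verification", 24, 24),
    ("offering-team review (60-90 min, ~6 people)", 9, 9),
    ("turnaround on follow-ups", 3, 7),
    ("review with the portfolio leader (5-6 people, 90 min)", 9, 10),
]

def total_hours(stages):
    """Sum the low and high ends of each stage's person-hour range."""
    low = sum(s[1] for s in stages)
    high = sum(s[2] for s in stages)
    return low, high

low, high = total_hours(STAGES)
print(f"Running total after the manager review: {low}-{high} person hours")
# Matches Brian's stated 45-50 at this point; the remaining follow-up
# questions and the final one-hour review bring the full cycle to his
# rough bottom line of 60-65 person hours per monthly MOR.
```

The point of the tally is not precision; it is that every stage multiplies people by meeting time, so each extra review layer adds roughly ten person hours to the monthly cycle.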
Talk a little more about how the preparation differs.
The preparation is baked into the process. I update the executive status on the dashboard weekly as part of my preparation for the ALM release team weekly meeting. That is where I update my overall take on the release. If we’re yellow, I share why I think we’re yellow and what we’re doing to mitigate the risk. It takes about 15 minutes for me to update that and it’s part of my workflow. The release team meeting is where we talk about the release and where we need to focus. It’s less about status and more about what the team needs to work on accomplishing that week.
The chart-based reporting teams were also working on those things, but they needed to document it outside their natural process. How they met and communicated was a key difference because these types of updates were not part of their daily or weekly workflow.
It’s hard to do an apples to apples comparison because they are such different ways of working. In the CLM world, everyone can see when risks and issues are updated. In the chart world, the risks and issues were only seen by about 10% of the team. If only a subset of the team understands the risk, it’s hard to go to a newly hired developer and ask about the work they were doing on risk “X” because they might not know what you’re talking about, or know about it in the same context. Also, the “man hours” involved may be similar with the dashboards because the whole team is involved in updating issues regularly, but it’s a natural part of the workflow so it doesn’t feel like anyone on the team is spending time gathering status.
We use Jazz.net to build Jazz.net, using the dashboards to drive our work. Once a piece of work is done, we can all see our prioritized backlog and pull off the next item to work on. The collaborative nature of the design makes transparency, priorities, and risk part of the process, versus PowerPoint charts that only a select number of people see.
Most meetings result in follow up actions, how are actions from the meetings captured?
Actions in the chart world would have been tracked in an email. I’d get an email saying that my VP met with the Sr. VP and here are the two to three things to work on based on the review. Then I’d assign the work and follow up. If my VP didn’t remember, or lost the email, the actions fell through the cracks. We’d report on those actions through email and follow-ups in status meetings or 1x1s.
In the RTC world, actions are captured as work items. We review the actions to pursue, which are tracked and prioritized with other work items in RTC. So, if anyone has any questions, they can refer back to the work item. There is a live audit trail: when it was opened, who opened it, who owns it, what has happened, etc. It becomes part of the workflow.
I can also add that in test we have major productivity gains from dashboard usage linked to test plans. The dashboards help eliminate the manual chart compilation work, and results are kept live and up to date, which is imperative as our releases and test cycles get shorter.
I’d like to thank Brian for taking the time to discuss this topic. Using RTC for dashboards enables a natural development workflow supporting open communication and transparency while reducing overhead for the organization. Understanding the priority of the work, the status of the work, and the risks of the work all helps keep teams focused. Making status part of the natural development workflow increases productivity. I’m not sure who to attribute this quote to: “If it hurts, do it more often.” But it has definitely reduced stress and overhead in this context!
Program Director and Rational DevOps Protagonist