
Web UI test scenarios in the Continuous Delivery Pipeline


History

I work as the Automation Lead for the Function Verification Test (FVT) team in the Collaborative Lifecycle Management (CLM) project. This post details some of the efforts we are making to improve our test automation to enable continuous delivery. The CLM FVT team created the CLM Build Verification Test (BVT) scenario to quickly sanity-test CLM functionality. The scenario uses a CLM deployment composed of a Jazz Team Server and the Change and Configuration Management, Quality Management, and Requirements Management applications. You can view the scenario in depth on our wiki.

The scenario was first automated using JUnit 3 and Selenium RC1. Unfortunately, the resulting tests had too many lines of code, which made them difficult to reuse and maintain. It was also hard to understand what had happened when failures occurred in such complex tests, and nearly impossible to resume the scenario smoothly from the point of failure. Consequently, we had a lot of difficulty writing other tests, and we wasted a lot of time whenever failures occurred during a scenario run. Another issue was that Selenium RC1 routes commands through an intermediate server, which made the automated tests not entirely representative of real user actions.

Framework based on Selenium

To avoid all these issues, we rewrote the entire automated CLM BVT scenario to use the Selenium WebDriver API instead and, at the same time, we based our test implementation on the corresponding Page Objects pattern. It quickly became apparent that using this object-oriented pattern would allow us to address the maintenance and reusability issues of the existing automated code.
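To make the pattern concrete, here is a minimal sketch of what a page object can look like on top of WebDriver (the class names and element IDs are placeholders of ours, not the actual CSPF or CLM ones). Locators and raw Selenium calls live inside the page class, so when a page's layout changes, one class is fixed instead of every test:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page object for a login page. The element IDs below are
// illustrative placeholders, not the real CLM ones.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Each method models a user action and returns the page object for the
    // page the user lands on, so scenarios chain naturally from page to page.
    public DashboardPage loginAs(String user, String password) {
        driver.findElement(By.id("userId")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("loginButton")).click();
        return new DashboardPage(driver);
    }
}

// Minimal second page object so the sketch is self-contained.
class DashboardPage {
    private final WebDriver driver;

    DashboardPage(WebDriver driver) {
        this.driver = driver;
    }

    public boolean isLoaded() {
        return driver.findElement(By.id("dashboard")).isDisplayed();
    }
}
```

Returning the next page object from each action method lets a scenario read as a chain of user steps rather than a sequence of low-level browser commands.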

To address the problems of writing scenario tests and managing their execution, we also decided to create our own framework. This was a coordinated effort across all of the CLM teams to develop a common Web UI test automation framework, designed to hide the Selenium interface in one layer and the page objects in another layer built on top of it. We named this framework the CLM Selenium PageObject Framework (aka CSPF). Scenarios are implemented in an additional layer directly on top of the page objects layer. The framework has standardized and adopted best practices for Web UI automation across the CLM product teams.

We built our framework based on the following principles:

1) Robustness
CSPF has to be robust and should not raise any false errors (i.e., errors due to the automated code itself). When such problems occur, they should be rapidly addressed, and all scenarios should be able to leverage the fix immediately.
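One classic source of false errors in Web UI automation is timing: a test looks up an element before the browser has finished rendering it. A framework layer can centralize a guard like the following sketch (Selenium 2.x signatures; the helper names are our own), so that fixing a timing issue in one place immediately benefits every scenario:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Hypothetical centralized lookup: wait for the element instead of failing
// immediately, so slow page loads do not surface as false test errors.
public final class Waits {
    private static final long TIMEOUT_SECONDS = 30;

    private Waits() {}

    public static WebElement waitForVisible(WebDriver driver, By locator) {
        return new WebDriverWait(driver, TIMEOUT_SECONDS)
                .until(ExpectedConditions.visibilityOfElementLocated(locator));
    }
}
```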

2) Ease
CSPF hides the underlying frameworks (i.e., Selenium and JUnit) behind an API oriented toward the applications' web pages. Testers do not need to know how Selenium works to write and run tests. Since they already know the CLM product they want to test, looking at the corresponding web page API methods is enough to write tests (e.g., click a specific button, jump to another page, etc.).
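With such an API, a scenario step reads entirely in product vocabulary. A sketch of what a tester might write, reusing the hypothetical LoginPage and DashboardPage from above (the URL and credentials are placeholders):

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// No Selenium knowledge needed beyond starting a browser: the test body
// is just a chain of page-object calls.
public class LoginScenarioTest {
    @Test
    public void adminCanReachTheDashboard() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://clm.example.com:9443/jts"); // placeholder URL
            DashboardPage dashboard =
                    new LoginPage(driver).loginAs("admin", "secret");
            assertTrue(dashboard.isLoaded());
        } finally {
            driver.quit(); // always release the browser
        }
    }
}
```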

3) Configurable
CSPF allows testers to run the scenario in different configurations without changing anything in the code (neither in the framework nor in the scenario). For example, changing a few parameter values is enough to run the scenario in a different environment: WebSphere Application Server (WAS) or Tomcat, Google Chrome or Firefox, DB2 or SQL Server, etc. Changing the scenario data is also possible through the same scenario arguments mechanism.
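As an illustration of how such switching can work without code changes, a factory can pick the browser (and, with the same pattern, the application server or database settings) from a runtime parameter. The property name below is hypothetical, not the actual CSPF argument:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Hypothetical configuration-driven setup: the same scenario code runs
// against whichever browser the launch parameters request.
public final class DriverFactory {
    private DriverFactory() {}

    public static WebDriver create() {
        String browser = System.getProperty("test.browser", "firefox");
        if ("chrome".equalsIgnoreCase(browser)) {
            return new ChromeDriver();
        }
        return new FirefoxDriver(); // default when nothing is specified
    }
}
```

A run could then switch browsers with, for example, -Dtest.browser=chrome on the launch command, with no change to the framework or the scenario.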

4) Tolerant
CSPF is resilient when a transient error occurs during a test. When that happens, a default mechanism resumes the scenario execution without failing. The tester is of course warned of the error, but the scenario is able to run smoothly to the end.
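A minimal sketch of one such mechanism (the names are ours): retry the step a few times on a transient WebDriver error, warn the tester, and only fail once the retries are exhausted, so a momentary glitch does not abort a multi-hour scenario:

```java
import org.openqa.selenium.WebDriverException;

// Hypothetical tolerance helper: transient errors are logged and retried;
// only a persistent error is allowed to fail the step.
public final class Retry {
    private Retry() {}

    public static void step(Runnable action, int attempts) {
        WebDriverException last = null;
        for (int i = 1; i <= attempts; i++) {
            try {
                action.run();
                return; // the step succeeded, possibly after a retry
            } catch (WebDriverException e) {
                last = e;
                System.err.println("Transient error, attempt " + i + " of "
                        + attempts + ": " + e.getMessage());
            }
        }
        throw last; // retries exhausted: report the real failure
    }
}
```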

5) Helpful
CSPF helps testers when failures occur due to a product issue. Testers can decide globally whether a scenario should stop as soon as an error occurs, but they can also decide this for each step or test. CSPF also allows resuming execution from the failure, for example after a workaround has been applied manually. If the scenario has continued past the failure, it allows running the single failing test again, e.g. for debugging purposes.
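One way to picture this behavior is a runner that either halts at the first failing step or records it and carries on, keeping the failed step's name so it can be replayed alone later. All names in this sketch are ours, not the CSPF API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical scenario runner: stop-on-failure is a single global choice,
// and every failure is recorded so the failing step can be rerun by itself.
public class ScenarioRunner {
    private final boolean stopOnFailure;
    private final Map<String, Throwable> failures = new LinkedHashMap<>();

    public ScenarioRunner(boolean stopOnFailure) {
        this.stopOnFailure = stopOnFailure;
    }

    public void run(String stepName, Runnable step) {
        try {
            step.run();
        } catch (RuntimeException e) {
            failures.put(stepName, e); // remember which step to replay later
            if (stopOnFailure) {
                throw e; // halt the whole scenario right here
            }
            System.err.println("Step '" + stepName + "' failed, continuing: " + e);
        }
    }

    public Map<String, Throwable> failures() {
        return failures;
    }
}
```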

6) Repeatable
CSPF offers ways to know whether data already exists, so testers can decide what to do when creating data in their scenario: either resume execution or fail. For example, the CLM BVT scenario has been written so that it can run even if it has already been run against the tested CLM server. This is particularly interesting for checking that data stays clean during our complex upgrades.
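The underlying idiom is simple: query for the data before creating it. A sketch with hypothetical page-object types (none of these names come from CSPF):

```java
// Hypothetical page-object interfaces, for illustration only.
interface ProjectPage {}

interface ProjectAdminPage {
    boolean projectExists(String name);
    ProjectPage openProject(String name);
    ProjectPage createProject(String name);
}

final class RepeatableData {
    private RepeatableData() {}

    // Create the project only when absent, so the scenario also runs
    // cleanly against a server that kept data from a previous run.
    static ProjectPage ensureProjectExists(ProjectAdminPage admin, String name) {
        return admin.projectExists(name)
                ? admin.openProject(name)
                : admin.createProject(name);
    }
}
```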

Web UI Scenarios

We integrated CSPF test cases into the CLM continuous delivery pipeline for automated execution with each build. The framework allows new automated test cases to be plugged directly into the pipeline to take full advantage of its capabilities. We started at the end of 2012 with a single scenario (CLM BVT) containing 64 tests. Six months later, we have 11 scenarios with 489 tests!

To find out more about CSPF Scenarios, visit our wiki page.

Enabling Continuous Delivery

This is one example of how we're improving our test automation to enable continuous delivery. In this case, our specific goal is to rapidly ascertain the quality level of a build and identify any recent regressions. We need enough test coverage to determine whether the build can be more widely used and tested.

So far, four of our scenarios run through the Continuous Delivery Pipeline process. Three of them are BVT scenarios, and their execution ensures that no build is produced without the minimum quality level. Note, however, that this basic testing already represents more than 4 hours of test execution. Done manually, those tests would take around two days! With at least one CLM build per day (sometimes two or three), we obviously couldn't afford to run these tests without automation.

However, we want to go deeper into our continuous build testing, so we're working on additional validation scenarios. The Money That Matters (MTM) Sample scenario is one example of this kind of validation. This scenario, which is partially automated, already runs for each CLM build. The CLM Licensing scenario, which we're currently automating, is another example.

We're also working to increase the coverage of our test environments. For example, we currently use only the Firefox browser for our Web UI testing. We are adding Google Chrome and have already made good progress on that front; we will also add Internet Explorer. Note that this doesn't mean those browsers aren't tested at all: they are covered by extensive manual testing in both the FVT and System Verification Test (SVT) teams and throughout the development teams. By adding coverage for those browsers in the CLM BVT, we can identify browser-specific issues earlier. We are also looking at running our scenarios in more complex environments (such as clustered, distributed, renamed, or staging).

Are you investing in test automation to improve the throughput of your deployment pipeline? If so, we'd love to hear about your experiences in the comments on this post.

Frederic Fusier
CLM Function Test Automation Lead
IBM Rational