Unify your test automation for continuous delivery

This is the second article about testing in our Continuous Delivery series, in which we share practices we have adopted in our own development team within Rational Team Concert (RTC). The presented techniques are used to control software evolution and to improve the quality of our code base. Since we use a variety of IBM and open source tools, we focus here on the practices and techniques rather than on particular tools.

Automated Tests In Continuous Delivery

A comprehensive suite of automated tests in your continuous integration builds provides you with confidence that newly created bits are working as expected. Otherwise, a “green” build only means that your software was compiled and assembled, and a significant amount of manual testing is needed to establish that confidence.

There are many kinds of automated tests. Some of them run fast and do not require much infrastructure. Others are expensive to run and involve a significant amount of setup work. All this extra work lengthens your build time, so a comprehensive test suite seems to conflict with the requirement of fast builds. If a build takes too long, people will stop doing a full build or stop running the tests before check-in. Too many change deliveries will also take place between two continuous builds, and people may simply deliver changes less often. In summary, the overall feedback loop is lengthened and used less by your development team.

We believe that three kinds of automated tests are important to run in continuous integration builds: unit tests, integration tests, and acceptance tests. Unit tests exercise small pieces of a software system and run fast without starting the whole system. They do not require a database, a filesystem, or network connections to a server. Integration tests exercise the behavior of several components together, and often do require a database, a filesystem, and a running server. They have longer run times, but do not run the software in a production-like environment. Acceptance tests check that your software system meets the acceptance criteria decided by the business, including both functional and non-functional (capacity, security, performance, etc.) characteristics. They can take a long time to run and should run the whole application in a production-like environment. With these three sets of tests, you should have a high level of confidence that none of the newly introduced changes breaks any functionality.

Building a comprehensive suite of automated tests can be difficult and expensive for various reasons. Existing classes may have implicit dependencies, inaccessible state, or require a complex context. Some of them are hard to instantiate in new contexts, and others have no side effect that can be observed from the outside. APIs for creating artifacts work differently on the model, client, server, and UI layers. Sandboxing of test data can be difficult, so some test executions interfere with subsequently run tests. Often this leads to the common mistake of writing integration tests first, without differentiating the individual testing goals. All in all, the creation of new tests can be challenging and significantly more expensive than implementing the actual feature.

An API For Automated Tests

In RTC, most new features require changes on several layers, and thus knowledge of how to write tests in each of them. We wanted every new contribution to our code base to be accompanied by tests that cover the change. To this end, we developed a fluent builder API that unifies how tests are implemented and run, how test artifacts are created and managed, and how developers can explore the API available in each layer of RTC. The key features of our testing API can be summarized as follows:

  • Unified API as single entry point:
    A testing API is available for every application layer. It allows developers to write tests with different purposes in a very similar way. The API uses a fluent builder approach for artifact creation, UI navigation, and execution control. A special field, the ‘$’ character, is surfaced in test classes on every layer as a single exploration point. Available functionality can be explored and learned quickly.
  • Smart default values for concise test code:
    The builders follow a functional approach with smart default values. Every builder accepts either a value or a builder for that value; value dependencies are resolved behind the scenes, and all artifact creation can be nested into a single method call (see the sketch after this list). When creating test artifacts, you do not need to specify a value unless you want to assert it later. This way, very little code is needed to create test artifacts and assert certain values.
  • Unified test execution with hidden setups:
    In every application layer we provide a common base class for all tests. This class not only provides access to the builder API, but also hides the setup required to run a test case. For example, in the UI layer the whole Selenium setup, including the API to communicate with it, is hidden from UI tests. In other cases, such as the Web client layer, a special test runner hides all the specific requirements for running a test within a JavaScript VM. All of this is hidden, and the different kinds of tests can be run together in the same suite from within the code editor with a single click.
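
To illustrate the "value or builder" idea with smart defaults, here is a minimal, self-contained sketch. All names (BuilderSketch, WorkItemBuilder, ProjectAreaBuilder) are illustrative and not the actual RTC testing API:

public class BuilderSketch {

  static class ProjectArea {
    final String name;
    ProjectArea(String name) { this.name= name; }
  }

  static class WorkItem {
    final ProjectArea projectArea;
    final String summary;
    WorkItem(ProjectArea projectArea, String summary) {
      this.projectArea= projectArea;
      this.summary= summary;
    }
  }

  static class ProjectAreaBuilder {
    private String name= "Generated Project Area"; // smart default
    ProjectAreaBuilder name(String name) { this.name= name; return this; }
    ProjectArea build() { return new ProjectArea(name); }
  }

  static class WorkItemBuilder {
    private ProjectAreaBuilder projectArea= new ProjectAreaBuilder(); // default dependency
    private String summary= "Generated Summary";                      // smart default

    // Accept a ready-made value ...
    WorkItemBuilder projectArea(ProjectArea area) {
      this.projectArea= new ProjectAreaBuilder().name(area.name);
      return this;
    }

    // ... or a builder for that value; it is resolved behind the scenes in build().
    WorkItemBuilder projectArea(ProjectAreaBuilder builder) {
      this.projectArea= builder;
      return this;
    }

    WorkItemBuilder summary(String summary) { this.summary= summary; return this; }

    WorkItem build() { return new WorkItem(projectArea.build(), summary); }
  }

  public static void main(String[] args) {
    // Only the values we care about are specified; everything else gets a sensible default.
    WorkItem item= new WorkItemBuilder()
        .projectArea(new ProjectAreaBuilder().name("My Project"))
        .summary("existing item")
        .build();
    System.out.println(item.projectArea.name + ": " + item.summary);
  }
}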

Let’s Look At An Example

RTC is built from multiple components that depend on each other. For example, the work item component depends on the process and repository components, while the SCM and agile planning components depend on the work item component. Every component consists of a number of Eclipse plug-in projects that can contribute to different layers of RTC. In the work item component we contribute services to the server layer, libraries to the client layer, as well as a data model to the common layer, which is shared between server and client.

Revert Editor Feature

As an example, assume we have a requirement to implement a revert button for the work item editor in the Web client. The button will replace the editor’s content with the last saved state. Such a feature would need a REST service to fetch the original editor content from the repository, some client code that runs in the browser to replace the editor content, and a new button in the Web UI that surfaces the new functionality to the user. We will go over the major implementation steps and focus here on how we can write new tests in each layer to cover the code that we are planning to modify or use. A step-by-step guide of the whole implementation with more code examples can be found in the Jazz.net Wiki.

Testing Server Code

On the server we can find WorkItemRestService#getWorkItemDTO2(ParmsGetWorkItem2) to fetch a work item from the repository. Since it is untested, we add a couple of unit tests to see how the method works before we use it. The tests document how the parameter object can be used and what other services are involved. A unit test that checks whether a certain state of a work item can be fetched from the repository could be implemented as follows:

public void testGetWorkItemDTO2WithIdReturnsWorkItemStateStoredInRepository() throws Exception {
  IWorkItem expected= $.workItem().projectArea(fProjectArea).summary("existing item").stateId(UUID.generate()).build();
  when(fWorkItemServerMock.findWorkItemById(eq(expected.getId()), any(ItemProfile.class), any(IProgressMonitor.class))).thenReturn(expected);
  ParmsGetWorkItem2 parms= new ParmsGetWorkItem2();
  parms.id= String.valueOf(expected.getId());
  parms.projectAreaItemId= fProjectArea.getItemId().getUuidValue();

  WorkItemDTO actual= fTestObj.getWorkItemDTO2(parms);

  assertEquals(expected.getStateId().getUuidValue(), actual.getStateId());
}

In the first line, we create a new work item in the test's project area and set its summary and state to a certain value. This is done via the fluent builder API. This API is available on every application layer and accessible via the '$' character. The special character acts as a single entry point for exploring what the API offers on each layer. Even complex artifacts, such as iteration plans, can be created with little effort. For example, the following lines create a plan for a new iteration with id "m1" that is owned by a new team with a newly created member. All artifacts are created in a new project area, configured, and stored in the repository.

$.projectArea()
  .member($.contributor("user1").save())
  .developmentLine($.developmentLine().iteration($.iteration("iter1").id("m1")))
  .teamArea($.teamArea("team1").member($.get("user1")))
  .save();

$.plan()
  .owner($.get("team1"))
  .iteration($.get("iter1"))
  .save();

The test case also uses Mockito to mock dependencies to collaborating objects, e.g., the call to IWorkItemServer#findWorkItemById() is stubbed in the second line of the test. It makes sure that the work item we just created is returned by the work item server. Fake objects are not the focus of this article. If you are not familiar with mocking in general or with Mockito in particular, check out our article on “Unit Testing in the Real World“.
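
For context, here is a minimal sketch of how the fixture behind the test above might look. The field names match the test, but the article does not show how the real WorkItemRestService obtains its collaborators, so the wiring below is an assumption and createTestedServiceWith() is a hypothetical helper:

// Hedged sketch of the test fixture; how WorkItemRestService receives its
// collaborators is an assumption, and createTestedServiceWith() is hypothetical.
private IWorkItemServer fWorkItemServerMock;
private WorkItemRestService fTestObj;

@Before
public void setUp() throws Exception {
  fWorkItemServerMock= mock(IWorkItemServer.class);        // org.mockito.Mockito.mock
  fTestObj= createTestedServiceWith(fWorkItemServerMock);  // hand the mock to the service under test
}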

Testing Client Code

The client code for this feature runs within a browser. It calls some JavaScript code and communicates with the service via REST calls. We have two goals when testing the feature in this layer. First, we want to check whether the new client code replaces the editor content properly. This can be tested in a unit test by calling the new function in isolation. Second, we want to ensure that the client interacts with the service as expected. We will write a few integration tests that use actual data in the database and make a full round trip to the server.

Unit testing in the client layer works very similarly to what we have seen before. The same fluent builder API is available for creating artifacts. Also, a Mockito-like DSL for mocking, stubbing, and spying is provided in the Web client layer, which makes unit tests almost syntactically identical. The following unit test ensures that the method viewWorkItem() is called on the WorkItemPageController object that holds the WorkItemEditor instance.

public void testRevertOnExistingItemCallsPageControllerViewWorkItem() throws Exception {
  WorkItem workItem= $.workItem().projectArea(fProjectArea).summary("existing item").build();
  WorkItemPageController target= spy(new WorkItemPageController());

  WorkItemEditor testObj= new WorkItemEditor(new WorkItemEditorParams().isNew(false).parentController(target));
  testObj.setWorkingCopy(workItem);
  testObj.revert(null);

  verify(target, never()).newWorkItem(any(Object.class), anyBoolean());
  verify(target).viewWorkItem(any(Object.class), anyBoolean());
}

Instead of asserting a certain value, we use behavior verification (see our wiki on “How to Mock, Stub, and Spy“) on two public methods because no other side effect can be observed from the outside of WorkItemPageController. Otherwise the test should look very familiar.

Integration tests for the Web client appear a little different, but can actually be written in the very same way. The tests use asynchronous method calls to contact the server, but besides that we write very similar code here as well. In the following test we create and save a work item in the repository, change its summary, and check whether a call to the function revert() replaces the editor’s copy of the work item with the original value.

public void testWorkItemEditorRevertOnExistingItemReturnsStateFromRepository(IAsyncCallback<Void> callback) throws Exception {
  $.runAsync(new AsyncCall<WorkItem>() {
    public void invoke(CallArguments args, IAsyncCallback<WorkItem> nextStep) throws Exception {
      $.workItem().projectArea(fProjectArea).summary("existing item").save(nextStep);
    }
  }, new AsyncCall<Void>() {
    public void invoke(CallArguments args, IAsyncCallback<Void> nextStep) throws Exception {
      WorkItem workItem= args.get(0);
      workItem.setValue(WorkItemAttributes.SUMMARY, "CHANGED");
      final WorkItemPageController target= new WorkItemPageController();

      WorkItemEditor testObj= new WorkItemEditor(new WorkItemEditorParams().isNew(false).parentController(target));
      testObj.setWorkingCopy(workItem);

      args.add(target);
      testObj.revert(AsyncCallback.serviceResponseHandler(nextStep));
    }
  }, new VoidCall() {
    public void invoke(CallArguments args) throws Exception {
      WorkItemPageController target= args.get(1);
      WorkItem actual= target.getWorkItemCopy();

      assertEquals("existing item", actual.getValue(WorkItemAttributes.SUMMARY));
      assertFalse(actual.isNew());
    }
  }).execute(callback);
}

In the first async call, a work item is created and saved in the repository. In contrast to the unit tests, the builder API also stores the item in the database. The save() methods in the builders for the Web client take a callback parameter, whereas the builders for the Java client do not. The rest of the test changes the work item’s summary attribute and passes it as input to a WorkItemEditor instance. After the revert() method is called, we assert that the summary attribute in the page controller’s copy of the work item has been reverted.
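
To make the difference concrete, compare the two save() calls as they appear in the tests of this article:

// Java client layer (e.g., in a UI test): save() is synchronous and returns the stored item.
IWorkItem workItem= $.workItem().projectArea(fgProjectArea).summary("existing item").save();

// Web client layer: save() is asynchronous and hands the stored item to a callback.
$.workItem().projectArea(fProjectArea).summary("existing item").save(nextStep);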

JUnit for the Web client

You may have noticed that all client tests are written in Java, including tests for the Web client. We develop most of our Web client code in Java using our Java-to-JavaScript compiler called JDojo. Developing code for all layers and clients in the same programming language has many advantages. Besides enabling Java and JavaScript code to be written and executed in the same way, development tools, such as search and refactoring, are also available for the whole code base. Client, server, and web tests can be run together, leveraging the same infrastructure for creating test suites, navigating source code, and providing stack traces. Our Web client tests are executed with a special test runner for JUnit that loads and evaluates all JavaScript code in the Rhino script engine.
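
To illustrate the underlying idea (not the actual RTC test runner), the following minimal JUnit test evaluates a piece of JavaScript in Rhino and asserts the result; everything beyond the standard Rhino embedding API is assumed:

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.mozilla.javascript.Context;
import org.mozilla.javascript.Scriptable;

public class RhinoEmbeddingSketchTest {

  @Test
  public void evaluatesJavaScriptInsideAJUnitTest() {
    // Enter a Rhino context; a real runner would load the JDojo-compiled sources here.
    Context cx= Context.enter();
    try {
      Scriptable scope= cx.initStandardObjects();
      // Evaluate a trivial function as a stand-in for loading and running test code.
      Object result= cx.evaluateString(scope,
          "function add(a, b) { return a + b; } add(19, 23);", "sketch.js", 1, null);
      assertEquals(42, ((Number) result).intValue());
    } finally {
      Context.exit();
    }
  }
}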

Testing UI Code

In the UI layer we surface a new button to the user. We want to test whether the button is only enabled when the editor is in a dirty state and whether a click on it reverts the editor content. Automated UI tests help to cover the newly added behavior as shown to the user, an aspect that cannot be covered in the other layers. We use Selenium for automating UI tests in the Web client. For example, the following test asserts that an unsaved change to the work item’s summary attribute can be undone by clicking the “Revert” button.

public void testRevertButtonClickRevertsSummaryOfExistingWorkItem() throws Exception {
  IWorkItem workItem= $.workItem().projectArea(fgProjectArea).summary("existing item").save();
  WorkItemEditorPage testObj= $.open(workItem);
  IWorkItemEditorPresentation target= testObj.getEditorHeader().getPresentation("Summary");
  target.input("CHANGED");
  assertEquals("CHANGED", target.value());

  testObj.getEditorHeader().getRevertButton().click();

  assertEquals("existing item", target.value());
}

The test creates a work item, saves it in the repository, and opens it in the work item editor. The fluent builder API can be used again for creating the work item and saving it in the repository. The API for accessing UI elements is surfaced in the same way via the '$' field. All major UI parts are represented by page objects, which abstract away fragile navigation paths. The ability to set up a UI test without navigating through the UI to create, configure, and store the required artifacts allows for much better focus in every UI test. We have seen that UI tests become significantly faster and less fragile to changes in unrelated UI parts.

Maintainable UI tests with Selenium

UI tests can be slow and fragile. Often a lot of UI elements have to be used just for setting up a test case. We use the fluent builder API again for creating test artifacts and other setup work. All interaction with UI elements is done via the API of page objects. As a result, all UI elements are discoverable through a single point of entry (the '$' field), the test code is very concise without any need for XPath expressions, and changes in the UI only break navigation paths in a single place; there is no need to maintain the test code itself. With such a test infrastructure we have reached a point where it is affordable to write and maintain UI tests.
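
To illustrate the page object pattern itself (not the actual RTC page objects), here is a minimal sketch using plain Selenium WebDriver; the class name and CSS selectors are assumptions made for this example:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hypothetical page object: element locators live in one place, tests only call the methods below.
public class WorkItemEditorPageSketch {

  private final WebDriver driver;

  public WorkItemEditorPageSketch(WebDriver driver) {
    this.driver= driver;
  }

  // If the "Revert" button moves in the DOM, only this locator needs to change.
  public void clickRevert() {
    driver.findElement(By.cssSelector("button.revert")).click(); // assumed selector
  }

  public String summary() {
    return driver.findElement(By.cssSelector("input.summary")).getAttribute("value"); // assumed selector
  }

  public void typeSummary(String text) {
    driver.findElement(By.cssSelector("input.summary")).clear();
    driver.findElement(By.cssSelector("input.summary")).sendKeys(text); // assumed selector
  }
}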

What’s The Deal?

Software developed in large teams often spends a significant amount of development time in an unstable state. When we are working on a new feature, we are not interested in trying to run the whole application. We simply want to check in changes and perhaps run automated unit tests, but we do not want to start the whole application and test it in a production-like environment. With the presented testing approach we have established a common test bench, with a common structure in all tests, that can be developed and run in the very same way. This facilitates the writing of new tests significantly, as it allows us to reuse the knowledge of how to create test artifacts and test code across the different layers of RTC. The fluent builder API helps avoid poor coding practices and simplifies learning when creating tests in unfamiliar areas. As a direct result, we can create tests at reasonable cost with every change we deliver into our code base.

With a sufficiently comprehensive set of automated tests, the continuous integration builds give us great confidence that the newly created bits for RTC will work as expected. With every new change, we know right away whether it broke some existing functionality and can fix it immediately. With such an automated safety net in place, we are able to deliver software faster and with fewer bugs. New defects can be caught much earlier in the delivery process, when they are cheaper to fix, providing significant cost and time savings.

Jan Wloka
Developer, Jazz Tracking & Planning

3 Comments
  1. Eric Jodet October 7, 2013 @ 8:32 am

    Excellent article, Jan – thanks
    (now I have to read the other articles linked from this one :)

  2. Ralph Schoon October 15, 2013 @ 3:21 pm

    Great Article Jan, thanks for sharing!

  3. Lukas Steiger February 24, 2014 @ 6:27 am

    Hi Jan,
    Thanks for this great write up. We have developed an administration tool using the Jazz Java client APIs. We wanted to implement unit/integration tests for the API and got stuck on the question of how to test the methods which use teamRepository and other Jazz functions. I came across this entry and wanted to implement this approach, but I got stuck on the fluent builder API.
    see https://jazz.net/forum/questions/142948/how-to-unit-test-a-plain-java-api-application-using-fluent-builder-api
    It would be awesome if you can help me out
