
Using IBM UrbanCode Deploy for deployments


Our journey towards IBM UrbanCode Deploy

Our team has been responsible for deploying IBM Collaborative Lifecycle Management (CLM) on production and staging systems since 2007. On top of the production system known as jazz, we have two staging systems named BLUES and SOUL. We deploy to the staging systems weekly, and daily at the end of each sprint. Because of this repetitive work, we created a set of scripts to help us install the bytes and upgrade the systems. However, we run these scripts manually and on demand; there is no orchestration system. We also manage the schedule manually using work items.

In April 2013, IBM acquired UrbanCode, whose IBM UrbanCode Deploy product provides a deployment automation framework. We decided to migrate our scripts to the UrbanCode Deploy platform. Our team sees a lot of value in moving to this new software, which ultimately should allow us to implement continuous delivery on our production system.

In this blog post, I will explain our journey of migrating our scripts to IBM UrbanCode Deploy.

Why did we choose to migrate to IBM UrbanCode Deploy?
We developed our own scripts, and as developers we need to test them. Testing a modified script means installing CLM in a cluster, with front-end proxies and a load balancer, then validating that the script can upgrade this CLM system without introducing a bug. IBM UrbanCode Deploy helps us in this area by deploying the upgradeable system on a cloud and then running our script automatically.

We also see value in versioning the changes to our scripts and our configuration files. The plan and schedule feature will allow us to better manage when we can deploy. Finally, we want to integrate our deployment on staging with a set of tests that validate the upgrade was successful by running against the deployed system.

Creating Components and Environments
Our script is a multi-purpose Perl script. It supports all our different environments through a dedicated configuration file per environment, which contains the variables for that environment as well as which tasks to execute. The script manages installing the bytes, running backups, and installing and migrating CLM. See Working towards continuous deployment for more information.
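
The post does not show the script itself, but as a rough illustration of the pattern, here is a minimal Python sketch of a configuration-driven deployer. The file format, keys, and task names are all assumptions for illustration, not the team's actual Perl script or response-file format:

```python
# Hypothetical sketch of a configuration-driven deploy script.
# Keys and task names are illustrative only.

def parse_config(text):
    """Parse simple key=value lines into a dict, skipping blanks and comments."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

def run_tasks(config):
    """Run only the tasks that this environment's response file enables."""
    executed = []
    if config.get("run_backup") == "true":
        executed.append("backup")
    if config.get("install_bytes") == "true":
        executed.append("install")
    if config.get("migrate") == "true":
        executed.append("migrate")
    return executed

sample = """
# response file for a staging environment (illustrative)
app_server_host = bluesdev.example.com
run_backup = true
install_bytes = true
migrate = false
"""

config = parse_config(sample)
print(run_tasks(config))
```

The point of the pattern is that the script itself never changes between environments; only the response file it is handed does.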

The first problem we faced was mapping our ‘lingo’ to Deploy definitions. While we used terms such as CLM for the application, IBM WebSphere Application Server for the application server, and IBM DB2 for the database, Deploy talked about the application only. We needed to decide whether to use Deploy for the application server and the database, or only for the CLM application. We decided to focus only on upgrading the application itself, not the rest of the stack.

The concept of Environment was more familiar to us, as it mapped perfectly to our staging and production systems, which are called Jazzdev, Bluesdev, and Souldev. So we created an environment for each system.

Figure 1: List of environments for our application.

We realized we needed to download the bytes for the new CLM build and the configuration file our script uses. We could then have two components: the first would contain the entire build to download, and the second would contain the response file, or configuration, for a specific environment. Our internal system in IBM already has a build farm and a storage server that saves all the builds we need to preserve, so we quickly decided our first component should only contain the BuildID of the build to deploy, and that a script would then fetch the bytes from the storage server. We did not need Deploy to handle the build of the CLM application either. In the end, the first component contains only a version property, the BuildID, which is passed to the script; the script downloads the zip file from the storage server and installs the bytes.

Figure 2: The first component only contains the build ID.
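
As a hedged sketch of that idea, the mapping from the component's single BuildID property to an archive on the storage server might look like the following; the host name and path layout are invented for illustration, not the real internal system:

```python
# Hypothetical sketch: resolve a BuildID (the only property the first
# component carries) to a download URL on an internal storage server.
# The host name and path layout are assumptions.

def build_archive_url(build_id, base="https://storage.example.com/builds"):
    """Map a BuildID version property to the zip the deploy script fetches."""
    if not build_id:
        raise ValueError("BuildID property must be set on the component version")
    return f"{base}/{build_id}/clm-{build_id}.zip"

print(build_archive_url("CLM-I20131015-1200"))
```

Keeping only the BuildID in the component means Deploy never stores the heavy build artifacts itself; the storage server stays the single source of truth for bytes.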

The second component needed to contain the configuration file used on the environment: the response file consumed by the generic script we already have. We decided to keep the script for now and move its logic into a Deploy process at a later time. We realized each configuration is linked to a specific environment, so should we create one component containing the configuration file, or should we set the properties on the environment? We wanted to stay as close as possible to our existing process, so we decided to stick with a response file for now and thus use a component.

The second question was: should we create a version per environment, or one component per environment? After playing with processes and environments, we decided to create a single component containing the configurations for all environments. We know this is not scalable, but it should help us in the meantime. This component also contains the script itself, so everything is self-contained and consistent.

Figure 3: The second component contains all configurations (properties/conf) and the script.

We then found we could create an environment property that each environment is required to define. We used it to specify the exact configuration file to use on that environment.

Figure 4: The environment mandates the definition of which configuration file to use.
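
To illustrate how a required environment property could select a file out of the all-in-one component, here is a small sketch; the property name `configFile` and the `conf/` folder are assumptions, not the team's actual names:

```python
# Hypothetical sketch: a single component ships every environment's
# response file; a mandatory environment property picks the right one.
# Property and folder names are illustrative.

CONFIG_DIR = "conf"  # folder inside the component artifact

def select_config(env_properties):
    """Fail fast if the mandatory property is missing, as Deploy would."""
    name = env_properties.get("configFile")
    if name is None:
        raise KeyError("environment must define the 'configFile' property")
    return f"{CONFIG_DIR}/{name}"

print(select_config({"configFile": "bluesdev.conf"}))
```

Marking the property as required means a deployment to an environment that forgot to define it fails immediately, instead of running the script with the wrong configuration.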

Versioning components

Then came the decision about version management for the components. For security reasons, we could not use a versioned file system, as access to the file system is restricted. We investigated SCM solutions such as Git and IBM Rational Team Concert (RTC), and realized this was possibly overkill: currently, only one person manages the script content, and it is only one file. We then decided to link UrbanCode Deploy with Rational Asset Manager (RAM) and create an asset containing the configuration files and the scripts. This was a good balance between version control and asset management.

What’s Next?

Here are the next steps of our investigation, besides verifying that the upgrade process itself works.

  1. We need to set up statuses for gates (approvals). We have a very rigorous process governing who can deploy builds and when. We need to map this process to Deploy.
  2. Our script is not (yet) fully automated, and we need to wait for some product improvements: see plan item Improve upgrade experience across CLM (272459). In the meantime, we need to include wait steps in the Deploy process for actions that require an administrator, such as un-clustering the server or running a function in the user interface of the application.
  3. We also need to understand and architect interdependent scripts. As we have a cluster of 3 nodes, we usually run a script on node 1 and, once we validate that the system is working, run a second script on nodes 2 and 3. We cannot start the script on nodes 2 and 3 before validation on node 1 completes.
  4. Our component process is very minimal at this time; it only calls the existing script. We need to migrate our Perl script's logic into a Deploy process.

Figure 5: The current component process simply defines the call to the script and passes 3 variables: conf, the configuration; bldID, the build to download; and uConf, an environment variable that contains credentials for the script.
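
As an illustration of that single-step process, the command it issues could be composed like this; the script name `deploy.pl` and the argument flags are assumptions based only on the variable names in Figure 5:

```python
# Hypothetical sketch of the minimal component process: one step that
# invokes the existing script with the three variables from Figure 5.
# Script path and argument style are assumptions.

def build_command(conf, bld_id, u_conf):
    """Compose the shell command the process step would run."""
    return [
        "perl", "deploy.pl",   # existing multi-purpose script (name assumed)
        "--conf", conf,        # which configuration/response file to use
        "--bldID", bld_id,     # which build to download and install
        "--uConf", u_conf,     # environment variable holding credentials
    ]

print(" ".join(build_command("bluesdev.conf", "CLM-I20131015-1200", "uconf.env")))
```

Once the Perl logic moves into a Deploy process, each of these flags would naturally become a process step or property rather than a command-line argument.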

We are moving forward daily and are very hopeful we will be able to deploy our first build in staging using this solution before the end of the year. In the next blog, we will explain how we progressed and hopefully show you our success.

Christophe Elek
Jazz/CLM Deployment Manager