Tools:
- RTC (with SCM CMD line tool) for source control and builds
- UrbanCode Deploy (UCD) for deployment automation
- Docker for test environments
- Slack for team awareness
In recent years, the concepts of continuous delivery, deployment, and integration have been introduced, along with DevOps, and have changed the way we look at producing, testing, managing, and deploying software. In response, many tools have been developed to capture as much of this space as possible. Today, we're going to talk about a particular set of tools that we use together to fulfill this idea of continuously delivering software.
On the Jazz.net team, as you might expect, we all use Eclipse with Rational Team Concert (RTC) plugins for development. A few years ago, IBM purchased a company called UrbanCode, whose products handle deployment automation rather well. There's integration between UrbanCode Deploy (UCD) and RTC that we wanted to leverage for managing application deployment. UCD has concepts such as Environments, Components, Component versions, and Applications that will help us extend and evolve our deployment strategy over time. We define various deployment processes in UCD that are executed on an agent machine.
Finally, we wanted to have UrbanCode use Docker to build containers for our test environment. Docker is relatively new, and changes a lot (sometimes with breaking changes), but is a very powerful tool for testing and for defining how the pieces of your application work together. Many of us cut our teeth in a world without such tools, and, while introducing them can be painful, the penalty for not using them today is increased disorganization, usually at the critical times when you most need to be organized. For our immediate needs, this meant creating and populating a data volume container, along with two other containers, one for web and the other for Tomcat. You might decide to do all of this on one or a set of UCD agents, instead of using Docker. Our treatment of Docker is going to be intentionally light, so we probably won't go into the low-level Dockerfiles, but will instead talk about how UCD builds and runs images/containers and how we use it to track the state of things in the environment.
At the end, we include a list of various resources that we've found useful while making this transition in how we operate. In particular, Robbie Minshall's two-part article helps to crystallize some of the concepts. Don't pass it up.
What does this buy us, anyway?
- Encourages us to define a URL where our latest code is always running. This is useful for people and bots.
- Encourages us to define our application environment, which we store (and load at build time) in source control.
- Opens the possibility of defining multiple environments, selectively deploying to them when (and how) needed. For instance, I might always want a test environment to be automatically brought up, but I don’t necessarily want my code going to the production server. We currently choose to do that manually on our team.
- I also have a place to go (UCD) where I can see how the deployment of a build went, which RTC build was deployed, and where I can edit configuration info for the application.
- If I check configuration changes into RTC, I can do personal builds with those changes and see them running in the UCD test environment.
- If I need to tweak the way things get deployed, I can do that via the UCD web interface and request another deployment with the same RTC build.
- When a build is deployed, the entire team is notified via a Slack channel. The message links to both the UCD environment and the root URL of our application in that environment.
We think doing this can give us a much greater operational awareness in the long term. In addition, we have traceability and repeatability in our deployment environment. This means we know what’s running, how it’s running, and can see the deployment history/logs for any of our deployed components, should things go wrong.
The rest of this post will go into detail on how each piece works together to achieve the points above.
How do things flow between the pieces?
An RTC build publishes a new component version to UrbanCode Deploy (recording properties on the component version: war file location, build workspace) using the post build deploy support in RTC. To configure this support on a build definition, you may have to set the post build deploy step to be visible. Once enabled, we fill ours out like so…
In this section, we enable deploy for personal builds and supply the URL of the UCD server, along with our user credentials. We can also test the connection if we need to.
Here, we specify how and what to publish to UCD. In our case, we have a component named jazz.net; the version of the component is going to be our buildLabel (which is just a timestamp); there is a base directory setting for the include files below; and there is a set of properties we want on the newly published component version. In our case, we want rtcWorkspaceId to be the ID used in our build, filename to be the name of the war file we've created, and warURL to point to a location on the network where we can pick up that war.
The RTC build then invokes an application process on our UrbanCode Deploy test environment. This is configured on the post build deploy page, in the Process section, as shown below. In this case, we want to run the buildDeployContainers process on the jazz.net application, in the TEST_ENVIRONMENT environment. buildDeployContainers is an application-level UCD process that simply calls a component process, shown later.
Our environments in UCD have a number of properties, most of which are container IDs. One property points to the location in our RTC project where environment configuration is stored. In our case, the path is relative to a known location on the filesystem. The basic pattern is <project root>/conf/env/<env name>/<container name>. <container name> for us is something like jazz-web or jazz-tomcat, with a bunch of files/folders inside. When our UrbanCode automation goes to build the Docker containers, we rely on this pattern to load resources into them from RTC source, using the RTC SCM command line tool. You can also see the defaultRTCWorkpaceId property, which is set to our build workspace. In the event you do a personal build, we pass that workspace ID in instead, allowing personal builds to be done with all of the config files from RTC.
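To make the pattern concrete, the environment configuration for TEST_ENVIRONMENT lives in the RTC project roughly like this (the contents noted in the comments are illustrative, not an exhaustive listing):

    <project root>/
      conf/
        env/
          TEST_ENVIRONMENT/
            jazz-web/        # Apache modules, sites, and related config
            jazz-tomcat/     # Tomcat server config, context files, and so on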
A UCD application process runs on an agent that also acts as a Docker host. That process calls out to a component-level process that knows how to build and run a set of Docker images/containers. We're choosing to use Docker here, but you could just as easily choose an existing physical or virtual machine to run the UCD agent on, then develop automation tailored to that environment. Our overall process looks something like this in UCD:
Containers are built, old ones are stopped/cleaned up, new ones are started.
This diagram shows how UCD will build the Tomcat Docker container. Basically, we copy some authentication files we always have on the agent, download the component version from the server, open a zip attached to that component version and pull out some contents, and proceed to build the Tomcat container. Note that we don't plan to keep using the zip attachments, to ease the burden on the UCD server. Instead, we will rely on the RTC SCM command line tool to load our content and restrict our usage of component versions to properties.
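In shell terms, the build portion of that process reduces to something like the sketch below. The paths, file names, and image tag are assumptions on our part, and the component version artifacts are actually downloaded by UCD's own step rather than by a command we run:

    # copy authentication files that always live on the agent into the build context
    cp /opt/ucd-agent/auth/* ./tomcat-context/
    # pull the pieces we need out of the zip attached to the component version
    # (the zip itself was downloaded by the UCD artifact step)
    unzip ./artifacts/jazznet-content.zip -d ./tomcat-context/content
    # build the Tomcat image, tagged with the component version (our buildLabel)
    docker build -t jazz-tomcat:${VERSION} ./tomcat-context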
This diagram shows how UCD will run the Tomcat Docker container we built. We have a basic check to see if there’s a running container. If there is one, we try to stop it. Then we run the container and set the container ID as a component environment property. Details below.
This is the build jazz-web UCD process. We’re only showing it here because of “Add configuration to Docker context”, which copies the Apache configuration files from RTC into the directory we build the container from (the context), so that the different modules, sites, and other bits of Apache config make their way to the right place.
You can see here how we copy the RTC workspace config for Apache into the Docker context.
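A rough shell equivalent of that copy-and-build step, assuming jazz-env-location points at the configuration we loaded from RTC (directory names here are placeholders):

    # "Add configuration to Docker context": copy the Apache config for this
    # environment into the directory we build the jazz-web image from
    cp -r "${JAZZ_ENV_LOCATION}/jazz-web/." ./web-context/conf/
    # build the web image, tagged with the component version
    docker build -t jazz-web:${VERSION} ./web-context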
Details on run-jazz-tomcat and set component environment property, shown earlier. We're not going to delve into all the Docker details too much. This Docker run command says to run the container as a daemon, using our jazz-dv data volume, which maps to /data in the running container. We give it a name containing the build version, so we can easily see what we're running. Note the post-processing script: when we run this command, UCD allows us to parse the response in JavaScript. I'm not going to show that here, but that's where we pull the Docker container ID out.
When we successfully run the Tomcat container, we want to record the container ID in the environment property we use for tracking. You can see we set a property named jazznet_container_id with a value of ${p:run-jazz-tomcat/containerid}, which is set by our post processing script from earlier.
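Put together, run-jazz-tomcat boils down to roughly the sketch below. The old container ID comes from the jazznet_container_id environment property, and the output of docker run -d (the new container ID) is what the post-processing script hands back to UCD. The --volumes-from flag and the variable names stand in for our UCD properties and are assumptions about the exact invocation:

    # stop and remove the previously deployed container, if the environment
    # property jazznet_container_id points at one
    if [ -n "${OLD_CONTAINER_ID}" ]; then
        docker stop "${OLD_CONTAINER_ID}" 2>/dev/null || true
        docker rm "${OLD_CONTAINER_ID}" 2>/dev/null || true
    fi
    # run the new Tomcat container as a daemon, attached to the jazz-dv data
    # volume container; docker run -d prints the new container ID on stdout
    NEW_CONTAINER_ID=$(docker run -d --volumes-from jazz-dv \
        --name "tomcat-${VERSION}" "jazz-tomcat:${VERSION}")
    # the post-processing script records this ID as jazznet_container_id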
The important thing to note in run-jazz-web is that our Docker run command has a --link tomcat-${p:version.name}:tomcat piece to it. This says: link to the Tomcat container whose version matches ours, and use the alias tomcat for it. It's important to note that --link is deprecated in the newest, shiniest Docker versions, but not in 1.6, which is what we've been using.
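For run-jazz-web, the corresponding run command looks roughly like this. The --link piece is what the paragraph above describes; the published ports and the --volumes-from are assumptions of ours:

    # run the Apache container, linked to the Tomcat container of the same
    # version under the alias "tomcat", so the Apache config can proxy to it
    docker run -d --link "tomcat-${VERSION}:tomcat" \
        --volumes-from jazz-dv \
        -p 80:80 -p 443:443 \
        --name "web-${VERSION}" "jazz-web:${VERSION}"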
The rather involved diagram below basically just shows our UCD process for building a data volume container that our web and Tomcat containers link to. The first step “Get environment configuration from RTC” uses the SCM command line tool to load our workspace content and grab the configuration data from it. From “Create Docker data volume container”, with the exception of “Grab WAR from GSA” and the last couple of steps, every other step is simply putting content into specific locations within the container. After putting content in, we get the data volume container ID and set it on our environment.
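Stripped down to shell, the data volume work looks something like this. On Docker 1.6 there is no docker cp into a container, so one way to populate the volume is a short-lived helper container that mounts both the data volume and a staging directory on the host; the staging paths and the busybox base image are assumptions here:

    # create the data volume container exposing /data (it never needs to keep running)
    docker run -v /data --name jazz-dv busybox true
    # copy the WAR, config, and other content from a staging directory on the
    # agent into /data via a throwaway container that sees both locations
    docker run --rm --volumes-from jazz-dv -v "$(pwd)/staging:/staging" \
        busybox cp -r /staging/. /data/
    # grab the data volume container ID so it can be set as an environment property
    DV_CONTAINER_ID=$(docker inspect -f '{{.Id}}' jazz-dv)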
When we call loadEnvironmentConfig, we use the RTC SCM command line tool. That process looks like this. In the grab jazzcs workspace step, we use the SCM command line tool with something like the following command: /opt/jazz/scmtools/eclipse/scm load -r https://<jazz server path> ${p:version/rtcWorkspaceId} "JazzCS/com.ibm.team.website.releng", which will load the right workspace (regular or personal, provided the workspace visibility is public). Afterward, we copy the config directory and set an environment property in UCD, jazz-env-location, to point to it.
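Condensed into shell, loadEnvironmentConfig amounts to roughly the following; the copy destination is a placeholder, and the jazz-env-location property is actually set by a follow-on UCD step rather than by the script itself:

    # load the build (or personal) workspace with the RTC SCM command line tool
    /opt/jazz/scmtools/eclipse/scm load -r https://<jazz server path> \
        ${p:version/rtcWorkspaceId} "JazzCS/com.ibm.team.website.releng"
    # copy the environment configuration somewhere stable on the agent...
    cp -r com.ibm.team.website.releng/conf/env/TEST_ENVIRONMENT /opt/ucd-agent/env-config
    # ...and a later UCD step points jazz-env-location at that directory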
Finally, a Slack channel notification is sent out with deployment information. We have a UCD process which uses curl to post messages to a Slack channel. The UCD process uses the version, as well as the environment name and URL, to craft the message. When successful, it looks like this in our Slack channel.
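The notification itself is just a webhook POST. A minimal sketch, assuming a Slack incoming webhook whose URL and message pieces are held in UCD properties (the names below are placeholders):

    # post a deployment message to the team's Slack channel via an incoming webhook
    curl -X POST -H 'Content-type: application/json' \
        --data "{\"text\": \"Deployed jazz.net ${VERSION} to ${ENVIRONMENT_NAME}: ${APP_URL}\"}" \
        "${SLACK_WEBHOOK_URL}"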
Whenever we see one of these messages, we know that our Docker host should be up and running with the code from that build and that it was deployed to a particular UCD Environment. We know to expect certain settings for that environment, and where to look in RTC to find, or modify, them. Even if we choose not to use Docker for our production (or other) systems, we can still automate deployment to those environments from UCD, by writing processes that might use scp, rsync, or some other common tool to deploy to the UCD agent, or have that agent deploy to another machine reachable on the network.
Where to go for more information?
- Robbie's blog posts on UCD and Docker
  - Rational solutions as Docker containers
  - Continuous Delivery with Docker: Part 1, creating a Dockerized application and test suite
  - Continuous Delivery with Docker: Part 2, creating a Docker pipeline
- RTC articles on JBE and UCD
  - CLM Urbancode integration documentation on jazz.net
  - Achieving continuous deployment with UrbanCode Deploy by integrating with Rational Team Concert on jazz.net – not the approach we use, but worth a look
- RTC SCM CMD line articles
  - Getting started with the Jazz SCM command line in Rational Team Concert on jazz.net
  - Scripting using the Rational Team Concert SCM Command Line Interface on jazz.net