Does Rational Team Concert support MultiSite?

Does Rational Team Concert support MultiSite? I was asked this question by a customer recently, and I realized that many customers who are familiar with Rational ClearCase and Rational ClearQuest may be asking similar questions. ClearCase and ClearQuest have MultiSite features that enhance the base products to support teams distributed across a Wide Area Network. In the case of Rational Team Concert, the tool has been designed from the start to support a globally distributed team. There’s more to MultiSite than just WAN-friendliness, so let’s look at some of the benefits MultiSite provides for CC/CQ and how we achieve the same results with Rational Team Concert.

Supporting a Globally Distributed Team across the WAN

ClearCase clients need high-bandwidth, low-latency connectivity to the ClearCase SCM data. MultiSite enables this across low-bandwidth or high-latency networks by replicating VOB contents across the network. Instead of clients accessing files across the WAN, a network of ClearCase servers absorbs the network latency and brings the data closer to the clients.

In the case of Rational Team Concert, the client and server have been designed to optimize their WAN-friendliness from the start. The SCM client takes advantage of the local disk cache provided by the Eclipse file copy area, and only has to get or put incremental changes to the server. The conversation with the server is highly optimized, making efficient requests to get the latest pending changes. When a workspace is being initially loaded, the SCM client combines streaming and multi-threading approaches to minimize “chattiness”.  The net effect is that SCM clients around the world are able to collaborate on a single central server.

This is the deployment pattern that we use to support development throughout IBM Rational.

The above figure is a rough sketch of the Jazz Foundation and Rational Team Concert teams, which collaborate to build the Jazz Foundation platform and the RTC tools on top. They work together in a single repository, which is actually located in Toronto. A single central repository makes it easy for us to minimize the staff we need to support the team, and makes things like security, backups, and access control simpler. These teams work from around the world on the IBM network, and frequently from other places via the IBM VPN connection. They all load workspaces and deliver changes to the central RTC server on a daily basis.

The WAN-friendliness of the RTC client and server is the result of a lot of hard work from the SCM and Foundation teams, with help from our users. We built a tool into RTC to help monitor and capture user scenarios that did not perform well in a WAN environment. This tool is called “metronome”, and is discussed in the post: The Jazz metronome tool keeps us honest. With this tool in place, the RTC team encouraged team members and other RTC users to open defects when they saw poor WAN performance. The metronome tool captures the details of the scenario and helps the dev team design a solution. Over time, we’ve concluded that RTC is very WAN-friendly up to about 200ms of latency, at which point some lag becomes noticeable. Beyond about 600ms, file operations become quite slow, so what then?

What if my latency or throughput still doesn’t cut it?

We realize that there will still be cases where a WAN connection will not meet the 200ms guidance. In this case, we’ve leveraged the Web architecture of RTC to allow caching proxies to be used. Based on standard Web caching technology (IBM HTTP Server, Apache HTTP Server, or Squid), a cache can be deployed in the location that has a poor connection to the central server. This caching proxy will cache SCM contents that are fetched from the server, greatly improving access times for the RTC client, and reducing traffic on the network. Setting up a caching proxy is discussed in the article “Using content caching proxies for Jazz Source Control”.
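To make the idea concrete, here is a minimal sketch of what a Squid reverse-proxy cache for a remote site might look like. This is an illustration only, not taken from the article above: the host name, port numbers, and cache sizes are all assumptions you would replace with your own values.

```
# Hypothetical squid.conf fragment: a reverse caching proxy placed in the
# remote lab, in front of a central RTC server (names/ports are illustrative).
http_port 3128 accel defaultsite=central-rtc.example.com

# Forward cache misses to the central RTC server as the origin.
cache_peer central-rtc.example.com parent 9443 0 no-query originserver name=rtcServer

# Generous on-disk cache and object size, since SCM content fetches are large.
cache_dir ufs /var/spool/squid 20000 16 256
maximum_object_size 512 MB

# Only proxy requests destined for the RTC server.
acl rtc_site dstdomain central-rtc.example.com
http_access allow rtc_site
cache_peer_access rtcServer allow rtc_site
```

Clients and build machines at the remote site would then be pointed at the proxy instead of the central server, so repeated fetches of the same SCM content are served locally.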

In the case of the RTC team, we have deployed two caching proxies. The first one was deployed in Ottawa when we realized that the Ottawa RTC build farm was consuming almost the entire network capacity out of the Ottawa lab! We were running 174 builds a day, each fetching roughly 600MB. We put a caching proxy in the build lab, and immediately saw that it absorbed almost 95% of the traffic. Build times improved dramatically, with the fetch time dropping from 20 minutes to four. The second proxy we have deployed supports our Bangalore, India development team. In addition to improving local response times and build performance, the proxy substantially offloads traffic from the central server.

If we revisit our global deployment picture, the two caches are shown in the figure below:

MultiSite for redundancy and off-site backup

Another use case for CC or CQ MultiSite is to achieve redundancy and especially off-site backup. Because Rational Team Concert is backed by a relational database, we can take advantage of the capabilities that commercial databases offer. Databases like DB2 can support high availability configurations using technologies like DB2 High Availability Disaster Recovery (HADR). The details of setting up Rational Team Concert with DB2 HADR are described in “Configuring IBM DB2 9.7 High Availability Database Repository for Rational Team Concert”. Even without deploying a full HA database configuration, it’s possible to take online, incremental backups of the RTC database.
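As a sketch of the backup approach, the DB2 command-line steps look roughly like the following. The database name JAZZ and the backup path are assumptions for illustration; online backups also require the database to be configured for archive logging.

```
# Illustrative DB2 commands; "JAZZ" and /backups/rtc are placeholder names.

# Enable modification tracking so incremental backups are possible:
db2 update db cfg for JAZZ using TRACKMOD ON

# Take a full online backup to serve as the incremental baseline:
db2 backup database JAZZ online to /backups/rtc

# Later, take cumulative incremental online backups against that baseline:
db2 backup database JAZZ online incremental to /backups/rtc
```

Because the backups are online, the RTC server can keep serving users while they run, and shipping the backup images off-site covers the off-site backup use case.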

Supporting an “island” development team

It’s common for a large development project to include islands of development which have limited visibility, often working on a specific component, but loading some portion of the larger system for compiles and builds. Sometimes this component team is remotely located or even a contracted team from an outside supplier. MultiSite has been used to provide a limited replica for these teams, so that they don’t have access to more than they need, and don’t deliver changes beyond their component.

Rational Team Concert can support this model to a degree using a central SCM and private projects to control visibility. In the current milestones for the next release of RTC, you can see progress on delivering a full distributed SCM capability. The basics are described in the “RTC 3.0M7 New and Noteworthy”. With this capability, island teams can work independently and accept and deliver changes to the central repository.

RTC doesn’t yet address one of the extreme use cases which ClearCase MultiSite could handle: physically disconnected sites where changes are transferred by physical media.

What about “mastership”?

CC and CQ have a feature called “mastership”, where the ability to change a CQ record or modify an object in ClearCase is controlled by which replica in the MultiSite configuration masters the record or object. The feature is intended to provide orderly configuration management, and is also used in some configurations to implement “ownership” permissions on artifacts.

From a permission point of view, RTC uses role-based permissions to implement the ownership use cases. For work items, roles can be used to control who can read and write work items, down to the level of specific changes and work item types. Similarly, role-based permissions are used to control who can load a component and who can deliver changes. In general, it’s easy to separate what members of a component team can do from what other participants in the project can do.

Going beyond MultiSite

Rational Team Concert’s support for global teams goes beyond how they access SCM data to how the teams themselves can be integrated and configured to work together in the tool. We have an article, “Implementing the ‘Remote Development Line’ pattern using Jazz SCM”, that outlines how a remote team can be integrated into the tool with permissions and review processes. We will be enhancing the article in our next release to include tips for implementing this pattern across servers with the new distributed source control support.

Rational Team Concert is already enabled for multi-site use

I hope that this clarifies how Rational Team Concert handles the WAN and team configuration use cases that ClearCase and ClearQuest addressed via MultiSite.  You won’t see us introducing a new feature called “MultiSite”, because RTC already has capabilities that make it WAN-friendly.

Scott Rich, Distinguished Engineer
IBM Rational Chief Technical Officer for Northeast Europe