Does Rational Team Concert support MultiSite?

Does Rational Team Concert support MultiSite? I was asked this question by a customer recently, and I realized that many customers who are familiar with Rational ClearCase and Rational ClearQuest may be asking similar questions. ClearCase and ClearQuest have MultiSite features that enhance the base products to support teams distributed across a Wide Area Network. In the case of Rational Team Concert, the tool has been designed from the start to support a globally distributed team. There’s more to MultiSite than just WAN-friendliness, so let’s look at some of the benefits MultiSite provides for CC/CQ and how we achieve the same results with Rational Team Concert.

Supporting a Globally Distributed Team across the WAN

ClearCase clients need high-bandwidth, low-latency connectivity to the ClearCase SCM data. MultiSite makes this workable across low-bandwidth or high-latency networks by replicating VOB contents between sites: instead of clients accessing files across the WAN, a network of ClearCase servers absorbs the network latency and brings the data closer to the clients.

In the case of Rational Team Concert, the client and server have been designed to optimize their WAN-friendliness from the start. The SCM client takes advantage of the local disk cache provided by the Eclipse file copy area, and only has to get or put incremental changes to the server. The conversation with the server is highly optimized, making efficient requests to get the latest pending changes. When a workspace is being initially loaded, the SCM client combines streaming and multi-threading approaches to minimize “chattiness”.  The net effect is that SCM clients around the world are able to collaborate on a single central server.

This is the deployment pattern that we use to support development on jazz.net and throughout IBM Rational.

The above figure is a rough sketch of the Jazz Foundation and Rational Team Concert teams, which collaborate to build the Jazz Foundation platform and the RTC tools on top. They work together in a single repository, which is actually located in Toronto. A single central repository makes it easy for us to minimize the staff we need to support the team, and makes things like security, backups, and access control simpler. These teams work from around the world on the IBM network, and frequently from other places via the IBM VPN connection. They all load workspaces and deliver changes to the central RTC server on a daily basis.

The WAN-friendliness of the RTC client and server is the result of a lot of hard work from the SCM and Foundation teams, with help from our users. We built a tool into RTC to help monitor and capture user scenarios that did not perform well in a WAN environment. This tool is called “metronome”, and is discussed in the post: The Jazz metronome tool keeps us honest. With this tool in place, the RTC team encouraged team members and other RTC users to open defects when they saw poor WAN performance. The metronome tool captures the details of the scenario and helps the dev team design a solution. Over time, we’ve concluded that RTC is very WAN-friendly up to about 200ms of latency, at which point some lag becomes noticeable. Beyond about 600ms, file operations become pretty slow, so what then?

What if my latency or throughput still doesn’t cut it?

We realize that there will still be cases where a WAN connection will not meet the 200ms guidance. For these cases, we’ve leveraged the Web architecture of RTC to allow caching proxies to be used. Based on standard Web caching technology such as IBM HTTP Server, Apache HTTP Server, or Squid, a cache can be deployed in the location that has a poor connection to the central server. This caching proxy caches the SCM contents fetched from the server, greatly improving access times for the RTC client and reducing traffic on the network. Setting up a caching proxy is discussed in the article “Using content caching proxies for Jazz Source Control”.
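
To make this concrete, here is a minimal sketch of what such a proxy could look like using Squid. It is illustrative only: the host name central-jazz.example.com, the ports, and the cache sizes are assumptions, TLS termination is omitted, and the article above has the authoritative, product-specific steps.

    # squid.conf (sketch): a caching proxy in the remote lab, fronting the central Jazz server.
    # "central-jazz.example.com" and port 9443 are hypothetical; substitute your own values.
    http_port 3128 accel defaultsite=central-jazz.example.com
    cache_peer central-jazz.example.com parent 9443 0 no-query originserver ssl name=jazz

    # Give the cache enough memory and disk to hold the versioned content that loads and builds fetch.
    cache_mem 1024 MB
    cache_dir ufs /var/spool/squid 20000 16 256
    maximum_object_size 512 MB

    # Only proxy requests destined for the Jazz server.
    acl jazz_site dstdomain central-jazz.example.com
    http_access allow jazz_site
    http_access deny all

RTC clients at the remote location then point their repository connection at the proxy instead of the central server, so repeated fetches of the same versioned content are served from the local cache.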

In the case of the RTC team, we have deployed two caching proxies. The first one was deployed in Ottawa when we realized that the RTC build farm was consuming almost the entire network capacity out of the Ottawa lab! It turned out we were running 174 builds a day, each fetching roughly 600MB, which works out to on the order of 100GB per day crossing the WAN. We put a caching proxy in the build lab and immediately saw it absorb almost 95% of that traffic. Build times improved dramatically, with the source fetch dropping from 20 minutes to four. The second proxy was deployed to support our Bangalore, India development team. In addition to improving local response times and build performance, the proxies substantially offload the central server.

If we revisit our global deployment picture, the two caches are shown in the figure below:

MultiSite for redundancy and off-site backup

Another use case for CC or CQ MultiSite is redundancy and, in particular, off-site backup. Because Rational Team Concert is backed by a relational database, we can take advantage of the capabilities that commercial databases offer. Databases like DB2 support high availability configurations using technologies like DB2 High Availability Disaster Recovery (HADR). The details of setting up Rational Team Concert with DB2 HADR are described in “Configuring IBM DB2 9.7 High Availability Database Repository for Rational Team Concert”. Even without deploying a full HA database configuration, it’s possible to take online, incremental backups of the RTC database.
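
As a small illustration of that last point, an online, incremental backup can be scripted with ordinary DB2 commands. This is a sketch only, assuming a repository database named JAZZDB with archive logging already configured (a prerequisite for online backups); the article above covers the supported HADR configuration in detail.

    # One-time step (hypothetical database name JAZZDB): enable modification tracking so
    # incremental backups are possible, then establish a full online backup as the baseline.
    db2 update db cfg for JAZZDB using TRACKMOD YES
    db2 backup db JAZZDB online to /backups/jazz compress include logs

    # Thereafter, smaller online incremental backups can run on a schedule
    # without taking the RTC server offline.
    db2 backup db JAZZDB online incremental to /backups/jazz compress include logs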

Supporting an “island” development team

It’s common for a large development project to include islands of development that should have only limited visibility into the larger system: teams often working on a specific component, but loading some portion of the larger system for compiles and builds. Sometimes this component team is remotely located, or is even a contracted team from an outside supplier. MultiSite has been used to provide a limited replica for these teams, so that they don’t have access to more than they need and don’t deliver changes beyond their component.

Rational Team Concert can support this model to a degree today, using a central SCM server and private projects to control visibility. In the current milestones for the next release of RTC, you can see progress on delivering a full distributed SCM capability; the basics are described in the “RTC 3.0M7 New and Noteworthy”. With this capability, island teams can work independently, accepting changes from and delivering changes to the central repository.

RTC doesn’t yet address one of the extreme use cases which ClearCase MultiSite could handle: physically disconnected sites where changes are transferred by physical media.

What about “mastership”?

CC and CQ have a feature called “mastership”, where the ability to change a CQ record or modify an object in ClearCase is controlled by where the record or object is accessed in the MultiSite configuration. The feature is intended to provide orderly configuration management, and is also used in some configurations to implement “ownership” permissions on artifacts.

From a permissions point of view, RTC uses role-based permissions to implement the ownership use cases. For work items, roles can be used to control who can read and write work items, down to the level of specific kinds of changes and specific work item types. Similarly, role-based permissions control who can load a component and who can deliver changes to it. In general, it’s easy to separate what members of a component team can do from what other participants in the project can do.

Going beyond MultiSite

Rational Team Concert’s support for global teams goes beyond how they access SCM data; it also covers how the teams are integrated and configured to work together in the tool. We have an article, “Implementing the ‘Remote Development Line’ pattern using Jazz SCM”, that outlines how a remote team can be integrated into the tool with permissions and review processes. We will be enhancing the article in our next release to include tips for implementing this pattern across servers with the new distributed source control support.

Rational Team Concert is already enabled for multi-site use

I hope that this clarifies how Rational Team Concert handles the WAN and team configuration use cases that ClearCase and ClearQuest addressed via MultiSite.  You won’t see us introducing a new feature called “MultiSite”, because RTC already has capabilities that make it WAN-friendly.

Scott Rich, Distinguished Engineer
IBM Rational Chief Technical Officer for Northeast Europe

13 Comments
  1. Kevin Ramer October 7, 2010 @ 6:07 am

    Excellent article. It answers in real terms questions my team is facing deciding on our global infrastructure.

  2. Victoria Pasha October 18, 2010 @ 7:08 am

    I think this was a very nice post. Thanks Scott Rich for the great content. I will take this into consideration.

  3. Manoj Panda December 8, 2010 @ 2:24 am

    Excellent article. One of my customers is using ClearCase and planning for distributed development. I am confused about whether I should suggest RTC or CCRC, which is already part of CC 7.1.2.

  4. Andrew Wylie August 5, 2011 @ 3:58 pm

    Hi, if a central site wishes to also have an offsite team, as you had for India, but the central site has a restriction in place such that no direct access to that server is allowed from outside their country, how would RTC handle this? We are allowed to have a local FTP server in the same country as the central site, and from there connect to the offsite team’s country. So in CC MultiSite, we were able to produce deltas of the VOB and sync them to the offsite team, and vice versa.

    Cheers
    Andrew

  5. Frank Schophuizen January 19, 2012 @ 7:46 am

    After reading this article, one point is still unclear to me.

    Is it possible to set up a multisite RTC configuration for development sites where users are NOT able to connect over WAN to a remote server, but local servers are able to connect over WAN to (designated) remote servers?

  6. Scott Rich January 19, 2012 @ 8:36 am

    Frank, the design of the RTC distributed SCM is such that a user has to trigger the accept or deliver of changes between two repositories, so he needs to be able to communicate to both of them. Not all team members have to have access, but at least one “Release Engineer” has to be able to access both, and the two servers need to be able to communicate.

  7. Scott Rich January 19, 2012 @ 8:38 am

    Just noticed comment 4 for the first time. We don’t have an FTP option for distributed SCM at this time in RTC.

  8. Anat Mizrahi May 31, 2012 @ 6:35 am

    Hi Scott, thanks for the great article!
    I am looking for a solution in RTC to implement a topology I used in ClearCase.
    We are working in a classified environment that includes two servers that cannot be connected to each other. We used to transfer VOB updates between the servers using physical media (CD). Are there any similar options in RTC, or does it go the same way as with FTP? If not, are there any plans to implement such a requirement in a future release?

  9. Scott Rich May 31, 2012 @ 7:53 am

    Anat, this is similar to the earlier questions regarding disconnected sites. We don’t have this capability yet, but we’ve clearly got a few votes for it here.

    There was a proposal to work around this by using distributed SCM to a “staging repository” in a sort of DMZ that was accessible from either of the isolated networks. This should be possible, but I haven’t seen it done in practice. Would this work in your environment, or are your networks completely isolated?

  10. Jirong Hu October 4, 2012 @ 9:09 pm

    Hi Scott

    I am working in the same organization as Andrew (comment 4). Our problem is how to provide RTC to off-shore developers in India who are NOT on our WAN. Currently they are using a VPN, but this approach has the following concerns:

    * No way of ensuring that the person accessing the network is the person authorized (Someone else could access the network after the authorized person logged in)
    * Expensive to have managers fly to offshore locations to issue tokens to authorized persons.
    * Issues noted with performance

    Off-shore developers from other organizations are very common in large organizations. What’s Rational’s solution for this type of access?

    Thanks
    Jirong

  11. Scott Rich October 5, 2012 @ 2:31 am

    Hi Jirong, honestly, a VPN is the prevalent solution for off-shore teams. In addition, use of distributed SCM or local caching proxies can help alleviate some of the latency issues. In the face of extreme network or security constraints, the “last resort” is to put up a terminal server at the server side. Developers would still need VPN access, but they would only reach the terminal server, rather than the development servers directly. This also eliminates most of the network traffic as the browser or Eclipse client is running local to the servers. This might be worth investigating in your situation.

    Scott

  12. Liora Milbaum November 21, 2013 @ 1:39 am

    Hi Scott,

    Is there a reasonable solution for the use case specified by Anat in comment 8?

    Thanks,
    Liora

  13. Venkatesh Nadamuni December 6, 2014 @ 5:11 am

    Can this blog be updated to show progress made since 2012?
    I am particularly interested in load balancing multiple clients and regional load balancing, where clients connect to the closest server in terms of latency.
