
Rational Team Concert (RTC) 2.0 sizing guide

Note: The Rational Team Concert 3.0 sizing guide contains additional sizing guidelines for smaller team sizes and updates for 3.0.

Enterprise scalability of RTC 2.0 is reaching new levels on single- and dual-tier configurations. We created a sophisticated performance test harness to simulate actual client workload and validate our scalability requirements. With the RTC test harness, we are confident in our claim that an enterprise RTC environment can support over 2000 active users. The server performance data collected during these test runs forms the basis of this sizing guide.

Test Harnesses

Our existing in-house IBM Jazz environment was used as the workload model for the RTC test harness. At the time of this writing, the RTC 2.0 scalability harness generates load levels of over 2000 active users operating within 8 active streams (over 250 active users per stream, on average). During the course of a typical scalability run, users generate over 200 work items per hour (4800 work items over 24 hours). Builds are also simulated by pulling content from the RTC server at a rate equivalent to 95 builds per hour (roughly 2280 builds every 24 hours). For more details on the actual harness workload, service calls, and calibration, please refer to our blog on jazz.net.
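To make these rates concrete, here is a minimal sketch (not the actual RTC harness, whose workload is described on jazz.net) of how a rate-driven load generator can be structured: 200 work items per hour is one request every 18 seconds, and 95 build fetches per hour is one roughly every 38 seconds. The server URL and request paths are hypothetical placeholders.

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class LoadSketch {
        // Hypothetical server address and paths; the real harness drives actual RTC service calls.
        private static final String SERVER = "https://jazz.example.com:9443/jazz";

        public static void main(String[] args) {
            ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
            // ~200 work-item operations per hour -> one every 18 seconds
            pool.scheduleAtFixedRate(() -> hit(SERVER + "/workitems"), 0, 18, TimeUnit.SECONDS);
            // ~95 build content fetches per hour -> one roughly every 38 seconds
            pool.scheduleAtFixedRate(() -> hit(SERVER + "/builds"), 0, 38, TimeUnit.SECONDS);
        }

        private static void hit(String url) {
            try {
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setRequestMethod("GET");
                System.out.println(url + " -> HTTP " + conn.getResponseCode());
                conn.disconnect();
            } catch (IOException e) {
                System.err.println(url + " failed: " + e.getMessage());
            }
        }
    }

Scaling the thread pool and per-user state up to 2000 simulated users is where the real harness does its work; the point here is only the shape of a calibrated, rate-based workload.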

Hardware Sizing Recommendations

Using the above harness workload, the tables below were compiled based on the various hardware and software configurations that we tested. When considering sizing options, RTC 2.0 supports both single- and multi-tier configurations. With our test harness and a server running on one of the latest state-of-the-art x64 hardware chipsets (e.g., Xeon 5500 or better), we demonstrated the ability to handle enterprise load on a single-tier configuration. Customers may choose the lower-cost single-tier option and add a second machine (dual-tier) later to support team growth.

Here are the sizing recommendations for different enterprise deployments:

Small Scale Enterprise Configuration – Single-tier, 100–700 users


IBM System x3650 M2 – single CPU, Intel Xeon 5500 2.4 GHz or higher, 64-bit
Memory – 12GB or higher
Disk – High-performance SAS disk, RAID, SAN, or NAS direct-connected disk subsystem
Operating System – Red Hat Enterprise Linux 5.3 or Windows Server 2003
Web server – Tomcat 5.5 or WAS 6.1.0.23 or higher
Database – DB2 9.1, DB2 9.5 FP4, SQL Server 2005 or 2008, Oracle 10g R2


Small Scale Enterprise Configuration – Dual-tier, 100–700 users


2 systems – IBM System x3550, dual CPU, Intel Xeon 5160 2.4 GHz or higher, 64-bit
Memory – 4GB or higher (8GB recommended)
Disk – High-performance SAS disk (15K), RAID, SAN, or NAS direct-connected disk subsystem
Operating System – Red Hat Enterprise Linux 5.3 or Windows Server 2003
Web server – Tomcat 5.5 or WAS 6.1.0.23 or higher
Database – DB2 9.1, DB2 9.5 FP4, SQL Server 2005 or 2008, Oracle 10g R2



Large Scale Enterprise Configuration – Single-tier, 700–2000 users


IBM System x3650 M2 – dual CPU, Intel Xeon 5500 2.4 GHz or higher, 64-bit
Memory – 12GB or higher
Disk – SAN or NAS/DAS disk subsystem; recommended: NetApp 3050C with a Fibre Channel connection
Operating System – Red Hat Enterprise Linux 5.3 or Windows Server 2003
Web server – Tomcat 5.5 or WAS 6.1.0.23 or higher
Database – DB2 9.5 FP4, SQL Server 2005 or 2008, Oracle 10g R2


Large Scale Enterprise Configuration – Dual-tier, 700–2000+ users


2 systems – IBM System x3650 M2, dual CPU, Intel Xeon 5500 2.4 GHz or higher, 64-bit
Memory – 12GB or higher
Disk – SAN or NAS/DAS disk subsystem; recommended: NetApp 3050C with a Fibre Channel connection
Operating System – Red Hat Enterprise Linux 5.3 or Windows Server 2003
Web server – Tomcat 5.5 or WAS 6.1.0.23 or higher
Database – DB2 9.5 FP4, SQL Server 2005 or 2008, Oracle 10g R2



Tomcat and WebSphere Application Server (WAS) configurations

We have been performance testing with both WebSphere and Tomcat, and because Jazz utilizes little of the application server functionality, the choice between the two has little effect on overall scalability. Basically, go with what you have and know. That said, WebSphere offers additional administrative, monitoring, security, and high-availability benefits. Customers have told us that they take advantage of the ability to create WAS instances instantly, deploying a new Jazz server that utilizes their existing security configuration. For more information on the benefits of WebSphere, please see the following link:

http://www-01.ibm.com/software/webservers/appserv/was/features/

Network Connectivity

For dual-tier configurations, we recommend minimizing latency between the application server and the database server (no more than 1–2 ms) and placing both servers on the same subnet. When using external storage solutions, minimize connectivity latency as well; the optimal configuration is a direct Fibre Channel (optical) connection.
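As a quick sanity check of that 1–2 ms budget, a trivial repeated query can estimate the application-server-to-database round trip. This sketch assumes a DB2 back end; the JDBC URL and credentials are placeholders, and the DB2 JDBC driver must be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class DbLatencyCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder JDBC URL and credentials; substitute your database's values.
            String url = "jdbc:db2://dbserver.example.com:50000/JAZZ";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = conn.prepareStatement("SELECT 1 FROM SYSIBM.SYSDUMMY1")) {
                int rounds = 100;
                long total = 0;
                for (int i = 0; i < rounds; i++) {
                    long start = System.nanoTime();
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next(); // force the round trip to complete
                    }
                    total += System.nanoTime() - start;
                }
                System.out.printf("Average round trip: %.2f ms%n", total / (rounds * 1e6));
            }
        }
    }

If the average reported here is consistently above a couple of milliseconds, consider moving the two tiers onto the same subnet or switch.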

Disks

During our 2000-user scalability runs for RTC 2.0, we noticed that our single 1TB SATA disk drive was under a tremendous amount of load and physical stress. We knew that this single-drive configuration was clearly a bottleneck, but we needed to collect data to prove it. Over the course of the last five months, we measured the load on both the single drive and on network-attached storage (NAS)/storage area network (SAN) based drive solutions, using nmon, an IBM performance-monitoring tool, to characterize overall system behavior.

During our runs, we could see that the single-disk SATA solution was simply not handling user load for deployments of greater than 500 users. It was clear that we needed a more scalable enterprise storage solution, as the single drive and controller had no idle time. We considered an inexpensive internal RAID configuration, a SAN, and the enterprise storage solutions we had in-house; our choice was the IBM N5500 (an IBM OEM equivalent of the NetApp 3050C). We also knew that our customers would appreciate the fact that we certify and directly support this solution. It provides numerous benefits, such as high availability and disaster recovery, as well as the ability to use it as a typical network-attached storage (NAS) or direct-attached storage (DAS) solution.

In our configuration, our goal was to provide optimal disk performance without the overhead of NAS network connectivity. With the storage directly connected via a Fibre Channel (FC) connection, latency between the storage and the server is negligible, even if the devices are located on separate floors. This configuration offloaded all disk I/O from the server to the storage device while providing a fully fault-tolerant disk subsystem with instantaneous snapshot backup capabilities. The Intel Xeon 5500-based system we tested also had motherboard RAID support for up to six disk drives; while this is an option to consider, such RAID configurations carry significant risk in the event of a single drive failure. Enterprise storage solutions, by contrast, offer real-time monitoring, hot-swap drive replacement, and real-time failover with zero downtime.

In summary, our recommendation for storage for large-scale deployments is to consider a SAN or NAS solution. Please review the IBM NAS solutions when making your decision, as they provide both NAS and DAS connectivity. In addition, by reviewing our migration guide you can use our sample repository to benchmark your installation against ours.

Multiple repositories

Obviously, fewer servers mean fewer machines and application-server instances to configure, monitor, and pay for, which is a big factor in total cost of ownership. With the read permission and scalability support in 2.0, you can now host many projects on the same server. But where does this end? Will all 30,000 developers in IBM end up running on one giant server in the cloud? We don't think so.

There are other reasons why it makes sense to always plan for multiple servers. Maintenance windows are a very practical one. A single server supporting intense round-the-globe development has no window for maintenance; deploying a few regional servers to host projects where they are centered allows each of those servers a reasonable maintenance window when needed. The sheer size of a single repository is another consideration. We already struggle with the time it takes to ship around backups and perform migrations of our 40GB repository on jazz.net, and extrapolating this to a repository 300 times as large hurts my head. There are things we can do to mitigate this: SAN vendors provide some pretty amazing backup technology, and we're working on minimizing future migration impacts.

What do you lose? Once you decide that it makes more sense to deploy multiple repositories in your organization, many things will continue to work while others become more difficult. Here is a quick summary:

  • You won’t be able to link work items as easily across repositories. Instead, you will have to create a “URL Reference” between servers.
  • You won’t be able to share source control between repositories in streams and repository workspaces.
  • You can continue to build by fetching source from multiple repositories.
  • You can continue to use the same LDAP server to control authentication on your servers (a quick connectivity check is sketched after this list).
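For the LDAP point, a minimal bind check using the standard JNDI API is sketched below; the directory URL and bind DN are hypothetical placeholders. Running it from each Jazz server host verifies that all servers can reach the shared directory with the same credentials.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class LdapBindCheck {
        public static void main(String[] args) {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            // Placeholder directory URL and bind DN; substitute your directory's values.
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "uid=jazzadmin,ou=people,dc=example,dc=com");
            env.put(Context.SECURITY_CREDENTIALS, "password");
            try {
                DirContext ctx = new InitialDirContext(env); // performs the bind
                System.out.println("LDAP bind succeeded");
                ctx.close();
            } catch (NamingException e) {
                System.err.println("LDAP bind failed: " + e);
            }
        }
    }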

Artifact Sizing Guidelines

Here are recommendations on artifact sizing that will help ensure optimal performance of the repository as data sizes increase significantly.

    Repository: There are no limits to the number of projects, users, or iteration structures in the server.

    Work Items: There is no upper limit to the number of work items in the repository.

    Plans: You can have as many plans as needed in the repository, and they can also be deleted. However, a single plan can contain at most 2048 work items.

    Reports: The bulk of the data warehouse size is a function of the number of work items. The only performance limitation we have seen is based on the number of streams selected as part of the "Stream Size" reports. This is configurable, and we recommend that you only configure your integration streams to be collected. By default, no streams are configured; you can use the "Administer SCM Snapshot" page when logged in as the administrator in the web UI.

    Build: There is no limit on the number of build results as such, but there are guidelines on the size of the build results and the pruning schedule. When you reach enterprise scale and have hundreds of build results a day, ensure that continuous builds are being pruned (e.g., check the pruning option in the build definition editor). In addition, instead of uploading gigabytes of build results, we keep the larger content on the build machines and create links from the build results to its location, which avoids putting too much transient data into the repository (see the sketch below).
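    The "link, don't upload" idea can be as simple as publishing a small pointer file while the multi-gigabyte output stays on the build machine. This is only a sketch of the approach with a hypothetical artifact URL; it is not the RTC build toolkit API.

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;

        public class BuildLinkWriter {
            public static void main(String[] args) throws IOException {
                // Hypothetical URL where the build machine serves the full output.
                String artifactUrl = "http://buildserver.example.com/builds/I20090624-1200/";
                PrintWriter out = new PrintWriter(new FileWriter("build-links.html"));
                out.println("<html><body>");
                out.println("<a href=\"" + artifactUrl + "\">Full build output (kept on the build machine)</a>");
                out.println("</body></html>");
                out.close();
                // Attach only this small file (or the URL itself) to the build result,
                // keeping the transient multi-GB content out of the repository.
            }
        }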

    Source Control: We recommend no more than 50K files and folders in a single component; if you have more files, split them across multiple components (a simple counting sketch follows below). We encourage individual users to keep fewer than 300 change sets suspended. While there is no hard limit, a large suspended set can slow down operations involving it (suspend, resume, discard) and adds some cost when refreshing or changing collaborations. We have tested the user experience of working with a large number of components in workspaces and streams: there are no known issues with having hundreds of components in a stream or workspace, and we have tested with 500 components in a collaboration.
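    To see whether a component is approaching the 50K recommendation, a simple recursive count over a local copy of its contents is enough. This sketch is illustrative only; point it at whatever local directory holds the component's files.

        import java.io.File;

        public class ComponentSizeCheck {
            private static final int RECOMMENDED_MAX = 50000; // files and folders per component

            public static void main(String[] args) {
                long count = count(new File(args.length > 0 ? args[0] : "."));
                System.out.println(count + " files and folders"
                        + (count > RECOMMENDED_MAX ? " -- consider splitting into multiple components" : ""));
            }

            private static long count(File dir) {
                File[] children = dir.listFiles();
                if (children == null) return 0;
                long n = 0;
                for (File f : children) {
                    n++; // count the file or folder itself
                    if (f.isDirectory()) n += count(f);
                }
                return n;
            }
        }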

    Dashboards: Configuration data in dashboards is limited to 32KB. This affects viewlets such as static HTML, Headlines, and Bookmarks that can be edited directly by users, and of course any newly written viewlets that use dashboard mementos to store data (see the sketch below).
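    A newly written viewlet can guard against overflowing the memento before saving. This sketch assumes the 32KB limit applies to the serialized bytes of the configuration; the fits helper is hypothetical, not part of the dashboard API.

        import java.nio.charset.StandardCharsets;

        public class MementoSizeCheck {
            private static final int LIMIT_BYTES = 32 * 1024; // the 32KB dashboard configuration limit

            // Hypothetical helper: true if the serialized viewlet configuration fits.
            static boolean fits(String serializedConfig) {
                return serializedConfig.getBytes(StandardCharsets.UTF_8).length <= LIMIT_BYTES;
            }

            public static void main(String[] args) {
                String html = "<p>Team news goes here...</p>"; // e.g. a static HTML viewlet's content
                System.out.println(fits(html) ? "fits" : "too large -- trim the content");
            }
        }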

Wed, 24 Jun 2009