Which RTC topology do you recommend for a distributed environment?
We are using RTC/RQM (4.0) in our company. It's a single instance that is accessed from many countries. We need to improve performance for the users located in a country different from the one where RTC is installed.
Which topology do you recommend I implement? I thought the E3 topology (Enterprise - Clustered / Linux / Oracle) was the best option. However, I have just read the "Questionnaire for Clustering in the Rational solution for Collaborative Lifecycle Management", and it says the answer to "Will the cluster live on a single site (the cluster will not span physical locations or buildings)?" must be YES.
So I'm a bit confused. What do you recommend I do?
I would really appreciate your help.
Thanks
Accepted answer
Clustering is not meant to cover the case where you want "duplicate" instances at different sites, so it will not help here.
There are a lot of different options for improving performance. A great place to start is Dan Toczala's blog, which points to some practical ideas for performance improvement:
http://dtoczala.wordpress.com/2013/02/11/jazz-performance-a-guide-to-better-performance/
The rich clients (Visual Studio and Eclipse) tend to perform well anyway, as they interact with the server in a clever way. Web UI performance depends on overall network performance, your browser, and (curiously) the distance between the application server running RTC/RQM and the database server. Put the database server very close to (on the same subnet as) the app server. Also, consider upgrading to the latest release (currently 4.0.2), as performance improvements are delivered all the time that will help.
IBM's own usage of RTC/RQM (for developing RTC and many of the other software group products) is very distributed, often across relatively slow WAN links - and the software is designed to cope with this.
Hope that gives you a few ideas
anthony
4 other answers
- Put a reverse proxy in front of your Jazz server(s). It adds some time to the setup and deployment of your solution, but it gives you MUCH greater flexibility in the future should you ever need to move a Jazz application from one server to another (see the smoke-test sketch after this list).
- Make sure that you make allowance for monitoring your solution. Even a simple script that runs "synthetic" transactions can give you a much better view of how your system's performance changes over time (see the monitoring sketch after this list).
- Go with a three-tier deployment. Tier 1 is the web server and reverse proxy. Tier 2 is the Jazz application servers. Put one Jazz application on its own WAS instance on each box. That allows you to deploy smaller boxes initially, and then upgrade or add additional instances later. It makes debugging configuration and performance issues MUCH easier. My personal favorite is E1 - Distributed Linux, but I am a bit of a Linux bigot. The important thing is to avoid the temptation to combine things, and to keep each individual application on its own VM or physical machine. Whenever people begin to combine things, they almost always put too much on a single box and end up with poor performance.
- Make sure that your machines are set up with half of the physical memory allocated to the JVM (for the Jazz application) and half to the OS. The OS needs memory for transferring files, doing the normal OS things, and logging and caching local content. A bigger JVM heap means more room for the Jazz application to operate in, but it also means longer "pauses" when Java does its inevitable garbage collection.
- Not knowing anything about your workload, my standard "vanilla" Jazz server has 8GB of memory, two dual-core CPUs, and at least 80GB of drive space. That should work for a "typical" Jazz application, all alone on a machine with WAS.
- Deploy content caching proxies for your remote sites. Be careful when you do this, and make sure that you are getting a good number of cache hits (see the cache-hit sketch after this list). I am not an expert on setting these up, but I have heard that some folks have had to adjust the settings in order to get optimal use of the content caching.
- Keep asking questions. It's the only way that you'll learn and get smarter about this stuff.
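As a follow-up to the reverse proxy point: a quick way to confirm the proxy is forwarding each application is to request the (normally unauthenticated) rootservices document through it. This is only a sketch; the front-end host name and the context roots shown here are assumptions, so substitute whatever you actually configure.

```python
#!/usr/bin/env python3
"""Smoke test: check that a reverse proxy forwards the Jazz context roots."""
import urllib.request

PROXY_BASE = "https://jazz-proxy.example.com"   # hypothetical front-end host
CONTEXT_ROOTS = ["/jts", "/ccm", "/qm"]         # adjust to your deployment

for root in CONTEXT_ROOTS:
    url = PROXY_BASE + root + "/rootservices"   # discovery document served by each app
    try:
        with urllib.request.urlopen(url, timeout=15) as response:
            print("%s -> HTTP %d" % (url, response.status))
    except Exception as error:                  # DNS, TLS, or HTTP failure
        print("%s -> FAILED (%s)" % (url, error))
```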
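For the "synthetic" transaction idea, even something this small is useful. It is just a sketch: the base URL, paths, interval, and log file are placeholders, and you would normally point it at the pages your users actually exercise.

```python
#!/usr/bin/env python3
"""Record response times for a few Jazz URLs at a fixed interval."""
import csv
import time
import urllib.request

BASE_URL = "https://jazz.example.com"                                   # hypothetical server
PATHS = ["/jts/rootservices", "/ccm/rootservices", "/qm/rootservices"]  # pages to time
INTERVAL_SECONDS = 300                                                  # one round every 5 minutes
LOG_FILE = "jazz_response_times.csv"

def time_request(url):
    """Return (HTTP status, elapsed seconds) for one GET request."""
    start = time.time()
    with urllib.request.urlopen(url, timeout=30) as response:
        response.read()
        return response.status, round(time.time() - start, 3)

if __name__ == "__main__":
    with open(LOG_FILE, "a", newline="") as handle:
        writer = csv.writer(handle)
        while True:
            for path in PATHS:
                try:
                    status, elapsed = time_request(BASE_URL + path)
                except Exception as error:       # record failures as data points too
                    status, elapsed = "error: %s" % error, ""
                writer.writerow([time.ctime(), path, status, elapsed])
            handle.flush()
            time.sleep(INTERVAL_SECONDS)
```

Graphing that CSV over a few weeks makes gradual slowdowns easy to spot.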
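And for the content caching proxies, it is worth periodically computing the hit ratio from the proxy's access log. The sketch below assumes a Squid-style native access.log, where the fourth field records the cache result (for example TCP_HIT/200 or TCP_MISS/200); adjust the parsing for whichever proxy you actually deploy.

```python
#!/usr/bin/env python3
"""Rough cache hit-rate report from a Squid-style access.log."""
import sys
from collections import Counter

def hit_ratio(log_path):
    """Count hit/miss lines and return (counts, hit ratio)."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 4:
                continue                          # skip malformed lines
            result = fields[3].split("/")[0]      # e.g. "TCP_HIT" from "TCP_HIT/200"
            counts["hit" if "HIT" in result else "miss"] += 1
    total = counts["hit"] + counts["miss"]
    return counts, (counts["hit"] / total if total else 0.0)

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "access.log"
    counts, ratio = hit_ratio(path)
    print("hits=%d misses=%d hit ratio=%.1f%%"
          % (counts["hit"], counts["miss"], ratio * 100))
```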
Comments
Thanks, Daniel. I found your blog very interesting.
Hello Daniel,
Is there good documentation for setting up E1 - Distributed Linux?
The jazz.net documentation for the reverse proxy is a little confusing.
Thanks
Vishnu M
Our WAN is not so good, so we need to have multiple instances of RTC (but working as a single one) in different countries. We need this not only for SCM, but also for RQM and for CCM work items.
Regarding E1 - Distributed Linux, my understanding is that I can have the following architecture:
Country A:
- 1 server with JTS
- 1 server with CCM
- 1 server with RQM
- 1 server with DB
Country B:
- 1 server with CCM
- 1 server with RQM
Is that OK? Is it possible to have multiple instances of the JTS? Is this a supported scenario?
Thanks
Comments
> have multiple instances of RTC (but working as only one)
This you cannot do. Each CCM server has its own database of work items;
they cannot be functionally shared across servers. (You can use the very limited Change Request sharing; see the project associations in the top right of the project configuration, Links tab, in Eclipse.)
We are using caching proxies, and like all other tools, they need to be monitored and tuned on an ongoing basis.
Note that there are different user authentication processes too. You could have TWO complete complexes (JTS/CCM/RQM), or ONE central JTS and just CCM/RQM distributed. If that link fails, then all work stops; and if you have two JTSs, you need two licenses during any inter-system work (particularly for SCM operations).
Our IT group will not allow the database, WAS, and IHS on the same server in production (virtual), so we end up with server sprawl.
1 vote
Hi Leonardo
Unfortunately, you will not be able to have different instances of RTC (and RQM) on different sites that still operate as one single instance. You can certainly have an instance on each site, but these will be different instances.
I would try improving performance first (as per Dan's blog and comments above).
anthony
1 vote
So the only alternative is to use caching proxies for SCM and to improve the WAN performance as much as possible. Not supporting a multi-site installation is a big limitation of the tool. Are there any plans to support it in the future?
As far as I know, there is no plan to change the system design. These are independent servers that happen to share a license server. Each CCM has its own unique database (repository). This application is NOT one database with multiple processing engines in front of it (horizontal scalability); it is more like vertical scalability: one engine, one database, add CPU and memory.
You can set up another complete instance (we have 4), but they do not share anything. And currently (as far as the eye can see) there is no mechanism to move data between them with any fidelity (a project move, for example, or moving work items across servers). You also cannot really link across servers or move source change sets and history across servers.
One correction to the previous comment: replication of change sets and change-set history is supported. In particular, you can deliver changes from a workspace on one repository to a stream in a different repository to achieve this. Also note that you can use OSLC links to link across servers, but those are just "traversable" links for manual navigation and reporting, so the only "semantics" they have is their effect on reporting.