
Rational Software Architect Design Manager 4.0.1 Performance And Sizing Report

This article is an update to the IBM Rational Software Architect Design Manager 4.0 Performance and Sizing Report. In this article we report the results of the performance and scalability testing we have done on Rational Software Architect Design Manager 4.0.1 (RSADM) and make some comparisons to RSADM 4.0. We also offer some guidance for planning the deployment and configuration of RSADM.

We executed automated multi-user tests to get performance data on a typical mix of use cases on an RSADM server under load. We also performed single-user tests of other important but less frequently used use cases. We ran the performance tests on RSADM installed on virtual machines in a cloud environment hosted on an enterprise-class virtualization platform. The goals were to compare the performance and response times of RSADM 4.0.1 to RSADM 4.0 and to demonstrate a deployment and configuration of RSADM 4.0.1 that could support at least 100 concurrent users. The rest of this article describes the tests in more detail, presents our measurements and findings, and provides some guidance and advice for planning the deployment and configuration of RSADM.


Workload

To provide a reasonably realistic volume of model data for the tests, before each performance test run we imported approximately 100 MB of model data into RSADM, comprising approximately 25,000 resources and over 800,000 total model elements. This was the same test data that we used for the RSADM 4.0 performance tests reported in the previous article. We again performed two kinds of tests: multi-user tests of the most common use cases and single-user tests of other important, but less commonly used, operations.

To drive the multi-user tests we used the same automated test framework and suite that we used for the RSADM 4.0 performance tests. It can simulate an arbitrary number of users executing a configurable mix of user-level operations (use cases) on an RSADM server over an arbitrary period of time. The framework drives the tests and records the response time of every user-level operation invoked during a test session. We used this framework and test suite to drive the multi-user tests and gather performance and response time data. The single-user tests were performed manually.

Multi-user tests

For the multi-user tests we used our performance test framework to simulate 50 and 100 concurrent users performing a variety of common user operations over the course of approximately 1 hour. The use cases we used were the same ones that we used for the RSADM 4.0 report. The use cases were split between the use of the two supported RSADM client types:

  • Web client – A web browser
  • Rich client – Rational Software Architect (RSA) with the Design Management (DM) extensions installed on it

The web client use cases were:

  • Expand an explorer tree node – Simulates a user expanding a node in the Explorer tree.
  • Create a resource – Creates an Ontology using forms.
  • Save a resource – Changes the title of an Ontology and saves it.
  • Lock a resource – Locks an Ontology (implies the intent to modify it).
  • Unlock a resource – Unlocks an Ontology that was previously locked.
  • Open a resource – Simulates opening a resource in a form-based editor.
  • Open a diagram – Simulates opening a UML diagram.
  • Add a comment – Creates a textual comment on a resource.
  • Search all resources – Searches for a keyword in all models in all project areas, retrieving additional information about the matching resources.
  • Search diagrams – Same as above except that it searches in diagrams only.
  • Get an OSLC representation – Simulates a client requesting the OSLC representation of a resource.

The rich client use cases were:

  • Expand an explorer tree node – Simulates a user expanding a node in the Explorer tree.
  • Open properties for a UML resource – Simulates showing a model element’s properties in the Properties Viewer.

Note that in most cases each test case execution issues more than one server request, sometimes several, in order to emulate the requests that a web client or rich client would make for a particular use case. For example, the ‘Open a resource’ test, in addition to retrieving the form-based representation of the resource, also retrieves any existing comments, the ‘breadcrumbs’ (hierarchical path), and so on.
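As a rough illustration (this is a simplified sketch of the idea, not our actual test framework; the host name, URLs, and class name are hypothetical), a simulated ‘Open a resource’ operation might be implemented like this:

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch of one simulated user operation: a single use case fans out
    // into several HTTP requests, and the recorded response time covers
    // all of them. URLs are hypothetical.
    public class OpenResourceOperation {
        static final String[] REQUESTS = {
            "https://dm.example.com/dm/models/resource123",      // form-based representation
            "https://dm.example.com/dm/comments?on=resource123", // existing comments
            "https://dm.example.com/dm/breadcrumbs/resource123"  // hierarchical path
        };

        // Returns the elapsed time for the whole operation, in milliseconds.
        public static long executeAndTime() throws Exception {
            long start = System.nanoTime();
            for (String request : REQUESTS) {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(request).openConnection();
                conn.getResponseCode();  // send the request and wait for the reply
                conn.disconnect();
            }
            return (System.nanoTime() - start) / 1000000L;
        }
    }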

For the multi-user tests, we aimed for a mix of 70% read operations and 30% write operations. Table 1 shows, for each use case, the number of simulated users and execution frequencies we used.

Table 1: Number of simulated users and execution frequencies for each use case

| Client type | Use case | 50-user load: users / per-user rate / net rate (per minute) | 100-user load: users / per-user rate / net rate (per minute) | Notes |
|---|---|---|---|---|
| Web client | Expand an explorer tree node | 4 / 1 / 4 | 8 / 1 / 8 | |
| Web client | Create a resource | 5 / ¾ / 3¾ | 10 / ¾ / 7½ | |
| Web client | Save a resource | 5 / 1 / 5 | 10 / 1 / 10 | |
| Web client | Lock a resource | 5 / 1 / 5 | 10 / 1 / 10 | Each simulated user executes both the lock and the unlock use case |
| Web client | Unlock a resource | (same users) / 1 / 5 | (same users) / 1 / 10 | |
| Web client | Open a resource | 5 / ½ / 2½ | 10 / ½ / 5 | |
| Web client | Open a diagram | 5 / ½ / 2½ | 10 / ½ / 5 | |
| Web client | Add a comment | 5 / 1 / 5 | 10 / 1 / 10 | |
| Web client | Search all resources | 3 / 1 / 3 | 6 / 1 / 6 | |
| Web client | Search diagrams | 2 / 1 / 2 | 4 / 1 / 4 | |
| Web client | Get OSLC representation | 5 / 1 / 5 | 10 / 1 / 10 | |
| Rich client | Expand an explorer tree node | 4 / 1 / 4 | 8 / 1 / 8 | |
| Rich client | Open properties for a UML resource | 2 / 1 / 2 | 4 / 1 / 4 | |
| Total | | 50 / – / 48¾ | 100 / – / 97½ | |

These distributions resulted in 28.2% of the total net operations being write operations (“Create a resource”, “Save a resource” and “Add a comment”) for both the 50- and 100-user test loads: at 50 users, for example, the write operations account for 3¾ + 5 + 5 = 13¾ of the 48¾ net operations per minute, or 28.2%. This is quite close to the 30% we were aiming for.

Single-user tests

The single-user tests of what are expected to be relatively lower-frequency (but still important) use cases were:

  • Finalize RSADM setup – Finalize the RSADM application during server setup on Tomcat using the default Derby database configuration.
  • Import 100 MB model data
  • Re-import 100 MB model data – Re-import the 100 MB of model data after approximately 6% of its resources have been modified.
  • Create a workspace (from a snapshot)
  • Create a snapshot
  • Run a report – Run a report on a model using the UML metrics report template supplied with Design Manager.
  • Run an impact analysis – Run an impact analysis on a UML component using 3 levels of depth and the default configuration and views.

Test environments

Hardware configuration

We tested RSADM on virtual machines hosted on an enterprise-class virtualization platform, the same platform used for the previous report. It consisted of a cluster of 7 IBM System x3850 X5 servers, each with four 8-core x 2-thread 64-bit Intel® Xeon® X7550 CPUs running at 2 GHz (224 CPU cores total) and a total of 4.25 TB of RAM. The cluster was connected to an IBM System Storage DS8800 with a capacity of 100 TB. See Figure 1. The server cluster was lightly loaded, typically running at approximately 20% of capacity.

Figure 1: Virtual machine host platform hardware configuration


Each virtual machine we used was configured with 2 CPUs (2 cores total), 8 GB RAM, and 100 GB of disk storage, running under the VMware® ESX 4.1.0 hypervisor. See Figure 2.

Figure 2: Virtual machine hardware configuration


We did not test RSADM 4.0.1 on dedicated hardware. However, as we reported in the RSADM 4.0 report, we did see performance improvements running RSADM 4.0 on dedicated hardware versus a virtual machine, and we would expect the same to be true for RSADM 4.0.1.

Software deployment and configuration

In order to effectively plan for the deployment of RSADM it is important to have at least a high-level understanding of its major components and their interactions. Figure 3 illustrates these components and interactions, also showing the different types of clients.

Figure 3: Component relationships and interactions in RSADM


There are three general types of components, loosely arranged in Figure 3 into three columns:

  • On the left are the various client types supported.
  • In the center column are the web applications that comprise a minimal RSADM installation. Each is a distinct web application; they can all run on a single application server or on separate application servers (which may themselves run on separate machines).
    • The Design Management Application provides Design Management services to clients via HTTP/HTTPS.
    • The Configuration Application stores and manages configuration and version metadata of resources for Jazz applications (RSADM in this case).
    • The Jazz Team Server provides common services for Jazz applications (e.g., user management and authentication).
  • On the right are the data stores used by the applications.
    • Each application has a database which it uses to store its application-specific data. Depending on scalability needs and the database implementation being used, each database may be local or accessed remotely.
    • The Design Management and Configuration applications each maintain indices for quickly searching for information about the resources they store and manage. The index is a Jena-based Resource Description Framework (RDF) triplestore which resides on the application’s local file system. See the Storage requirements section for more details.

The platform and middleware software we used was as follows.

  • The operating system was Microsoft Windows Server 2008 R2 Enterprise (64-bit).
  • We used IBM DB2® 9.7 Workgroup Server Edition for the databases.
  • We ran the RSADM applications (Design Management and Configuration applications) and the Jazz Team Server (JTS) on Apache Tomcat 7.0 (the version that ships with RSADM).
  • In the server startup script we changed the JVM -Xms and -Xmx settings from the default of 4G to 6G.
  • We ran Tomcat on the IBM Java™ SE Runtime Environment 1.6.0 that is installed with RSADM.
  • We used the Tomcat user registry to store user information.
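For reference, here is a minimal sketch of what that registry looks like. Entries live in Tomcat’s conf/tomcat-users.xml; the account names below are hypothetical, and the role names are those conventionally used by Jazz applications (verify them against your own installation):

    <!-- conf/tomcat-users.xml: minimal sketch of a Tomcat user registry
         for a Jazz-based server (hypothetical accounts) -->
    <tomcat-users>
      <role rolename="JazzAdmins"/>
      <role rolename="JazzUsers"/>
      <user username="admin"   password="CHANGE_ME" roles="JazzAdmins,JazzUsers"/>
      <user username="tester1" password="CHANGE_ME" roles="JazzUsers"/>
    </tomcat-users>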

For the multi-user testing we ran with 50 users and 100 users. For the 50-user testing we ran everything on a single virtual machine as shown in Figure 4. For the 100-user testing we configured a distributed deployment of RSADM using three virtual machines, with the RSADM application (Design Management and Configuration applications) on one, the Jazz Team Server (JTS) on a second and the relational databases (DB2) on a third, as shown in Figure 5.

Figure 4: RSADM 50-user single-machine deployment and software configuration


Figure 5: RSADM 100-user distributed deployment and software configuration


The multi-user tests were driven by our automated performance test framework and suite running on separate virtual machines hosted on the same virtualization platform. For the 50-user tests we ran the test framework and suite on a single virtual machine simulating 50 users. For the 100-user tests we used two virtual machines simulating 50 users each.


Results

Multi-user test results

Table 2 summarizes the mean response times that we recorded running the 50- and 100-user tests on RSADM for 1 hour each on the configurations described above. For comparison, it also includes the response times from the 50-user tests of RSADM 4.0 (which were also run on a single virtual machine configured identically and running on the same virtualization platform). All times are shown in seconds ± one standard deviation (σ). As before, the “Web client” client type refers to a web browser, while “Rich client” refers to Rational Software Architect (RSA) with the Design Management (DM) extensions installed on it.

Table 2: Response times from 50- and 100-user performance tests

| Client type | Use case | RSADM 4.0: 50 users (single machine) | RSADM 4.0.1: 50 users (single machine) | RSADM 4.0.1: 100 users (distributed deployment) |
|---|---|---|---|---|
| Web client | Expand an explorer tree node | 1.95 ± 1.37 | 1.60 ± 0.50 | 2.54 ± 1.14 |
| Web client | Create a resource | 6.38 ± 2.65 | 2.96 ± 0.96 | 5.13 ± 1.96 |
| Web client | Save a resource | 5.86 ± 2.39 | 2.45 ± 0.79 | 4.32 ± 1.69 |
| Web client | Lock a resource | 0.99 ± 2.12 | 0.20 ± 0.15 | 0.37 ± 0.57 |
| Web client | Unlock a resource | 1.16 ± 0.84 | 0.20 ± 0.14 | 0.32 ± 0.25 |
| Web client | Open a resource | 7.10 ± 2.99 | 3.35 ± 1.40 | 7.04 ± 2.18 |
| Web client | Open a diagram | 1.55 ± 2.07 | 0.14 ± 0.14 | 0.26 ± 0.32 |
| Web client | Add a comment | 0.82 ± 0.93 | 0.09 ± 0.09 | 0.13 ± 0.26 |
| Web client | Search all resources | 0.80 ± 0.71 | 0.63 ± 0.35 | 0.95 ± 0.52 |
| Web client | Search diagrams | 3.54 ± 3.37 | 1.52 ± 1.22 | 2.25 ± 1.69 |
| Web client | Get OSLC representation | 0.96 ± 1.64 | 0.38 ± 0.26 | 0.56 ± 0.40 |
| Rich client | Expand an explorer tree node | 4.23 ± 1.17 | 0.32 ± 0.18 | 0.53 ± 0.47 |
| Rich client | Open properties for a UML resource | 0.56 ± 0.95 | 0.61 ± 0.74 | 1.08 ± 1.00 |

Figure 6 shows these results graphically, making it easier to compare them. Each short horizontal bar indicates the mean response time for that use case (in seconds). Each vertical bar indicates the range of ±1 standard deviation from that mean.

Figure 6: Response times from the 50- and 100-user performance tests

Single-user test results

Table 3 summarizes the execution times that we recorded from running the single-user tests of the low-frequency use cases on RSADM 4.0 and 4.0.1. Times are shown in hours, minutes and seconds. For the “Create a workspace” and “Create a snapshot” use cases, three times are reported:

  • Creation – After clicking the “Create” button, the amount of time until the “Create successful” message appears.
  • Construction – After creation, the amount of time that the workspace or snapshot remains “under construction” before it can actually be used.
  • Load – After construction, the amount of time it takes to load the new workspace or snapshot (i.e., in the Explorer).

For the “Run a report” and “Run an impact analysis” use cases, two times are reported:

  • First run – The amount of time it takes for the first run of a report or impact analysis to complete.
  • Subsequent runs – The mean time it takes for subsequent runs of a report or impact analysis to complete.

Table 3: Execution times from the single-user tests

| Use case | Phase | RSADM 4.0 (single machine) | RSADM 4.0.1 (single machine) | RSADM 4.0.1 (distributed deployment) |
|---|---|---|---|---|
| Finalize Design Manager setup | – | 10:13 | 20:15 | – |
| Import 100 MB model data | – | 1:38:00 | 1:34:00 | 2:03:00 |
| Re-import 100 MB model data | – | 38:26 | 40:02 | 46:33 |
| Create a workspace | Creation | 0:01 | 0:01 | – |
| Create a workspace | Construction | 9:40 | 14:15 | – |
| Create a workspace | Load | 0:35 | 0:23 | – |
| Create a snapshot | Creation | 0:01 | 0:01 | – |
| Create a snapshot | Construction | 0:20 | 0:10 | – |
| Create a snapshot | Load | 1:45 | 0:30 | – |
| Run a report | First run | 3:22 | 4:18 | – |
| Run a report | Subsequent runs | 2:30 | 3:52 | – |
| Run an impact analysis | First run | 1:13 | 1:04 | – |
| Run an impact analysis | Subsequent runs | 0:55 | 0:56 | – |

Times are hh:mm:ss; “–” indicates a configuration that was not measured.

Figures 7 and 8 show these results graphically.

Figure 7: Execution times from the “Finalize Design Manager setup”, “Import 100 MB model data”, and “Re-import 100 MB model data” use case tests

Graph of the execution times from the "Finalize Design Manager setup", "Import 100 MB model data", and "Re-import 100 MB model data" use case tests

Figure 8: Execution times from the “Create a workspace”, “Create a snapshot”, “Run a report”, and “Run an impact analysis” use case tests

Graph of the execution times from the "Create a workspace", "Create a snapshot", "Run a report", and "Run an impact analysis" use case tests


Server resource utilization

In addition to the 1-hour multi-user and single-user tests reported above, we ran the 50-user tests continuously for 5 days (on a single dedicated hardware machine) and periodically collected resource utilization data from the JVM running the RSADM server (a minimal sampling sketch follows the list below). We also installed and enabled instrumentation code on the server to collect application-specific information, such as statistics on the number of open HTTP connections. We recorded the following:

  • Heap memory usage – The amount of memory being used for the Java heap as reported by the JVM.
  • CPU utilization – The fraction of available CPU cycles being used as reported by the JVM.
  • Number of open HTTP connections – The number of open connections being used for HTTP requests, as reported by our instrumentation code.
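To illustrate how such figures can be read from a running JVM, here is a minimal sketch using the standard java.lang.management beans (this is not our actual instrumentation code, and summing per-pool peaks only approximates the whole-heap peak):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;

    // Sample current and peak heap usage via the JVM's own management beans,
    // the same kind of figures a periodic monitor would record.
    public class HeapSampler {
        public static void main(String[] args) {
            long used = 0, peak = 0;
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP) {
                    used += pool.getUsage().getUsed();      // current heap usage
                    peak += pool.getPeakUsage().getUsed();  // peak since JVM start
                }
            }
            // Pools do not necessarily peak at the same moment, so the summed
            // peak slightly overstates the true whole-heap peak.
            System.out.printf("heap used: %.2f GB, peak: ~%.2f GB%n",
                    used / 1e9, peak / 1e9);
        }
    }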

Table 4 summarizes the results, which were broadly consistent with those for RSADM 4.0. For each measurement, the maximum is the largest value among our periodic recordings, while the peak is tracked continuously by the JVM (for heap memory usage and CPU utilization) or by our instrumentation code (for open HTTP connections). The peaks exceed the maximums because none of our periodic recordings happened to be taken at a moment when the current value was at its peak.

Table 4: Summary of measurements from 5-day, 50-user test

| Measurement | Mean | Minimum | Maximum | Peak |
|---|---|---|---|---|
| Heap memory usage (GB) | 3.3 ± 0.91 | 2.1 | 4.8 | 6.0 |
| CPU utilization | 2% ± 3% | 0% | 11% | 75% |
| Number of open HTTP connections | 16.7 ± 1.2 | 15 | 19 | 25 |

Figures 9-11 graphically show the data collected over the 5-day run, including both current and peak measurements.

Figure 9: Heap memory usage during 5-day, 50-user test


Figure 10: CPU utilization during 5-day, 50-user test


Figure 11: Open HTTP connections during 5-day, 50-user test



Storage requirements

We did not perform as detailed an analysis of storage requirements for RSADM 4.0.1 as we did for RSADM 4.0 because they should be about the same. However, RSADM 4.0.1 does reduce the number of triples stored in the triplestore by indexing less information from most model resources. For the 100 MB of model data that we imported for these tests, this reduced the number of triples from approximately 8 million in RSADM 4.0 to about 5 million in RSADM 4.0.1, a 37.5% reduction in the size of the index.


Hardware sizing recommendations

We showed that RSADM can support at least 100 users on a distributed deployment in the virtual machine environment that we tested on using Tomcat, DB2, and a local Tomcat user registry (i.e., we did not use LDAP). We also confirmed that RSADM can support at least 50 users on a single virtual machine in the same kind of environment. Depending on your particular user load, you may be able to scale these down or you may need to scale them up. Table 5 summarizes how the virtual machines used here and the dedicated hardware used in the previous article compare with the documented Rational Software Architect Design Manager system requirements. As you can see, our virtual machines met the requirements, but just barely, while the dedicated hardware machine far exceeded them. The performance numbers seem to reflect this. We would not recommend trying to support more than 50 users on a single virtual machine configuration.

Table 5: Test machine configurations and RSADM hardware requirements

| | Virtual machines | Dedicated server | RSADM hardware requirements |
|---|---|---|---|
| CPUs | 2 cores | 24 cores (4 CPUs with 6 cores each) | 2 to 4 cores |
| RAM | 8 GB | 128 GB | 4 GB (8+ GB recommended) |
| Architecture | 64-bit | 64-bit | 64-bit (32-bit supported only for small-scale evaluation or demonstration purposes) |

Application server configurations

Our test environment configurations correspond most closely to the “Departmental topology” described in the RSADM documentation section “Choosing an installation topology”. For our 50-user tests we ran the Design Management and Configuration applications and the JTS on a single server, on the same instance of the Tomcat application server, with local databases. While this was sufficient for our 50-user test load to perform reasonably well, for a larger-scale deployment we recommend a distributed topology such as the one we used for the 100-user tests, in which the Design Management and Configuration applications, the JTS, and the databases were deployed on three separate machines. If you use a distributed topology like this, consider using WebSphere Application Server (WAS). Besides its scalability advantages (support for clustering, high availability, and so on), WAS has several features that make it particularly well suited to hosting RSADM and other Jazz-based Collaborative Lifecycle Management (CLM) applications, including better support for single sign-on (SSO) and reverse proxies. See the Collaborative Lifecycle Management 2012 Sizing Report for more details. In general, the recommendations outlined there apply to RSADM as well, especially if you are integrating RSADM with CLM applications.


Network connectivity

In our test configurations we ran the Design Management, Configuration, and JTS applications with local databases. Deploying one or more of the databases on other machines and having the server(s) access them remotely, as we did for our 100-user tests, can be a good load-balancing strategy. If you do so, however, network connectivity (especially latency) between the Jazz application server(s) and the database server(s) becomes a key factor. The CLM 2012 Sizing Report recommends no more than 1-2 ms of latency and locating the servers on the same subnet; this applies to an RSADM deployment as well.

You could also choose to run the JTS on a different server from RSADM, as we did for our 100-user tests. This would typically be the case in a large RSADM deployment, especially one integrated with CLM applications. You might even deploy the Design Management, Configuration, and JTS applications each on a different server. In that case, again, good network connectivity between each application server and the JTS is important.


Artifact sizing guidelines

Repository database
If you use one of the supported DB2 or Oracle® databases, there is no hard limit on the number of project areas or users you can host. Apache Derby is limited to 10 registered users.
Concurrent users
Unless you are using Apache Derby for the repository database, there is no hard limit on the number of users RSADM will support (other than what is effectively imposed by your hardware and deployment topology). Our performance testing demonstrated that RSADM supports up to 100 concurrent users with reasonable response times on the distributed deployment configuration we tested, and up to 50 concurrent users on the single-machine configuration. With more users than these we would expect to see higher response times.
Model import
There is no hard upper limit on the size of a model project being imported. The model data we imported for the performance tests reported here was approximately 100 MB in size, comprising approximately 25,000 resources and over 800,000 total model elements.

Importing can impact server performance, so if you have large amounts of model data that need to be imported, we recommend using an automated import process scheduled for when there are few active users (e.g., outside of business hours). You should also consider limiting imports to one at a time.


Performance tuning

The performance tuning considerations for RSADM are similar to those for other Jazz-based applications (see, for example, the Rational Requirements Composer (RRC) 4.0 performance and tuning guide).

  • Comply with the Hardware sizing recommendations: 64-bit architecture, 2-4 CPU cores, and at least 8 GB RAM.
  • Configure the number of available TCP/IP ports to be the maximum supported by the host operating system and hardware. See below for instructions.
  • Increase the size of the thread pool used by the server’s web application container from the default. See below for recommendations and instructions.
  • Set the size of the heap the JVM uses appropriately. See below for recommendations and instructions.

Number of available TCP/IP ports

On AIX and Linux the default number of TCP/IP ports available is too low. To increase it, issue the following command:

ulimit -n 65000
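Note that ulimit only affects the current shell session. On Linux, one common way to make the higher limit persistent (an assumption about your setup: PAM limits must be in use, and the service account name below is hypothetical) is to add entries to /etc/security/limits.conf:

    # /etc/security/limits.conf: raise the open-file (socket) limit
    # for the (hypothetical) account that runs the RSADM server
    jazzsrv  soft  nofile  65000
    jazzsrv  hard  nofile  65000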

On Windows the default number of available TCP/IP ports is higher than on AIX or Linux, but should still be increased.

Windows Server 2008

See the instructions at http://support.microsoft.com/kb/929851. Set the start port to 2000 and the number of ports to 63535.
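For example, as a sketch based on the commands in that article (run from an elevated command prompt):

    :: Set the dynamic (ephemeral) TCP port range: start at port 2000, 63535 ports
    netsh int ipv4 set dynamicport tcp start=2000 num=63535
    :: Verify the new range
    netsh int ipv4 show dynamicport tcp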

Windows Server 2003

Open the registry editor. Under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, create a new DWORD value named “MaxUserPort” and set its value to 65000. Reboot the machine for the change to take effect.
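Equivalently, as a sketch, the value can be created from a command prompt (heed the registry warning below first):

    :: Create MaxUserPort (DWORD, decimal 65000) under the Tcpip parameters key
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v MaxUserPort /t REG_DWORD /d 65000 /f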

Note: Be very careful when modifying the Windows registry. Incorrect changes to the registry can cause Windows to behave erratically or become inoperable. Always back up the registry before modifying it.

Thread pool size

The size of the thread pool used by the RSADM server’s web application container should be at least 2.5 times the expected active user load. For example, if you expect to have 100 concurrently active users, set the thread pool size to at least 250.

For Tomcat 7, refer to the “Executor (thread pool)” section in the Apache Tomcat 7 documentation.
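As a sketch, a shared executor sized for a 100-user load (2.5 × 100 = 250 threads) might be declared in Tomcat’s conf/server.xml like this; the names and port are illustrative, not the values RSADM ships with:

    <!-- conf/server.xml: shared thread pool sized at 2.5x the expected active users -->
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
              maxThreads="250" minSpareThreads="25"/>

    <!-- Associate the HTTP connector with the shared pool -->
    <Connector executor="tomcatThreadPool" port="9080" protocol="HTTP/1.1"/>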

For WebSphere Application Server (WAS), open the administrative console and select Servers ⇒ Server Types ⇒ WebSphere application servers ⇒ server name ⇒ Thread pools. See the “Thread pool settings” section in the WAS information center for more details.

JVM heap size

The maximum JVM heap size should be set to 4 GB if the server has at least 6 GB of RAM. If the server has only 4 GB of RAM (not recommended) you may want to experiment with a maximum JVM heap size of 2 GB or 3 GB. However, avoid setting the maximum JVM heap size to more than about 70-80% of the server’s RAM, or you may experience poor performance due to thrashing.

If you are running Tomcat, edit the server startup script and change the values of -Xms and -Xmx to the desired sizes, as in the sketch below. You will need to stop and restart the server for the changes to take effect.
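For example, a sketch of the relevant lines from a Windows server startup script (the exact script name and contents vary by release, so treat this as illustrative):

    rem server.startup.bat excerpt: set the initial and maximum JVM heap to 4 GB
    set JAVA_OPTS=%JAVA_OPTS% -Xms4G
    set JAVA_OPTS=%JAVA_OPTS% -Xmx4G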

If you are running WAS, see the “Java virtual machine settings” section in the WAS information center for instructions specific to your WAS deployment.



Summary and next steps

Rational Software Architect’s Design Management capability integrates architecture and design into Rational’s Solution for Collaborative Lifecycle Management.

Design Management provides many features, including a central design repository, web client access to designs, change control and configuration management for designs, lifecycle traceability, dashboards, searching, impact analysis, document generation, commenting and markup of designs, and design reviews.

The Design Management project on jazz.net offers a 60-day trial so you can download and try it out. The trial includes design artifacts for a sample application called Money That Matters. Additionally, the Library section on jazz.net contains articles, videos, and other resources that you may find helpful in learning more.


Thu, 10 Jan 2013