<div id="header-title" style="padding: 10px 15px; border-width:1px; border-style:solid; border-color:#FFD28C; background-image: url(<nop>https://jazz.net/wiki/pub/Deployment/WebPreferences/TLASE.jpg); background-size: cover; font-size:120%"> <!-- * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup, Main.TWikiDeploymentDatasheetsAuthorsGroup, Main.GrantCovell, Main.VivekJaykrishnan, Main.AanjanHari --> ---+!! <img src="https://jazz.net/wiki/pub/Deployment/WebPreferences/new.png" alt="new.png" width="50" height="50" align="right">Rational Engineering Lifecycle Manager (!RELM) performance report 5.0</br> %DKGRAY% Authors: Main.AanjanHari, Main.GrantCovell</br> Build basis: !RELM 5.0 </br> Date: August 6, 2014 %ENDCOLOR%</div></sticky> <!-- Page contents top of page on right hand side in box --> <sticky><div style="float:right; border-width:1px; border-style:solid; border-color:#DFDFDF; background-color:#F6F6F6; margin:0 0 15px 15px; padding: 0 15px 0 15px;"> %TOC{title="Page contents"}% </div></sticky> <sticky><div style="margin:15px;"></sticky> ---++ Introduction This article presents the results of our performance testing done for the !RELM 5.0 release and deployment suggestions derived from the tests. ---+++!! Disclaimer %INCLUDE{"PerformanceDatasheetDisclaimer"}% ---++ Performance goals The goal of this test was to document our testing for !RELM 5.0 and provide topology and base configuration suggestions. The topology and hardware used for this test simulated approximately 250 concurrent users at the tested workload. ---++ Topology The test environment used was the standard [[https://jazz.net/wiki/bin/view/Deployment/AlternativeSSEDeploymentTopologies5#SSE_E3_Enterprise_distributed_Re][(SSE-E3) Enterprise distributed Red Hat Enterprise Linux (RHEL) with DB2]] topology. The table below shows the hardware specifications used in the test. 
<table class="gray-table"> <tbody> <tr> <th valign="top"> Role</th> <th valign="top"> Model</th> <th valign="top"> Processor Type</th> <th valign="top"> Number of Processors</th> <th valign="top"> Processor Speed</th> <th valign="top"> Memory (GB)</th> <th valign="top"> OS</th> <th valign="top"> Physical / Virtual</th> <th valign="top"> Using Solid State Disks (SSD) YES/NO </th> </tr> <tr> <td>RELM Server 5.0</td> <td>IBM System X3650 M2</td> <td>Intel ® Xeon ® X5570</td> <td>16</td> <td>2.99 GHz</td> <td>64 GB</td> <td>RHEL 6.1</td> <td>Physical</td> <td>YES</td> </tr> <tr> <td>Lifecycle Query Engine (LQE) Server 5.0</td> <td>IBM System X3250 M4</td> <td>Intel ® Xeon ® CPU E3-1270 V2</td> <td>4</td> <td>3.5 GHz</td> <td>32 GB</td> <td>RHEL 6.1</td> <td>Physical</td> <td>YES</td> </tr> <tr> <td>Jazz Team Server (JTS) Server 5.0</td> <td>IBM System X3250 M4</td> <td>Intel ® Xeon ® CPU E3-1270 V2</td> <td>4</td> <td>3.5 GHz</td> <td>32 GB</td> <td>RHEL 6.1</td> <td>Physical</td> <td>NO</td> </tr> <tr> <td>DB2 Server </td> <td>IBM System X3850 </td> <td>Intel ® Xeon</td> <td>4</td> <td>3.17 GHz</td> <td>16 GB</td> <td>RHEL 6.1 </td> <td>Physical</td> <td>NO</td> </tr> <tr> <td>Rational Team Concert (RTC) Server 5.0</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual</td> <td>NO</td> </tr> <tr> <td>Rational Quality Manager (RQM) Server 5.0</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual</td> <td>NO</td> </tr> <tr> <td> Rhapsody Design Manager (RDM) Server 5.0</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual</td> <td>NO</td> </tr> <tr> <td>DOORS Server 9.6.0.0</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual</td> <td>NO</td> </tr> <tr> <td>DOORS Interop Server 9.6.0.0</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual</td> <td>NO</td> </tr> <tr> <td>DOORS Web Access (DWA) Server 9.6.0.0</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual</td> <td>NO</td> </tr> <tr> <td>Rhapsody Client </td> <td>IBM System X3690 X5*</td> <td>Intel ® Xeon CPU E7-8837</td> <td>4</td> <td>2.659 GHz</td> <td>8 GB</td> <td>Windows 2008</td> <td>Virtual</td> <td>NO</td> </tr> <tr> <td>DOORS Client </td> <td>IBM System X3690 X5*</td> <td>Intel ® Xeon CPU E7-8837</td> <td>4</td> <td>2.659 GHz</td> <td>8 GB</td> <td>Windows 2008</td> <td>Virtual</td> <td>NO</td> </tr> </tbody> </table> __Please note: SSD have been used for the !RELM and LQE servers in this configuration__ ---+++ Network connectivity There are two aspects to network connectivity: connectivity between the !RELM application and database server and connectivity between the !RELM server and the end users. The recommendation for network connectivity in dual-tier configurations of the !RELM environment is to minimize latency between the application server and database server (no more than 1-2 ms) and to have the servers located on the same subnet. 
When using external storage solutions, the recommendation is to minimize connectivity latencies, with the optimal configuration being fibre-attached Storage Area Network (SAN) storage.

---+++ Disks

For larger configurations, fibre-attached SAN storage is recommended but not required. For the !RELM and LQE servers, consider using SSD drives if your environment can readily support them. For your database server, be sure to review your database vendor's recommendations when considering how to map the !RELM database tables to storage devices.

---++ Methodology

---+++ LQE settings

<verbatim>
ACP Check Disabled: True
Stack Suspension Enabled: True
Suspend Timeout: 4500
Maximum Pending Writebacks: 750
Query History: 1000
Clustered LQE environment using Nginx server as a proxy, Number of Nodes: 2
</verbatim>

---+++ LQE pre-test conditions and post-test conditions

<table class="gray-table">
<tbody>
<tr> <th valign="top"> Condition</th> <th valign="top"> Pre-test</th> <th valign="top"> Post-test</th> </tr>
<tr> <td>Size of dataset on disk, on the LQE server</td> <td>2 GB</td> <td>4 GB</td> </tr>
<tr> <td>Number of RDF elements</td> <td>203611</td> <td>227019</td> </tr>
<tr> <td>Number of graphs and average number of triples per graph</td> <td>203611 graphs, 20 triples per graph</td> <td>227019 graphs, 20 triples per graph</td> </tr>
</tbody>
</table>

---+++ Workload characterization

The workload is broken down into three parts:
   1. The !RELM workload
   1. Queries against LQE
   1. Updates into LQE

The !RELM workload was generated by 250 active users performing an average of 1.5 user transactions per second over a 10-hour period. The workload was spread across the various user operations that are possible in !RELM; these operations are elaborated in the [[#ScenarioDetails][Scenario details and average response times]] section below.

The queries against LQE are included in the !RELM workload described above. The updates into LQE came not only from !RELM but also from the other tools of the SSE solution, including RTC, RQM, RDM and DOORS. Please note that the updates from RTC, RQM, RDM and DOORS into LQE were performed programmatically as a single user in each of these tools, not by multiple users working in web browsers. In this test, LQE received a total of 2150 updates per hour and 4000 queries per hour.

---+++ Results

---++++ Average CPU utilization and memory

<table class="gray-table">
<tbody>
<tr> <th valign="top"> Server</th> <th valign="top"> Total average CPU usage (%)</th> <th valign="top"> Total average memory usage (%)</th> </tr>
<tr> <td>RELM</td> <td>7%</td> <td>26%</td> </tr>
<tr> <td>LQE</td> <td>10%</td> <td>53%</td> </tr>
<tr> <td>JTS</td> <td>3%</td> <td>19%</td> </tr>
</tbody>
</table>

#ScenarioDetails
---++++ Scenario details and average response times

The table below provides test scenario details and the average response time for the various !RELM operations during the test. We used Rational Performance Tester (RPT) to simulate a workload normally generated by end users. All response times listed are those measured by our automated tooling, not by a browser client.
<table class="gray-table"> <tbody> <tr> <th valign="top"> Operation</th> <th valign="top"> Percentage of workload</th> <th valign="top"> Scenario details</th> <th valign="top"> Operations per hour</th> <th valign="top"> Average resonse time in seconds </th> </tr> <tr> <td>Products </td> <td>16% </td> <td>This scenario involved creating a product hierarchy 5 levels deep, Adding 10 artifact and 10 URL links to each level of the hierarchy, Checking in the product hierarchy, Creating a branch of the product hierarchy, Editing and adding properties to the branch and Replacing the branch. </td> <td>240 products/hour; 2400 artifacts/hour; 2400 URLs/hour</td> <td>products 4.2; artifacts 8.16; URLs 0.89 </td> </tr> <tr> <td>Views </td> <td>36% </td> <td>The scenario involved executing a View including 20 queries. </td> <td>180 views/hour</td> <td>1.40</td> </tr> <tr> <td>Impact Analysis </td> <td>20% </td> <td>The scenario involved running a saved Impact Analysis diagram on an artifact resulting in 50 nodes in the impact analysis tree. </td> <td>200 IA diagrams/hour</td> <td>3.66</td> </tr> <tr> <td>Query </td> <td>22% </td> <td>The queries involved executing two queries 1) Find all the artifacts which have changed in the last one hour 2) Find all the artifacts owned by a user. </td> <td>220 queries/hour</td> <td>4.64</td> </tr> <tr> <td>Idle Users </td> <td>6% </td> <td>Users does login operations and be idle</td> <td>login operation</td> <td></td> </tr> </tbody> </table> #AppendixA ---++ Appendix A The following WAS settings were configured during the tests. <table class="gray-table"> <tbody> <tr> <th valign="top"> Server</th> <th valign="top"> JVM Heap Memory (min)</th> <th valign="top"> JVM Heap Memory (max)</th> <th valign="top"> Default threads</th> <th valign="top"> Web Container Threads</th> <th valign="top"> Connection Pool</th> </tr> <tr> <td>RELM </td> <td>4 GB </td> <td>24 GB </td> <td>400 </td> <td>400 </td> <td>400 </td> </tr> <tr> <td>LQE </td> <td>4 GB </td> <td>12 GB </td> <td>400 </td> <td>400 </td> <td>200 </td> </tr> <tr> <td>JTS </td> <td>4 GB </td> <td>12 GB </td> <td>400 </td> <td>400 </td> <td>200 </td> </tr> </tbody> </table> #AppendixB ---++ Appendix B: Design considerations ---+++ Tomcat and !WebSphere Application Server (WAS) Configurations While !RELM is pre-configured with Tomcat out of the box, which is good for evaluation and smaller deployments, we recommend using !WebSphere Application Server (WAS) for departmental and enterprise deployments. !WAS does provide flexibility for creating multiple profiles, has built-in configurable [[http://publib.boulder.ibm.com/infocenter/clmhelp/v3r0m1/topic/com.ibm.jazz.install.doc/topics/c_reverse_proxy.html][reverse proxy]], and provides full support for Single Sign-On (SSO) whether the applications are all in one server, or distributed amongs multiple machines for better scalability; WAS also offers an out of the box administration user interface for installing; stopping and (re)starting applications, configuring Java virtual machine properties, security, etc. The standard [[https://jazz.net/wiki/bin/view/Deployment/AlternativeSSEDeploymentTopologies5#SSE_E3_Enterprise_distributed_Re][(SSE-E3) Enterprise distributed Red Hat Enterprise Linux (RHEL) with DB2]] topology has !RELM, LQE and JTS configured on WAS. ---+++RAM ---++++ Heap memory and native memory The Lifecycle Query Engine (LQE) application makes use of two kinds of memory: *heap memory* and *native memory*. 
---++++ Estimating the size of the dataset

As discussed above, determining the amount of RAM to allocate for LQE on a system requires estimating the size of the dataset. The size of the dataset depends on the amount of data indexed by LQE. For example, if an RTC work items TRS (Tracked Resource Set) was indexed by LQE, the =indexTdb= folder may take anywhere from 1.5 GB to 3 GB of disk space, depending on the amount of content in each work item, and the =textIndex= folder may take between 500 MB and 1 GB. Thus the total size of the dataset (size of =indexTdb= + size of =textIndex=) could range from 2 GB to 4 GB on disk.

---++++ Configuring JVM heap

   * The JVM minimum heap size should be set to 1 GB using the =-Xms1G= JVM property.
   * The JVM maximum heap size can be configured by using the =-Xmx= JVM property. For example, if the estimated heap size (as determined based on the guidance above) is 4 GB, it can be configured as =-Xmx4G=.

---+++ Query performance and scalability

Query performance and scalability depend upon many factors: CPU processing capability, RAM capacity, hard drive read/write speeds, network latency, indexed dataset size, data complexity, and query optimization.
   * Increased CPU capacity can increase query execution performance.
   * Increased RAM capacity can improve in-memory computations and can prevent potential memory constraints.
   * Solid State Drives can reduce read and write times.
   * Reduced network latency can improve ACP check response times.
   * Indexed dataset size can make a significant difference, usually due to the increased number of nodes to traverse in queries.
   * All queries should be optimized by reducing the result set as early as possible in the query structure. Query response times should target less than 100 milliseconds (ms) for optimum scaling in larger datasets. Queries can be restructured for optimum efficiency by understanding the data structures.
   * Having multiple nodes, by setting up a clustered LQE environment with an Nginx server as a proxy, can increase query execution performance; a configuration sketch follows this list.
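As an illustration only: the exact proxy configuration used in this test was not published, but a two-node LQE cluster behind Nginx could be fronted with an =upstream= block along these lines (host names, port, and certificate paths are placeholders):

<verbatim>
# Hypothetical nginx.conf fragment for proxying a two-node LQE cluster.
# Host names, port, and certificate paths are placeholders for your environment.
upstream lqe_cluster {
    server lqe-node1.example.com:9443;
    server lqe-node2.example.com:9443;
}

server {
    listen 443 ssl;
    server_name lqe.example.com;
    ssl_certificate     /etc/nginx/ssl/lqe.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/lqe.key;   # placeholder path

    location / {
        proxy_pass https://lqe_cluster;
        proxy_set_header Host $host;
    }
}
</verbatim>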
---+++ Indexing performance

Indexing performance depends on several factors: CPU processing capability, hard drive read/write speeds, and network latency. LQE is a highly concurrent web application and makes optimal use of multi-core CPUs for parallel processing. Since the index is written to a hard drive, the hard drive should be capable of fast read/write speeds. When LQE indexes a data provider, there should be sufficient networking capability for LQE and the data provider to send and receive HTTP messages.

For best indexing performance, it is recommended that:
   * LQE be deployed on servers with CPUs that have clock speeds greater than 2 GHz (faster CPUs increase indexing performance)
   * LQE be deployed on servers with multi-core CPUs to increase the capacity for concurrent processing
   * LQE be deployed on servers with Solid State Drives (SSDs) when possible (SSDs can roughly double indexing performance)
   * LQE be deployed on the same network subsystems as its data providers for faster indexing performance

---++++!! About the authors

Aanjan Hari is Test Lead for !RELM and VVC, working from India Software Labs, Bangalore. He can be reached at aanjanhari@in.ibm.com.

--------------------

---+++++!! Questions and comments:
   * What other performance information would you like to see here?
   * Do you have performance scenarios to share?
   * Do you have scenarios that are not addressed in documentation?
   * Where are you having problems in performance?

%COMMENT{type="below" target="PerformanceDatasheetReaderComments" button="Submit"}%

%INCLUDE{"PerformanceDatasheetReaderComments"}%

<sticky></div></sticky>