<div id="header-title" style="padding: 10px 15px; border-width:1px; border-style:solid; border-color:#FFD28C; background-image: url(<nop>https://jazz.net/wiki/pub/Deployment/WebPreferences/TLASE.jpg); background-size: cover; font-size:120%"> <!-- * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup, Main.TWikiDeploymentDatasheetsAuthorsGroup, Main.GrantCovell, --> ---+!! <img src="https://jazz.net/wiki/pub/Deployment/WebPreferences/uc.png" alt="uc.png" width="50" height="50" align="right"> Rational Engineering Lifecycle Manager (!RELM) performance report: 4.0.6 release </br> %DKGRAY% Authors: Main.VivekJaykrishnan</br> Build basis: !RELM 4.0.6 %ENDCOLOR%</div></sticky> <!-- Page contents top of page on right hand side in box --> <sticky><div style="float:right; border-width:1px; border-style:solid; border-color:#DFDFDF; background-color:#F6F6F6; margin:0 0 15px 15px; padding: 0 15px 0 15px;"> %TOC{title="Page contents"}% </div></sticky> <sticky><div style="margin:15px;"></sticky> ---++ Introduction This article presents the results of our performance testing done for the !RELM 4.0.6 release and deployment suggestions derived from the tests. ---+++!! Disclaimer %INCLUDE{"PerformanceDatasheetDisclaimer"}% ---++ Performance goals The goal of this test was to document our testing for !RELM 4.0.6 and provide topology and base configuration suggestions. The topology and hardware used for this test simulated approximately 200 concurrent users at the tested workload. ---++ Topology The test environment used was the standard [[https://jazz.net/wiki/bin/view/Deployment/AlternativeSSEDeploymentTopologies#SSE_E3_Enterprise_distributed_Re][(SSE-E3) Enterprise distributed Red Hat Enterprise Linux (RHEL) with DB2]] topology. The table below shows the hardware specifications used in the test. 
<table class="gray-table"> <tbody> <tr> <th valign="top"> Role</th> <th valign="top"> Model</th> <th valign="top"> Processor Type</th> <th valign="top"> Number of Processors</th> <th valign="top"> Processor Speed</th> <th valign="top"> Memory (GB)</th> <th valign="top"> OS</th> <th valign="top"> Physical / Virtual</th> <th valign="top"> Using Solid State Disks (SSD) YES/NO </th> </tr> <tr> <td>RELM Server 4.0.6</td> <td>IBM System X3650 M2</td> <td>Intel ® Xeon ® X5570</td> <td>16</td> <td>2.99 GHz</td> <td>64 GB</td> <td>RHEL 6.1</td> <td>Physical GB</td> <td>YES</td> </tr> <tr> <td>Lifecycle Query Engine (LQE) Server 4.0.6</td> <td>IBM System X3250 M4</td> <td>Intel ® Xeon ® CPU E3-1270 V2</td> <td>4</td> <td>3.5 GHz</td> <td>32 GB</td> <td>RHEL 6.1</td> <td>Physical GB</td> <td>YES</td> </tr> <tr> <td>Jazz Team Server (JTS) Server 4.0.6</td> <td>IBM System X3250 M4</td> <td>Intel ® Xeon ® CPU E3-1270 V2</td> <td>4</td> <td>3.5 GHz</td> <td>32 GB</td> <td>RHEL 6.1</td> <td>Physical GB</td> <td>NO</td> </tr> <tr> <td>DB2 Server </td> <td>IBM System X3850 </td> <td>Intel ® Xeon</td> <td>4</td> <td>3.17 GHz</td> <td>16 GB</td> <td>RHEL 6.1 </td> <td>Physical GB</td> <td>NO</td> </tr> <tr> <td>Rational Team Concert (RTC) Server 4.0.6</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual GB</td> <td>NO</td> </tr> <tr> <td>Rational Quality Manager (RQM) Server 4.0.6</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual GB</td> <td>NO</td> </tr> <tr> <td> Rhapsody Design Manager (RDM) Server 4.0.6</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual GB</td> <td>NO</td> </tr> <tr> <td>DOORS Server 9.5.2.1</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual GB</td> <td>NO</td> </tr> <tr> <td>DOORS Interop Server 9.5.2.1</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual GB</td> <td>NO</td> </tr> <tr> <td>DOORS Web Access (DWA) Server 9.5.2.1</td> <td>IBM System X3550 M4*</td> <td>Intel ® Xeon CPU ES-26900</td> <td>4</td> <td>2.899 GHz</td> <td>8 GB</td> <td>RHEL 6.1</td> <td>Virtual GB</td> <td>NO</td> </tr> <tr> <td>Rhapsody Client </td> <td>IBM System X3690 X5*</td> <td>Intel ® Xeon CPU E7-8837</td> <td>4</td> <td>2.659 GHz</td> <td>8 GB</td> <td>Windows 2008</td> <td>Virtual GB</td> <td>NO</td> </tr> <tr> <td>DOORS Client </td> <td>IBM System X3690 X5*</td> <td>Intel ® Xeon CPU E7-8837</td> <td>4</td> <td>2.659 GHz</td> <td>8 GB</td> <td>Windows 2008</td> <td>Virtual GB</td> <td>NO</td> </tr> </tbody> </table> __Please note: SSD have been used for the !RELM and LQE servers in this configuration__ ---+++ Network connectivity There are two aspects to network connectivity: connectivity between the !RELM application and database server and connectivity between the !RELM server and the end users. The recommendation for network connectivity in dual-tier configurations of the !RELM environment is to minimize latency between the application server and database server (no more than 1-2 ms) and to have the servers located on the same subnet. 
When using external storage solutions, the recommendation is to minimize connectivity latencies; the optimal configuration is fibre-attached Storage Area Network (SAN) storage.

---+++ Disks

For larger configurations, the use of fibre-attached SAN storage is recommended but not required. For the !RELM and LQE servers, consider using SSD drives if your environment can readily support them. For your database server, be sure to review the recommendations from your database vendor when deciding how to map the !RELM database tables to storage devices.

---++ Methodology

---+++ LQE settings

<verbatim>
ACP Check Disabled: True
Stack Suspension Enabled: True
Suspend Timeout: 4500
Maximum Pending Writebacks: 750
Query History: 1000
</verbatim>

---+++ LQE initial conditions and post-test conditions

<table class="gray-table">
<tbody>
<tr> <th valign="top"> Condition</th> <th valign="top"> Initial</th> <th valign="top"> Post test</th> </tr>
<tr> <td>Size of dataset on disk, on the LQE server</td> <td>2 GB</td> <td>4 GB</td> </tr>
<tr> <td>Number of RDF elements</td> <td>203611</td> <td>255638</td> </tr>
<tr> <td>Number of graphs and average number of triples per graph</td> <td>203611 graphs and 20 triples per graph</td> <td>255649 graphs and 20 triples per graph</td> </tr>
</tbody>
</table>

---+++ Workload characterization

The workload is broken down into three parts:
   1. The !RELM workload
   1. Queries against LQE
   1. Updates into LQE

The !RELM workload was generated by 200 active users performing an average of 1.5 user transactions per second over a 24-hour period. The workload was spread across the various user operations that are possible in !RELM; these operations are described in the Scenario details section below.

The queries against LQE are included in the !RELM workload described above. The updates into LQE come not only from !RELM but also from the other tools of the SSE topology, including RTC, RQM, RDM, and DOORS. Note that the updates from RTC, RQM, RDM, and DOORS into LQE are performed programmatically as a single user in each tool, not by multiple users working through web browsers. In this test, the total update rate into LQE was 2150 updates/hour and the total query rate against LQE was 4000 queries/hour.

---+++ Results

---++++ Average CPU utilization and memory

<table class="gray-table">
<tbody>
<tr> <th valign="top"> Server</th> <th valign="top"> Total Average CPU Usage</th> <th valign="top"> Total Average Memory Usage</th> </tr>
<tr> <td>RELM</td> <td>13%</td> <td>40%</td> </tr>
<tr> <td>LQE</td> <td>20%</td> <td>56%</td> </tr>
<tr> <td>JTS</td> <td>5%</td> <td>22%</td> </tr>
</tbody>
</table>

---++++ Scenario details and average response times

The table below provides test scenario details and the average response time for the various !RELM operations during the test. We used Rational Performance Tester (RPT) to simulate the workload normally generated by end users. All response times listed are those measured by our automated tooling, not by a browser client.
<table class="gray-table"> <tbody> <tr> <th valign="top"> Operation</th> <th valign="top"> Percentage of workload</th> <th valign="top"> Scenario details</th> <th valign="top"> Operations per hour</th> <th valign="top"> Average resonse time in seconds </th> </tr> <tr> <td>Products </td> <td>16% </td> <td>This scenario involved creating a product hierarchy 5 levels deep, Adding 10 artifact and 10 URL links to each level of the hierarchy, Checking in the product hierarchy, Creating a branch of the product hierarchy, Editing and adding properties to the branch and Replacing the branch. </td> <td>240 products/hour; 2400 artifacts/hour; 2400 URLs/hour</td> <td>products 3.0; artifacts 0.5; URLs 0.1 </td> </tr> <tr> <td>Views </td> <td>36% </td> <td>The scenario involved executing a View including 20 queries. </td> <td>180 views/hour</td> <td>2.52</td> </tr> <tr> <td>Impact Analysis </td> <td>20% </td> <td>The scenario involved running a saved Impact Analysis diagram on an artifact resulting in 50 nodes in the impact analysis tree. </td> <td>200 IA diagrams/hour</td> <td>5.33</td> </tr> <tr> <td>Query </td> <td>22% </td> <td>The queries involved executing two queries 1) Find all the artifacts which have changed in the last one hour 2) Find all the artifacts owned by a user. </td> <td>220 queries/hour</td> <td>1.35</td> </tr> <tr> <td>Search </td> <td>6% </td> <td>The search involved Full text search using multi word strings. </td> <td>60 free text searches/hour</td> <td>0.75</td> </tr> </tbody> </table> ---+++ Topology design considerations ---++++ Tomcat and !WebSphere Application Server (WAS) Configurations While !RELM is pre-configured with Tomcat out of the box, which is good for evaluation and smaller deployments, we recommend using !WebSphere Application Server (WAS) for departmental and enterprise deployments. !WAS does provide flexibility for creating multiple profiles, has built-in configurable [[http://publib.boulder.ibm.com/infocenter/clmhelp/v3r0m1/topic/com.ibm.jazz.install.doc/topics/c_reverse_proxy.html][reverse proxy]], and provides full support for Single Sign-On (SSO) whether the applications are all in one server, or distributed amongs multiple machines for better scalability; WAS also offers an out of the box administration user interface for installing; stopping and (re)starting applications, configuring Java virtual machine properties, security, etc. The standard [[https://jazz.net/wiki/bin/view/Deployment/AlternativeSSEDeploymentTopologies#SSE_E3_Enterprise_distributed_Re][(SSE-E3) Enterprise distributed Red Hat Enterprise Linux (RHEL) with DB2]] topology has !RELM, LQE and JTS configured on WAS. #AppendixA ---++ Appendix A The following WAS settings were configured during the tests. <table class="gray-table"> <tbody> <tr> <th valign="top"> Server</th> <th valign="top"> JVM Heap Memory (min)</th> <th valign="top"> JVM Heap Memory (max)</th> <th valign="top"> Default threads</th> <th valign="top"> Web Container Threads</th> <th valign="top"> Connection Pool</th> </tr> <tr> <td>RELM </td> <td>4 GB </td> <td>24 GB </td> <td>400 </td> <td>400 </td> <td>400 </td> </tr> <tr> <td>LQE </td> <td>4 GB </td> <td>12 GB </td> <td>400 </td> <td>400 </td> <td>200 </td> </tr> <tr> <td>JTS </td> <td>4 GB </td> <td>12 GB </td> <td>400 </td> <td>400 </td> <td>200 </td> </tr> </tbody> </table> ---+++ ---++ Appendix B - References ---++ Appendix C - History ---++++!! About the authors -------------------- ---+++++!! 
---++ Appendix B - References

---++ Appendix C - History

---++++!! About the authors

--------------------

---+++++!! Questions and comments:
   * What other performance information would you like to see here?
   * Do you have performance scenarios to share?
   * Do you have scenarios that are not addressed in documentation?
   * Where are you having problems in performance?

%COMMENT{type="below" target="PerformanceDatasheetReaderComments" button="Submit"}%
%INCLUDE{"PerformanceDatasheetReaderComments"}%

<sticky></div></sticky>