r3 - 2016-06-15 - 18:09:20 - VaughnRokosz
<sticky><div id="header-title" style="padding: 10px 15px; border-width:1px; border-style:solid; border-color:#FFD28C; background-image: url(<nop>https://jazz.net/wiki/pub/Deployment/WebPreferences/TLASE.jpg); background-size: cover; font-size:120%"> <!-- * Set ALLOWTOPICCHANGE = Main.TWikiDeploymentDatasheetsAuthorsGroup, Main.TWikiAdminGroup, Main.GrantCovell --> ---+!! <img src="https://jazz.net/wiki/pub/Deployment/WebPreferences/new.png" alt="new.png" width="50" height="50" align="right">Collaborative Lifecycle Management performance report: Rational Quality Manager 6.0 release<br/> %DKGRAY% Authors: [[Main.MarcoAraujo][Marco Antonio Ferreira de Araujo]], [[Main.AlfredoBittencourt][Alfredo Bittencourt]], [[Main.VaughnRokosz][Vaughn Rokosz]]<br/> Last updated: Jan 18, 2016<br/> Build basis: Rational Quality Manager 6.0 %ENDCOLOR%</div></sticky> <!-- Page contents top of page on right hand side in box --> <sticky><div style="float:right; border-width:1px; border-style:solid; border-color:#DFDFDF; background-color:#F6F6F6; margin:0 0 15px 15px; padding: 0 15px 0 15px;"> %TOC{title="Page contents"}% </div></sticky> <sticky><div style="margin:15px;"></sticky>

---++Introduction

This article examines the performance of Rational Quality Manager on small, virtual systems. It should not be used for sizing; please refer to the CLM Sizing Strategy for sizing guidance. <br> The test methodology involves:
   * Collecting standard 20-minute performance test data at user stages ranging from 100 to 1,000 concurrent users, in increments of 100 users.
   * Executing all use cases in a single test suite to determine the limit of concurrent users under this standard workload.
   * Executing each use case individually to determine the limit of concurrent users for each scenario.
   * Using the same hardware and installation topology throughout the tests in order to evaluate the workload associated with each use case, and how the use cases interact from a performance perspective.
Further details about the configuration parameters can be found in *[[#Appendix_A][Appendix A]]*.
   * Detailed information is shown only up to the limiting user stage; data collected after the server is overloaded is not reliable, because the server simply becomes unresponsive.

---+++!! Disclaimer

%INCLUDE{"PerformanceDatasheetDisclaimerEndToEnd"}% <br/>

---++ Methodology

Rational Performance Tester (RPT) was used to simulate the workload created through the web client. Each user completed a random use case from a set of available use cases. A Rational Performance Tester script was created for each use case. The scripts are organized by pages, and each page represents a user action. The workload is role-based: the sequences of actions defined for each role are separated into individual user groups within an RPT schedule. A new user starts its actions every 5 seconds until the number of users for that stage is reached. A five-minute settle time then prevents the workload generated by the user ramp-up from interfering with the stage results. After the settle time, the user stage runs for 20 minutes, during which each user performs a use case at least twice, at random intervals. When the stage completes, RPT adds new users to move to the next stage. The limiting stage is the last stage at which the servers show acceptable average response times, defined for this test as 2 seconds.
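The staged schedule and limiting-stage rule described above can be sketched as follows. This is an illustrative model only, not RPT configuration; the constants mirror the description above, and the sample response times are hypothetical stand-ins.

```python
# Sketch of the staged load model described above (illustrative, not RPT code):
# users ramp in at one every 5 seconds, each stage settles for 5 minutes and
# then runs for 20 minutes; the limiting stage is the last one whose average
# page response time stays at or below the 2-second threshold.

RAMP_INTERVAL_S = 5    # one new virtual user every 5 seconds
SETTLE_S = 5 * 60      # settle time before stage measurement starts
STAGE_S = 20 * 60      # measured duration of each user stage
THRESHOLD_MS = 2000    # acceptable average page response time

def ramp_time_s(new_users):
    """Seconds needed to add `new_users` virtual users at the ramp rate."""
    return new_users * RAMP_INTERVAL_S

def limiting_stage(avg_response_ms_by_stage):
    """Return the last user stage whose average response time is acceptable.

    `avg_response_ms_by_stage` maps user count -> average response time (ms).
    Returns None if no stage met the threshold.
    """
    acceptable = [users for users, ms in sorted(avg_response_ms_by_stage.items())
                  if ms <= THRESHOLD_MS]
    return acceptable[-1] if acceptable else None

# Hypothetical per-stage averages shaped like the full-workload result below:
observed = {500: 700.0, 600: 853.26, 700: 2656.64}
print(ramp_time_s(100))          # 500 seconds to add 100 users
print(limiting_stage(observed))  # 600
```

Under this model, each 100-user increment takes 500 seconds of ramp plus the settle time before its 20-minute measurement window begins.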
The following graph illustrates the user load over time during the test execution: <img src="%ATTACHURLPATH%/RQM60ResultGraph_1.png" width="660" />

---+++User roles

<table class="gray-table"> <tbody> <tr> <th valign="top">User role</th> <th valign="top">% of Total</th> <th valign="top">Related Actions</th> </tr> <tr> <td valign="top" ><p>QE Manager</p></td> <td valign="top" ><p>8</p></td> <td valign="top" ><p>Test plan create, Browse test plan and test case, Browse test script, Simple test plan copy, Defect search, View dashboard</p></td> </tr> <tr> <td valign="top" ><p>Test Lead</p></td> <td valign="top" ><p>19</p></td> <td valign="top" ><p>Edit Test Environments, Edit test plan, Create test case, Bulk edit of test cases, Full text search, Browse test script, Test Execution, Defect search</p></td> </tr> <tr> <td valign="top" ><p>Tester</p></td> <td valign="top" ><p>68</p></td> <td valign="top" ><p>Defect create, Defect modify, Defect search, Edit test case, Create test script, Edit test script, Test Execution, Browse test execution record</p></td> </tr> <tr> <td valign="top" ><p>Dashboard Viewer</p></td> <td valign="top" ><p>5</p></td> <td valign="top" ><p>View dashboard (with login and logout)</p></td> </tr> </tbody> </table> <br/>

---+++Page performance

Page performance is measured as the mean (average) response time in the result data. For the majority of the pages under test, there is little variation between runs, and the mean values are close to the median of the sample for the load.

---+++Topology

Figure 1 illustrates the topology under test, which is based on [[RecommendedCLMDeploymentTopologies#CLM_E1_Enterprise_Distributed_Li][Standard Topology (E1) Enterprise - Distributed / Linux / DB2]]. <img src="%ATTACHURLPATH%/RQM60ResultGraph_2.png" width="660" /> The specifications of the machines under test are listed in the table below.
Server tuning details are listed in *[[#Appendix_A][Appendix A]]*. <table class="gray-table"> <tbody> <tr> <th valign="top"><p>Function</p></th> <th valign="top"><p>Number of Machines</p></th> <th valign="top"><p>Machine Type</p></th> <th valign="top"><p>CPU / Machine</p></th> <th valign="top"><p># of CPU vCores/Machine</p></th> <th valign="top"><p>Memory/Machine</p></th> <th valign="top"><p>Disk</p></th> <th valign="top"><p>Disk capacity</p></th> <th valign="top"><p>Network interface</p></th> <th valign="top"><p>OS and Version</p></th> </tr> <tr> <td valign="top" ><p>Proxy Server (IBM HTTP Server and WebSphere Plugin)</p></td> <td valign="top" ><p>1</p></td> <td valign="top" ><p>IBM System x3550 M4</p></td> <td valign="top" ><p>4 x Intel Xeon <br /> E5-2640 2.5GHz</p></td> <td valign="top" ><p>4</p></td> <td valign="top" ><p>4GB</p></td> <td valign="top" ><p>RAID 1</p> <p>SAS Disk x 2</p></td> <td valign="top" ><p>299GB</p></td> <td valign="top" ><p>Gigabit Ethernet</p></td> <td valign="top" ><p>Red Hat Enterprise Linux Server release 6.5</p></td> </tr> <tr> <td valign="top" ><p>JTS Server</p></td> <td valign="top" ><p>1</p></td> <td valign="top" ><p>IBM System x3550 M4</p></td> <td valign="top" ><p>4 x Intel Xeon <br /> E5-2640 2.5GHz</p></td> <td valign="top" ><p>4</p></td> <td valign="top" ><p>8GB</p></td> <td valign="top" ><p>RAID 5</p> <p>SAS Disk x 2</p></td> <td valign="top" ><p>897GB</p></td> <td valign="top" ><p>Gigabit Ethernet</p></td> <td valign="top" ><p>Red Hat Enterprise Linux Server release 6.5</p></td> </tr> <tr> <td valign="top" ><p>QM Server</p></td> <td valign="top" ><p>1</p></td> <td valign="top" ><p>IBM System x3550 M4</p></td> <td valign="top" ><p>4 x Intel Xeon <br /> E5-2640 2.5GHz</p></td> <td valign="top" ><p>4</p></td> <td valign="top" ><p>8GB</p></td> <td valign="top" ><p>RAID 5</p> <p>SAS Disk x 2</p></td> <td valign="top" ><p>897GB</p></td> <td valign="top" ><p>Gigabit Ethernet</p></td> <td valign="top" ><p>Red Hat
Enterprise Linux Server release 6.5</p></td> </tr> <tr> <td valign="top" ><p>Database Server</p></td> <td valign="top" ><p>1</p></td> <td valign="top" ><p>IBM System x3650 M4</p></td> <td valign="top" ><p>4 x Intel Xeon </p> <p>E5-2640 2.5GHz</p></td> <td valign="top" ><p>4</p></td> <td valign="top" ><p>8GB</p></td> <td valign="top" ><p>RAID 5</p> <p>SAS Disk x 2</p></td> <td valign="top" ><p>2.4TB</p></td> <td valign="top" ><p>Gigabit Ethernet</p></td> <td valign="top" ><p>Red Hat Enterprise Linux Server release 6.1</p></td> </tr> <tr> <td valign="top" ><p>RPT workbench</p></td> <td valign="top" ><p>1</p></td> <td valign="top" ><p>IBM System x3550 M4</p></td> <td valign="top" ><p>4 x Intel Xeon </p> <p>E5-2640 2.5GHz</p></td> <td valign="top" ><p>4</p></td> <td valign="top" ><p>8GB</p></td> <td valign="top" ><p>RAID 5</p> <p>SAS Disk x 2</p></td> <td valign="top" ><p>30GB</p></td> <td valign="top" ><p>Gigabit Ethernet</p></td> <td valign="top" ><p>Red Hat Enterprise Linux Server release 6.4</p></td> </tr> <tr> <td valign="top" ><p>RPT Agents</p></td> <td valign="top" ><p>3</p></td> <td valign="top" ><p>IBM System x3550 M4</p></td> <td valign="top" ><p>4 x Intel Xeon </p> <p>E5-2640 2.5GHz</p></td> <td valign="top" ><p>4</p></td> <td valign="top" ><p>8GB</p></td> <td valign="top" ><p>N/A</p></td> <td valign="top" ><p>30GB</p></td> <td valign="top" ><p>Gigabit Ethernet</p></td> <td valign="top" ><p>Red Hat Enterprise Linux Server release 6.5</p></td> </tr> <tr> <td valign="top" ><p>Network switches</p></td> <td valign="top" ><p>N/A</p></td> <td valign="top" ><p>Cisco 2960G-24TC-L</p></td> <td valign="top" ><p>N/A</p></td> <td valign="top" ><p>N/A</p></td> <td valign="top" ><p>N/A</p></td> <td valign="top" ><p>N/A</p></td> <td valign="top" ><p>N/A</p></td> <td valign="top" ><p>Gigabit Ethernet</p></td> <td valign="top" ><p>24 Ethernet 10/100/1000 ports</p></td> </tr> </tbody> </table> *N/A:* Not applicable. 
vCores = cores with hyperthreading

---+++Network connectivity

All server machines and test clients are located on the same subnet. The LAN has 1000 Mbps of maximum bandwidth and less than 0.3 ms ping latency.

---+++Data volume and shape

The artifacts were contained in one large project, for a total of 579,142 artifacts. The repository contained the following data:
   * 50 test plans
   * 30,000 test scripts
   * 30,000 test cases
   * 120,000 test case execution records
   * 360,000 test case results
   * 3,000 test suites
   * 5,000 work items (defects)
   * 200 test environments
   * 600 test phases
   * 30 build definitions
   * 6,262 execution sequences
   * 3,000 test suite execution records
   * 15,000 test suite execution results
   * 6,000 build records
   * Database size = 15 GB
   * QM index size = 1.3 GB
<br/>

---++Full Workload Performance Result

---+++Workload characterization

*Test Cases* <table class="gray-table"> <tbody> <tr> <th valign="top">User Role</th> <th valign="top">Percentage within user role</th> <th valign="top">Sequence of Operations</th> </tr> <tr> <td rowspan="6" valign="top" ><p>QE Manager</p></td> <td valign="top" ><p>1</p></td> <td valign="top" ><p>Test plan create: user creates a test plan, then adds a description, business objectives, test objectives, 2 test schedules, test estimate quality objectives, and entry and exit criteria.</p></td> </tr> <tr> <td valign="top" ><p>26</p></td> <td valign="top" ><p>Browse test plans and test cases: user browses assets by selecting View Test Plans, then configures the View Builder for a name search; opens the test plan found, reviews various sections, then closes it.
The user then searches for a test case by name, opens the test case found, reviews various sections, and closes it.</p></td> </tr> <tr> <td valign="top" ><p>26</p></td> <td valign="top" ><p>Browse test script: user searches for a test script by name, opens it, reviews it, then closes it.</p></td> </tr> <tr> <td valign="top" ><p>1</p></td> <td valign="top" ><p>Simple test plan copy: user searches for a test plan by name, selects one, then makes a copy.</p></td> </tr> <tr> <td valign="top" ><p>23</p></td> <td valign="top" ><p>Defect search: user searches for a specific defect by number, reviews the defect (pause), then closes it.</p></td> </tr> <tr> <td valign="top" ><p>20</p></td> <td valign="top" ><p>View Dashboard: user views the dashboard.</p></td> </tr> <tr> <td rowspan="8" valign="top" ><p>Test Lead</p></td> <td valign="top" ><p>8</p></td> <td valign="top" ><p>Edit Test Environment: user lists all test environments, then selects one of the environments and modifies it.</p></td> </tr> <tr> <td valign="top" ><p>15</p></td> <td valign="top" ><p>Edit test plan: user lists all test plans; from the query result, opens a test plan for editing, adds a test case to the test plan, edits a few other sections, and then saves the test plan.</p></td> </tr> <tr> <td valign="top" ><p>4</p></td> <td valign="top" ><p>Create test case: user creates a test case by opening the Create Test Case page, entering data for a new test case, and then saving it.</p></td> </tr> <tr> <td valign="top" ><p>1</p></td> <td valign="top" ><p>Bulk edit of test cases: user searches for test cases by root name and edits all of those found, changing the owner.</p></td> </tr> <tr> <td valign="top" ><p>3</p></td> <td valign="top" ><p>Full text search: user performs a full-text search of all assets in the repository using a root name, then opens one of the items found.</p></td> </tr> <tr> <td valign="top" ><p>32</p></td> <td valign="top" ><p>Browse test script: user searches for a test script by name, opens it, reviews it, then closes it.</p></td>
</tr> <tr> <td valign="top" ><p>26</p></td> <td valign="top" ><p>Test Execution: user selects View Test Execution Records, searches by name, starts execution, enters a pass/fail verdict, reviews results, sets points, then saves.</p></td> </tr> <tr> <td valign="top" ><p>11</p></td> <td valign="top" ><p>Defect search: user searches for a specific defect by number, reviews the defect (pause), then closes it.</p></td> </tr> <tr> <td rowspan="8" valign="top" ><p>Tester</p></td> <td valign="top" ><p>8</p></td> <td valign="top" ><p>Defect create: user creates a defect by opening the Create Defect page, entering data for a new defect, and then saving it.</p></td> </tr> <tr> <td valign="top" ><p>5</p></td> <td valign="top" ><p>Defect modify: user searches for a specific defect by number, modifies it, then saves it.</p></td> </tr> <tr> <td valign="top" ><p>14</p></td> <td valign="top" ><p>Defect search: user searches for a specific defect by number, reviews the defect (pause), then closes it.</p></td> </tr> <tr> <td valign="top" ><p>6</p></td> <td valign="top" ><p>Edit test case: user searches for a test case by name; the test case is opened in the editor and a test script is added to it (the user clicks Next a few times, exercising the server-side paging feature, before selecting the test script); the test case is then saved.</p></td> </tr> <tr> <td valign="top" ><p>4</p></td> <td valign="top" ><p>Create test script: user creates a test script by selecting the Create Test Script page, entering data for a new test script, and then saving it.</p></td> </tr> <tr> <td valign="top" ><p>8</p></td> <td valign="top" ><p>Edit test script: user selects a test script by name.
The test script is then opened for editing, modified, and saved.</p></td> </tr> <tr> <td valign="top" ><p>42</p></td> <td valign="top" ><p>Test Execution: user selects View Test Execution Records, searches by name, starts execution, enters a pass/fail verdict, reviews results, sets points, then saves.</p></td> </tr> <tr> <td valign="top" ><p>7</p></td> <td valign="top" ><p>Browse test execution record: user browses test execution records (TERs) by name, then selects a TER and opens the most recent results.</p></td> </tr> <tr> <td valign="top" ><p>Dashboard Viewer</p></td> <td valign="top" ><p>100</p></td> <td valign="top" ><p>View dashboard (with login and logout): user logs in, views the dashboard, then logs out. This user adds login/logout behavior to the workload.</p></td> </tr> </tbody> </table>

---+++Average page response time

This load test reproduced an environment under full load reaching 1000 concurrent users. The following graph shows the average page response time (in milliseconds) for stages ranging from 100 to 1000 concurrent users, in 100-user increments. At the 600-user stage the average response time starts to increase rapidly, indicating that the server has reached its limit. The combined graph shows the average response time calculated from all the steps in this execution. <img src="%ATTACHURLPATH%/RQM60ResultGraph_3.png" width="660" /> This test requested an average of 81.56 elements/second and 10.96 pages/second at the 600-user stage, its maximum capacity. The average response time for all pages was 853.26 ms, jumping to 2,656.64 ms when overloaded at the 700-user stage. <img src="%ATTACHURLPATH%/RQM60ResultGraph_4.png" width="660" />

---+++Performance result breakdown

The following graph shows the average step response time, in milliseconds, for each user stage. Each step is part of a test case listed in the Test Cases table. A lower number is better.
<img src="%ATTACHURLPATH%/RQM60ResultGraph_5.png" width="660" /> The next graph shows the average number of operations per hour during each user stage, for each use case. <img src="%ATTACHURLPATH%/RQM60ResultGraph_6.png" width="660" /> The following graph shows the total number of iterations users performed during the test, for each use case. <img src="%ATTACHURLPATH%/RQM60ResultGraph_7.png" width="660" />

---+++Detailed resource utilization

The limiting resource was the Application Server, which reached 71.26% maximum CPU utilization, with an average of 63.67% utilization at this stage. The database had an average of 16.68% CPU utilization, and the HTTP Server only 2.35% average CPU utilization. <img src="%ATTACHURLPATH%/RQM60ResultGraph_8.png" width="660" /> The following graph shows the average memory utilization for each server up to the limiting stage. <img src="%ATTACHURLPATH%/RQM60ResultGraph_9.png" width="660" /> The next graph shows the garbage collector behavior during the test. The detailed garbage collector configuration can be found in *[[#Appendix_A][Appendix A]]*. <img src="%ATTACHURLPATH%/RQM60ResultGraph_10.png" width="660" /> Additional information regarding disk and network activity for each server during the test can be found in the following graphs.
<img src="%ATTACHURLPATH%/RQM60ResultGraph_11.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_12.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_13.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_14.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_15.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_16.png" width="400" height="390" /> <br/>

---++Browse Test Plans and Test Cases Performance Result

---+++Workload characterization

*Test Cases* <table class="gray-table"> <tbody> <tr> <th valign="top">User role</th> <th valign="top">% of Total</th> <th valign="top">Related Actions</th> </tr> <tr> <td><p>QE Manager</p></td> <td><p>100</p></td> <td><p>Browse test plans and test cases</p></td> </tr> </tbody> </table>

---+++Average page response time

This load test reproduced an environment under full load reaching 1000 concurrent users. The following graph shows the average page response time (in milliseconds) for stages ranging from 100 to 1000 concurrent users, in 100-user increments. The server never reached its limit during the test. The combined graph shows the average response time calculated from all the steps in this execution. This test requested an average of 102.31 elements/second and 1.11 pages/second at the 1000-user stage. The average response time for all pages reached 103.02 milliseconds. <img src="%ATTACHURLPATH%/RQM60ResultGraph_18.png" width="660" />

---+++Performance result breakdown

The following graph shows the average step response time, in milliseconds, for each user stage. Each step is part of a test case listed in the Test Cases table. A lower number is better. <img src="%ATTACHURLPATH%/RQM60ResultGraph_19.png" width="660" /> The next graph shows the average number of operations per hour during each user stage, for each use case.
<img src="%ATTACHURLPATH%/RQM60ResultGraph_20.png" width="660" /> The following graph shows the total number of iterations users performed during the test, for each use case. <img src="%ATTACHURLPATH%/RQM60ResultGraph_21.png" width="660" />

---+++Detailed resource utilization

The Database Server reached 2.68% maximum CPU utilization, with an average of 1.24% utilization at the 1000-user stage. The Application Server had an average of 3.03% CPU utilization, and the HTTP Server a 2.00% average CPU utilization. <img src="%ATTACHURLPATH%/RQM60ResultGraph_22.png" width="660" /> The following graph shows the average memory utilization for each server during the test. <img src="%ATTACHURLPATH%/RQM60ResultGraph_23.png" width="660" /> The next graph shows the garbage collector behavior during the test. The detailed garbage collector configuration can be found in *[[#Appendix_A][Appendix A]]*. <img src="%ATTACHURLPATH%/RQM60ResultGraph_24.png" width="660" /> Additional information regarding disk and network activity for each server during the test can be found in the following graphs.
<img src="%ATTACHURLPATH%/RQM60ResultGraph_25.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_26.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_27.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_28.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_29.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_30.png" width="400" height="390" /> <br />

---++Browse Test Execution (4 Steps) Performance Result

---+++Workload characterization

*Test Cases* <table class="gray-table"> <tbody> <tr> <th valign="top">User role</th> <th valign="top">% of Total</th> <th valign="top">Related Actions</th> </tr> <tr> <td><p>Tester</p></td> <td><p>100</p></td> <td><p>Test execution (4 steps)</p></td> </tr> </tbody> </table>

---+++Average page response time

This load test reproduced an environment under full load reaching 1000 concurrent users. The following graph shows the average page response time (in milliseconds) for stages ranging from 100 to 1000 concurrent users, in 100-user increments. The server never reached its limit during the test. The combined graph shows the average response time calculated from all the steps in this execution. This test requested an average of 252.5 elements/second and 2.08 pages/second at the 1000-user stage. The average response time for all pages reached 128.29 milliseconds. <img src="%ATTACHURLPATH%/RQM60ResultGraph_31.png" width="660" />

---+++Performance result breakdown

The following graph shows the average step response time, in milliseconds, for each user stage. Each step is part of a test case listed in the Test Cases table. A lower number is better. <img src="%ATTACHURLPATH%/RQM60ResultGraph_32.png" width="660" /> The next graph shows the average number of operations per hour during each user stage, for each use case.
<img src="%ATTACHURLPATH%/RQM60ResultGraph_33.png" width="660" /> The following graph shows the total number of iterations users performed during the test, for each use case. <img src="%ATTACHURLPATH%/RQM60ResultGraph_34.png" width="660" />

---+++Detailed resource utilization

The Database Server reached 0.9% maximum CPU utilization, with an average of 0.93% utilization at the 1000-user stage. The Application Server had an average of 4.42% CPU utilization, and the HTTP Server a 4.26% average CPU utilization. <img src="%ATTACHURLPATH%/RQM60ResultGraph_35.png" width="660" /> The following graph shows the average memory utilization for each server during the test. <img src="%ATTACHURLPATH%/RQM60ResultGraph_36.png" width="660" /> The next graph shows the garbage collector behavior during the test. The detailed garbage collector configuration can be found in *[[#Appendix_A][Appendix A]]*. <img src="%ATTACHURLPATH%/RQM60ResultGraph_37.png" width="660" /> Additional information regarding disk and network activity for each server during the test can be found in the following graphs.
<img src="%ATTACHURLPATH%/RQM60ResultGraph_38.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_39.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_40.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_41.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_42.png" width="400" height="390" /> <img src="%ATTACHURLPATH%/RQM60ResultGraph_43.png" width="400" height="390" /> <br/>

---++Appendix A - Key configuration parameters

<table class="gray-table"> <tbody> <tr> <th valign="top">Product</th> <th valign="top">Version</th> <th valign="top">Highlights for configurations under test</th> </tr> <tr> <td valign="top" ><p>IBM HTTP Server for WebSphere Application Server</p></td> <td valign="top" ><p>8.5.5.1</p></td> <td valign="top" ><p>IBM HTTP Server functions as a reverse proxy server, implemented via the Web server plug-in for WebSphere Application Server.</p> <p>Configuration details can be found in the <a href="http://pic.dhe.ibm.com/infocenter/clmhelp/v4r0/index.jsp?topic=%2Fcom.ibm.jazz.install.doc%2Ftopics%2Ft_config_reverse_proxy_ihs.html" target="_blank">CLM infocenter</a>.</p> <p><b>HTTP server (httpd.conf)</b>:</p> <ul> <li>MaxClients: increase value for high-volume loads <a href="http://publib.boulder.ibm.com/httpserv/ihsdiag/ihs_performance.html" target="_blank">(adjust value based on user load)</a></li> <li>ThreadsPerChild = 25</li> </ul> <p><b>OS Configuration</b>:</p> <ul> <li>max user processes = unlimited</li> </ul> </td> </tr> <tr> <td valign="top" ><p>IBM WebSphere Application Server Network Deployment</p></td> <td valign="top" ><p>8.5.5.1</p></td> <td valign="top" ><p><b>JVM settings:</b></p> <ul> <li>GC policy and arguments, max and init heap sizes:</li> </ul> <p>-Xgcpolicy:gencon -Xmx4g -Xms4g -Xmn1500m -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000 </p> <p>-Xverbosegclog:gcJVM.log -XX:MaxDirectMemorySize=1g</p> <p><b>Thread pools:</b></p> <ul>
<li>Maximum WebContainer = Minimum WebContainer = 200</li> </ul> <p><b>LTPA Authentication:</b></p> <ul> <li>720 minutes</li> </ul> <p><b>OS Configuration:</b></p> <p>System wide resources for the app server process owner:</p> <ul> <li>max user processes = unlimited</li> <li>open files = 65536</li> </ul> </td> </tr> <tr> <td valign="top" ><p>DB2</p></td> <td valign="top" ><p>ESE 10.5.3</p></td> <td valign="top" ><p><b>DB2 Connection pool size (set on JTS): </b></p> <ul> <li>JDBC Connection Pool = 128</li> </ul> <p><b>DBM CFG:</b></p> <ul> <li>MON_HEAP_SZ = AUTOMATIC</li> <li>JAVA_HEAP_SZ = 2048</li> <li>INSTANCE_MEMORY = AUTOMATIC</li> <li>AGENT_STACK_SZ = 1024</li> <li>NUM_POOLAGENTS = AUTOMATIC</li> </ul> </td> </tr> <tr> <td valign="top" ><p>LDAP server</p></td> <td valign="top" ></td> <td valign="top" ><p>N/A</p></td> </tr> <tr> <td valign="top" ><p>License server</p></td> <td valign="top" ></td> <td valign="top" ><p>N/A</p></td> </tr> <tr> <td valign="top" ><p>RPT workbench</p></td> <td valign="top" ><p>8.3.0.3</p></td> <td valign="top" ><p>Defaults</p></td> </tr> <tr> <td valign="top" ><p>RPT agents</p></td> <td valign="top" ><p>8.3.0.3</p></td> <td valign="top" ><p>Defaults</p></td> </tr> <tr> <td valign="top" ><p>Network</p></td> <td valign="top" ></td> <td valign="top" ><p>Shared subnet within test lab</p></td> </tr> </tbody> </table> -------------------- ---++++!! About the authors: * [[Main.MarcoAraujo][Marco Antonio Ferreira de Araujo]] is a performance specialist for CLM, Maximo, and TRIRIGA product families. * [[Main.AlfredoBittencourt][Alfredo Bittencourt]] is a performance specialist for CLM, Maximo, and TRIRIGA product families. * [[Main.VaughnRokosz][Vaughn Rokosz]] is the performance lead for the CLM product family. -------------------- ---++++!! 
Related topics: [[CollaborativeLifecycleManagementPerformanceReportRQM50Release][Collaborative Lifecycle Management performance report: Rational Quality Manager 5.0 release]], [[PerformanceDatasheetsAndSizingGuidelines][Performance datasheets]] -------------------- ---+++++!! Questions and comments: * What other performance information would you like to see here? * Do you have performance scenarios to share? * Do you have scenarios that are not addressed in documentation? * Where are you having problems in performance? %COMMENT{type="below" target="PerformanceDatasheetReaderComments" button="Submit"}% %INCLUDE{"PerformanceDatasheetReaderComments"}% <sticky></div></sticky>
Copyright © by IBM and non-IBM contributing authors. All material on this collaboration platform is the property of the contributing authors. Contributions are governed by our Terms of Use. Please read the disclaimer.