r20 - 2017-05-03 - 19:09:04 - Main.hhuo
<sticky><div id="header-title" style="padding: 10px 15px; border-width:1px; border-style:solid; border-color:#FFD28C; background-image: url(<nop>https://jazz.net/wiki/pub/Deployment/WebPreferences/TLASE.jpg); background-size: cover; font-size:120%">
<!-- * Set ALLOWTOPICCHANGE = Main.TWikiDeploymentDatasheetsAuthorsGroup, Main.TWikiAdminGroup, Main.GrantCovell -->
---+!! <img src="https://jazz.net/wiki/pub/Deployment/WebPreferences/new.png" alt="new.png" width="50" height="50" align="right">DRAFT - Collaborative Lifecycle Management performance and scalability report: Rational Quality Manager 6.0.3 release - DRAFT<br/>
%DKGRAY% Authors: [[Main.HongyanHuo][Hongyan Huo]]<br/>
Last updated: May 3, 2017<br/>
Build basis: Rational Quality Manager 6.0.3 %ENDCOLOR%</div></sticky>

<!-- Page contents top of page on right hand side in box -->
<sticky><div style="float:right; border-width:1px; border-style:solid; border-color:#DFDFDF; background-color:#F6F6F6; margin:0 0 15px 15px; padding: 0 15px 0 15px;"> %TOC{title="Page contents"}% </div></sticky>

<sticky><div style="margin:15px;"></sticky>

---++ Introduction

This report evaluates the performance and scalability of the <b>Rational Quality Manager (RQM)</b> 6.0.3 release. The test methodology involves these steps:
   * Proportionally increase the number of artifacts in a single project area in the QM repository using the standard datashape (see [[#Data_volume_and_shape][Data volume and shape]]); each repository under test contains 100 thousand, 200 thousand, 300 thousand ... and 1 million test cases, respectively
   * Repeat the same standard one-hour performance test load against each sized repository
   * Compare the page performance results and the server statuses

---+++!! Disclaimer

<font face="Georgia" size=2> %INCLUDE{"PerformanceDatasheetDisclaimerEndToEnd"}% </font>

---++ Summary of results

A series of load tests conducted in the lab showed a consistent scalability trend as we increased the artifact counts.
To illustrate the performance characteristics, this report extracts and compares test data from the runs against repositories containing 100,000 test cases (1 million test artifacts in total), 500,000 test cases (5 million test artifacts in total), and 1 million test cases (10 million test artifacts in total), respectively. In general, the average test results observed have an estimated +/-5% statistical margin of error. Here is the summary of findings:
   * We increased the repository size up to one million test cases (or ten million QM artifacts in total); the one-hour performance tests against each sized repository ran smoothly in the lab, and the servers remained healthy during the course of the load tests.
   * The performance of some pages is sensitive to the growth of the artifacts under the pre-defined datashape. These pages, measured by page response time, take longer to load as the number of artifacts in the repository continues to increase. They include viewing, sorting, and searching all test artifacts of a given artifact type. For instance, under the traditional regression test load, browsing/sorting/searching test plans, test cases, test scripts, and test execution records degrades as their total counts grow, as seen in the chart below.
%TABLE{ sort="off" headerbg="#3399FF" cellpadding="0" cellspacing="0" dataalign="center" caption="Browsing/sorting/searching artifacts: response time increases as a function of repository size; some pages degrade faster due to the more rapid growth of their artifact counts" tableborder="3" tableframe="border" tablerules="none"}%
|<p><a href="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/readAllTERs_pageRespTime_vs_numArtifacts.png" target="_blank" title="click to enlarge"><img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/readAllTERs_pageRespTime_vs_numArtifacts.png" /></a> <p>*<font size=1>TC: Test Case, TER: Test Case Execution Record. Every 100k Test Cases denotes 1M Test Artifacts in total.</font>|<p><a href="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/readAllTestScripts_pageRespTime_vs_numArtifacts.png" target="_blank" title="click to enlarge"><img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/readAllTestScripts_pageRespTime_vs_numArtifacts.png" /></a> <p>*<font size=1>*TC: Test Case, TS: Test Script</font>|
| <p><a href="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/readAllTPsTCs_pageRespTime_vs_numArtifacts.png" target="_blank" title="click to enlarge"><img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/readAllTPsTCs_pageRespTime_vs_numArtifacts.png" /></a> <p>*<font size=1>*TC: Test Case, TP: Test Plan</font>|
   * The performance of the following pages does not tend to degrade under load as the artifact numbers increase per the datashape:
      * Page performance for opening a particular artifact remains unchanged as the repository gets larger.
The pages under test include opening a test plan, a test case, or a test script by loading the default summary section, as shown in the chart below.
<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/selectedReadIndArtifact_pageRespTime_vs_numArtifacts.png" />
      * As the artifact number increases, the write operations tested remained at the same performance level under multi-user load; for example, saving a test artifact after modifying its contents, or executing a test case. Below is a set of selected write operations compared across the different sized repositories.
<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/selectedWrites_pageRespTime_vs_numArtifacts.png" />
   * For the detailed performance results for each use case, see [[#DetailedResults][Detailed performance results]].
   * As the repository size grows, the CPU on the database server becomes busier in order to sustain the same test load. CPU utilization occasionally approached 100% against the repository containing 10 million total test artifacts. For more details about system resources, see [[#Resource_utilization][Resource utilization]] and the [[#Topology_network_and_data_volume][Topology]] of the systems under test.
<p><img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/dbCpuUtil_vs_numArtifacts.png" />

---++ Topology, network, and data volume

The topology under test is based on [[RecommendedCLMDeploymentTopologies#CLM_E1_Enterprise_Distributed_Li][Standard Topology (E1) Enterprise - Distributed / Linux / DB2]].

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceReportRQM50Release/ServerOverview.png" alt="ServerOverview.png" width="40%" height="40%" />

The specifications of the machines under test are listed in the table below. Server tuning details are listed in *[[#Appendix_Key_configuration_param][Appendix - Key configuration parameters]]*.

<table class="gray-table"> <tbody>
<tr> <th valign="top">Function</th> <th valign="top">Number of Machines</th> <th valign="top">Machine Type</th> <th valign="top">CPU / Machine</th> <th valign="top">Total # of CPU vCores/Machine</th> <th valign="top">Memory/Machine</th> <th valign="top">Disk</th> <th valign="top">Disk capacity</th> <th valign="top">Network interface</th> <th valign="top">OS and Version</th> </tr>
<tr> <td>Proxy Server (IBM HTTP Server and !WebSphere Plugin)</td> <td>1</td> <td>IBM System x3250 M4</td> <td>1 x Intel Xeon E3-1240 3.4GHz (quad-core)</td> <td>8</td> <td>16GB</td> <td>RAID 1 -- SAS Disk x 2</td> <td>299GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.5</td> </tr>
<tr> <td>JTS Server</td> <td>1</td> <td>IBM System x3550 M4</td> <td>2 x Intel Xeon E5-2640 2.5GHz (six-core)</td> <td>24</td> <td>32GB</td> <td>RAID 5 -- SAS Disk x 2</td> <td>897GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.5</td> </tr>
<tr> <td>QM Server</td> <td>1</td> <td>IBM System x3550 M4</td> <td>2 x Intel Xeon E5-2640 2.5GHz (six-core)</td> <td>24</td> <td>32GB</td> <td>RAID 5 -- SAS Disk x 2</td> <td>897GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.5</td> </tr>
<tr> <td>Database Server</td> <td>1</td> <td>IBM System x3650 M4</td> <td>2 x Intel Xeon E5-2640 2.5GHz (six-core)</td> <td>24</td> <td>64GB</td> <td>RAID 5 -- SAS Disk x 2</td> <td>2.4TB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.1</td> </tr>
<tr> <td>RPT workbench</td> <td>1</td> <td>IBM System x3550 M4</td> <td>2 x Intel Xeon E5-2640 2.5GHz (six-core)</td> <td>24</td> <td>32GB</td> <td>RAID 5 -- SAS Disk x 2</td> <td>897GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.4</td> </tr>
<tr> <td>RPT Agents</td> <td>6</td> <td>VM image</td> <td>4 x Intel Xeon X5650 CPU (1-Core 2.67GHz)</td> <td>1</td> <td>2GB</td> <td>N/A</td> <td>30GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.5</td> </tr>
<tr> <td>Network switches</td> <td>N/A</td> <td>Cisco 2960G-24TC-L</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>Gigabit Ethernet</td> <td>24 Ethernet 10/100/1000 ports</td> </tr>
</tbody> </table>

N/A: Not applicable. vCores = cores with hyperthreading.

---+++ Network connectivity

All server machines and test clients are located on the same subnet. The LAN has 1000 Mbps of maximum bandwidth and less than 0.3 ms ping latency.

---+++ Data volume and shape

The artifacts were created and increased gradually using the datagen utility.
Every increase of 100,000 test cases in the repository consists of the following increase per artifact type:
   * 167 test plans
   * 100,000 test scripts
   * 100,000 test cases
   * 400,000 test case execution records
   * 400,000 test case results
   * 10,000 test suites
   * 667 test environments
   * 2,000 test phases
   * 20 build definitions
   * 4,747 execution sequences
   * 10,000 test suite execution records
   * 50,000 test suite execution results
   * 3,333 build records
   * QM database size increase on disk: ~31 GB

The artifacts maintain the following associations:
   * each test plan is linked to 600 unique test cases and 600 unique test scripts
   * each test suite contains 500 test cases

In addition, there are 5,000 work items (defects) in the CCM project area. The CCM repository size remains unchanged. In this test, <b>Configuration Management</b> is not enabled.

---++ Methodology

Rational Performance Tester (RPT) was used to simulate the workload created using the web client. Each user completed a random use case from a set of available use cases. A Rational Performance Tester script is created for each use case. The scripts are organized by pages, and each page represents a user action. The workload is role-based: the sequence of actions defined for each area is separated into individual user groups within an RPT schedule.
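The role-based selection that the schedule performs can be sketched as a two-level weighted random draw: pick a role by its share of the simulated user population, then pick a use case by its share within that role. The weights below are copied from the user-role and test-case tables in this report; the =next_use_case= helper is an illustrative sketch, not part of Rational Performance Tester:

```python
import random

# Role mix and per-role use-case mix, copied from the workload tables in this
# report. This is a minimal illustrative sketch of the two-level weighted
# selection an RPT schedule performs; it is not RPT code.
ROLE_WEIGHTS = {"QE Manager": 8, "Test Lead": 19, "Tester": 68, "Dashboard Viewer": 5}

USE_CASE_WEIGHTS = {
    "QE Manager": {"Test plan create": 1, "Browse test plans and test cases": 26,
                   "Browse test script": 26, "Simple test plan copy": 1,
                   "Defect search": 23, "View dashboard": 20},
    "Test Lead": {"Edit Test Environment": 8, "Edit test plan": 15,
                  "Create test case": 4, "Bulk edit of test cases": 1,
                  "Full text search": 3, "Browse test script": 32,
                  "Test Execution": 26, "Defect search": 11},
    "Tester": {"Defect create": 8, "Defect modify": 5, "Defect search": 14,
               "Edit test case": 6, "Create test script": 4,
               "Edit test script": 8, "Test Execution": 42,
               "Browse test execution record": 7},
    "Dashboard Viewer": {"View dashboard (with login and logout)": 100},
}

def next_use_case(rng):
    """Pick a role by its share of the user population, then a use case
    by its share within that role."""
    role = rng.choices(list(ROLE_WEIGHTS), weights=list(ROLE_WEIGHTS.values()))[0]
    cases = USE_CASE_WEIGHTS[role]
    case = rng.choices(list(cases), weights=list(cases.values()))[0]
    return role, case

rng = random.Random(42)
sample = [next_use_case(rng) for _ in range(10_000)]
tester_share = sum(role == "Tester" for role, _ in sample) / len(sample)
print(f"Tester share of simulated iterations: {tester_share:.0%}")  # close to the 68% target
```

Note that =random.choices= normalizes the weights, so the per-role percentages need not sum exactly to 100; the shares in the tables are treated as relative weights.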
The settings of the RPT schedule are shown below:

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/userload.png" />

---+++ User roles, test cases and workload characterization

*User Roles* #UserRoles

<table class="gray-table"> <tbody>
<tr> <th align="center" height="40"><b>User Role</b></th> <th align="center" height="40"><b>% of Total</b></th> <th align="center" height="40"><b>Related Actions</b></th> </tr>
<tr> <td>QE Manager</td> <td align="center">8</td> <td>Test plan create, Browse test plan and test case, Browse test script, Simple test plan copy, Defect search, View dashboard</td> </tr>
<tr> <td>Test Lead</td> <td align="center">19</td> <td>Edit Test Environments, Edit test plan, Create test case, Bulk edit of test cases, Full text search, Browse test script, Test Execution, Defect search</td> </tr>
<tr> <td>Tester</td> <td align="center">68</td> <td>Defect create, Defect modify, Defect search, Edit test case, Create test script, Edit test script, Test Execution, Browse test execution record</td> </tr>
<tr> <td>Dashboard Viewer</td> <td align="center">5</td> <td>View dashboard (with login and logout)</td> </tr>
</tbody> </table>

*Test Cases* #TestCases

<table class="gray-table"> <tbody>
<tr> <th align="center" height="40"><b>User Role</b></th> <th align="left" height="40"><b>Percentage of the user role</b></th> <th align="left" height="40"><b>Sequence of Operations</b></th> </tr>
<tr> <td rowspan=6>QE Manager</td> <td align="center">1</td> <td>[[#TestPlanCreate][Test plan create]]: user creates a test plan, then adds a description, business objectives, test objectives, 2 test schedules, a test estimate, quality objectives, and entry and exit criteria.</td> </tr>
<tr> <td align="center">26</td> <td>[[#BrowseTestPlansAndTestCases][Browse test plans and test cases]]: user browses assets via View Test Plans, then configures the View Builder for a name search; opens the test plan found, reviews various sections, then closes. Searches for a test case by name, opens the test case found, reviews various sections, then closes.</td> </tr>
<tr> <td align="center">26</td> <td>[[#BrowseTestScript][Browse test script]]: user searches for a test script by name, opens it, reviews it, then closes.</td> </tr>
<tr> <td align="center">1</td> <td>[[#SimpleTestPlanCopy][Simple test plan copy]]: user searches for a test plan by name, selects one, then makes a copy.</td> </tr>
<tr> <td align="center">23</td> <td>[[#DefectSearch][Defect search]]: user searches for a specific defect by number, reviews the defect (pause), then closes.</td> </tr>
<tr> <td align="center">20</td> <td>[[#ViewDashboard][View Dashboard]]: user views the dashboard.</td> </tr>
<tr> <td rowspan=8>Test Lead</td> <td align="center">8</td> <td>[[#EditTestEnvironment][Edit Test Environment]]: user lists all test environments, then selects one of the environments and modifies it.</td> </tr>
<tr> <td align="center">15</td> <td>[[#EditTestPlan][Edit test plan]]: user lists all test plans; from the query result, opens a test plan for editing, adds a test case to the test plan, edits a few other sections of the test plan, and then saves it.</td> </tr>
<tr> <td align="center">4</td> <td>[[#TestCaseCreate][Create test case]]: user creates a test case by opening the Create Test Case page, entering data for a new test case, and then saving it.</td> </tr>
<tr> <td align="center">1</td> <td>[[#BulkEditofTestCases][Bulk edit of test cases]]: user searches for test cases with a root name and edits all found with an owner change.</td> </tr>
<tr> <td align="center">3</td> <td>[[#FullTextSearch][Full text search]]: user does a full text search of all assets in the repository using a root name, then opens one of the found items.</td> </tr>
<tr> <td align="center">32</td> <td>[[#BrowseTestScript][Browse test script]]: user searches for a test script by name, opens it, reviews it, then closes.</td> </tr>
<tr> <td align="center">26</td> <td>[[#TestExecutionFor4Steps][Test Execution]]: user selects View Test Execution Records by name, starts execution, enters a pass/fail verdict, reviews results, sets points, then saves.</td> </tr>
<tr> <td align="center">11</td> <td>[[#DefectSearch][Defect search]]: user searches for a specific defect by number, reviews the defect (pause), then closes.</td> </tr>
<tr> <td rowspan=8>Tester</td> <td align="center">8</td> <td>[[#CreateDefect][Defect create]]: user creates a defect by opening the Create Defect page, entering data for a new defect, and then saving it.</td> </tr>
<tr> <td align="center">5</td> <td>[[#DefectModify][Defect modify]]: user searches for a specific defect by number, modifies it, then saves it.</td> </tr>
<tr> <td align="center">14</td> <td>[[#DefectSearch][Defect search]]: user searches for a specific defect by number, reviews the defect (pause), then closes.</td> </tr>
<tr> <td align="center">6</td> <td>[[#EditTestCase][Edit test case]]: user searches for a test case by name; the test case is then opened in the editor and a test script is added to it (the user clicks Next a few times (server-side paging feature) before selecting the test script). The test case is then saved.</td> </tr>
<tr> <td align="center">4</td> <td>[[#TestScriptCreate][Create test script]]: user creates a test script by selecting the Create Test Script page, entering data for a new test script, and then saving it.</td> </tr>
<tr> <td align="center">8</td> <td>[[#EditTestScript][Edit test script]]: user selects a test script by name; the test script is then opened for editing, modified, and then saved.</td> </tr>
<tr> <td align="center">42</td> <td>[[#TestExecutionFor4Steps][Test Execution]]: user selects View Test Execution Records by name, starts execution, enters a pass/fail verdict, reviews results, sets points, then saves.
</td> </tr>
<tr> <td align="center">7</td> <td>[[#TestExecutionRecordBrowsing][Browse test execution record]]: user browses TERs by name, then selects the TER and opens the most recent results.</td> </tr>
<tr> <td>Dashboard Viewer</td> <td align="center">100</td> <td>[[#ViewDashboardWithLogin][View dashboard (with login and logout)]]: user logs in, views the dashboard, then logs out. This user provides some login/logout behavior to the workload.</td> </tr>
</tbody> </table>

---+++ Response time comparison

Page performance is measured as the mean (average) of its response time in the result data. For the majority of the pages under test, there is little variation between runs, and the mean values are close to the median of the sample for the load.

#DetailedResults
---++ Detailed performance results

---+++ Average page response time comparison breakdown

*NOTE* For all use-case comparison charts, the unit is milliseconds; smaller is better.

#BrowseTestPlansAndTestCases
Browse Test Plans & Test Cases

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/browseTPsTCs.png" />
<p> <b><font size=1>*Page will be improved in 6.0.4; see [[#Highlight_of_performance_improve][Performance enhancement highlights in 6.0.4]]</font></b>

[[#TestCases][Back to Test Cases & workload characterization]]

#BrowseTestScript
Browse Test Scripts

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/browseTestscripts.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

#TestExecutionRecordBrowsing
Test Execution Record Browsing

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/browseTERs.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

#CreateDefect
Create Defect

<img
src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/createDefect.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

#TestPlanCreate
Create Test Plan

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/createTestplan.png" />
<p> <b><font size=1>*Page will be improved in 6.0.4; see [[#Highlight_of_performance_improve][Performance enhancement highlights in 6.0.4]]</font></b>

[[#TestCases][Back to Test Cases & workload characterization]]

#TestCaseCreate
Create Test Case

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/createTestcase.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

#TestScriptCreate
Create Test Script

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/createTestscript.png"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#EditTestCase
Edit Test Case

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/editTestcase.png" />
<p> <b><font size=1>*Page will be improved in 6.0.4; see [[#Highlight_of_performance_improve][Performance enhancement highlights in 6.0.4]]</font></b>

[[#TestCases][Back to Test Cases & workload characterization]]

#EditTestEnvironment
Edit Test Environment

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/editTestenv.png"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#EditTestPlan
Edit Test Plan

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/editTestplan.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

#EditTestScript
Edit Test Script

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/editTestscript.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

#TestExecutionFor4Steps
Test Execution For 4 Steps

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/executeTER.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

#SimpleTestPlanCopy
Simple Test Plan Copy

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/simpleCopy.png"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#BulkEditofTestCases
Bulk Edit of Test Cases

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/bulkEditTC.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

#FullTextSearch
Full Text Search

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/fullTextSearch.png" />
<p> <b><font size=1>*Page will be improved in 6.0.4; see [[#Highlight_of_performance_improve][Performance enhancement highlights in 6.0.4]]</font></b>

[[#TestCases][Back to Test Cases & workload characterization]]

#DefectSearch
Defect Search

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/searchDefect.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

#DefectModify
Defect Modify

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/modifyDefect.png" />

[[#TestCases][Back to Test Cases & workload characterization]]

---+++ RPT network transfer comparison

<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/serverThroughputs_vs_numArtifacts.png"
/>

---+++ RPT script executions

RPT script average execution counts during staging:

<font size=1>
%TABLE{ sort="off" headerbg="#3399FF" cellpadding="2" cellspacing="2" dataalign="center" caption="" tableborder="2" tableframe="border" tablerules="none" }%
|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/scriptExecutionCounts.png" />| *Test scenario* | *Average execution counts per load test*|
|^|[[#TestExecutionFor4Steps][Execute the TER]]|1,459|
|^|[[#TestExecutionRecordBrowsing][Browse TER]]|907|
|^|[[#DefectSearch][Search Defect]]|602|
|^|[[#BrowseTestScript][Browse Test Script]]|501|
|^|[[#EditTestScript][Edit Test Script]]|205|
|^|[[#CreateDefect][Create Defect]]|193|
|^|[[#EditTestPlan][Edit Test Plan]]|160|
|^|[[#DefectModify][Modify Defect]]|158|
|^|[[#EditTestCase][Edit Test Case]]|153|
|^|[[#BrowseTestPlansAndTestCases][Browse Test Plans And Test Cases]]|96|
|^|[[#TestScriptCreate][Create Test Script]]|90|
|^|[[#EditTestEnvironment][Edit Test Environment]]|71|
|^|[[#TestCaseCreate][Create Test Case]]|45|
|^|[[#FullTextSearch][Full Text Search and Open Test Suite]]|32|
|^|[[#BulkEditofTestCases][Bulk Edit of Test Cases]]|15|
|^|[[#TestPlanCreate][Create Test Plan]]|2|
|^|[[#SimpleTestPlanCopy][Simple Test Plan Copy]]|2|
</font>

---+++ Resource utilization

%TABLE{ sort="off" headerbg="#3399FF" cellpadding="2" cellspacing="2" dataalign="center" caption="DB2 Server resource utilization comparison: " tableborder="2" tableframe="border" tablerules="none"}%
||1,000-user performance load against the 100k vs. 1m Test Cases based repositories||
|CPU|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/DB2server_CPUutil.png" />|Disk|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/DB2server_DiskBusy.png" />|
|Memory|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/DB2server_Memory.png" />|Network|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/DB2server_NetworkIO.png" />|

%TABLE{ sort="off" headerbg="#3399FF" cellpadding="2" cellspacing="2" dataalign="center" caption="WAS Server resource utilization comparison: " tableborder="2" tableframe="border" tablerules="none"}%
||1,000-user performance load against the 100k vs. 1m Test Cases based repositories||
|CPU|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/WASserver_CPUutil.png" />|Disk|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/WASserver_DiskBusy.png" />|
|Memory|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/WASserver_Memory.png" />|Network|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/WASserver_NetworkIO.png" />|

---+++ Garbage collection - JVM GC Chart

For the JVM parameters, refer to [[#Appendix_Key_configuration_param][Appendix - Key configuration parameters]]. Verbose garbage collection is enabled to log the GC activities. These GC logs showed very little variation between runs. There is also no discernible difference between versions. Below is one example of the output from the GC log for each application.
%TABLE{ sort="off" headerbg="#3399FF" cellpadding="2" cellspacing="2" dataalign="center" caption="WAS JVM Garbage Collections" tableborder="2" tableframe="border" tablerules="none"}%
||same load against the 1m vs. 100k Test Cases based repositories|
|QM|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/gcChart_qm.png" />|
|JTS|<img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/gcChart_jts.png"/>|

---++ Highlight of performance improvements in RQM 6.0.4 release

Performance enhancements have been made in the 6.0.4 release. Below is a graph showing the top improvements for pages that degraded in large repositories in 6.0.3 GA.

<p> <img src="https://jazz.net/wiki/pub/Deployment/CollaborativeLifecycleManagementPerformanceScalabilityReportRQM603Release/perfImprovement_604M5vs603GA.png"/>

---++ Appendix - Key configuration parameters

<table class="gray-table"> <tbody>
<tr> <th align="left" width="200"><strong>Product</strong><br></th> <th align="left" width="100"><strong>Version</strong></th> <th align="left" width="400"><strong>Highlights for configurations under test</strong></th> </tr>
<tr> <td style="vertical-align: top;">IBM HTTP Server for !WebSphere Application Server</td> <td style="vertical-align: top;">8.5.5.9</td> <td style="vertical-align: top;">IBM HTTP Server functions as a reverse proxy server implemented via the Web server plug-in for !WebSphere Application Server. Configuration details can be found in the [[http://pic.dhe.ibm.com/infocenter/clmhelp/v4r0/index.jsp?topic=%2Fcom.ibm.jazz.install.doc%2Ftopics%2Ft_config_reverse_proxy_ihs.html][CLM infocenter]].
*HTTP server (httpd.conf)*:
   * !MaxClients: increase value for high-volume loads [[http://publib.boulder.ibm.com/httpserv/ihsdiag/ihs_performance.html][(adjust value based on user load)]]
   * !ThreadsPerChild = 25
*OS Configuration*:
   * max user processes = unlimited
</td> </tr>
<tr> <td style="vertical-align: top;">IBM !WebSphere Application Server Base</td> <td style="vertical-align: top;">8.5.5.9</td> <td style="vertical-align: top;"><strong>JVM settings:</strong>
   * GC policy and arguments, max and init heap sizes: <verbatim>
-Xgcpolicy:gencon -Xmx8g -Xms8g -Xmn2g -Xss786K -Xcompressedrefs
-Xgc:preferredHeapBase=0x100000000 -verbose:gc -Xverbosegclog:gc.log
-XX:MaxDirectMemorySize=1G
</verbatim>
*SDK version:*
   * IBM WebSphere SDK for Java Technology Edition Version 7.1.3.40
*Thread pools:*
   * Maximum !WebContainer = Minimum !WebContainer = 500
*OS Configuration:* System-wide resources for the app server process owner:
   * max user processes = unlimited
   * open files = 65536
</td> </tr>
<tr> <td>DB2</td> <td>ESE 10.1.0.5</td> <td> </td> </tr>
<tr> <td>LDAP server</td> <td> </td> <td> </td> </tr>
<tr> <td>License server</td> <td> </td> <td>N/A</td> </tr>
<tr> <td>RPT workbench</td> <td>8.3.0.3</td> <td>Defaults</td> </tr>
<tr> <td>RPT agents</td> <td>8.3.0.3</td> <td>Defaults</td> </tr>
<tr> <td>Network</td> <td> </td> <td>Shared subnet within the test lab</td> </tr>
</tbody> </table>

--------------------

---++++!! About the authors:

Main.HongyanHuo is a performance engineer focusing on performance testing and analysis of products in the Collaborative Lifecycle Management family. <br>

---++++!! Related topics:

[[CollaborativeLifecycleManagementPerformanceReportRQM60Release_v2][Collaborative Lifecycle Management performance report: Rational Quality Manager 6.0 release]], [[PerformanceDatasheetsAndSizingGuidelines][Performance datasheets]]

--------------------

---+++++!! Questions and comments:
   * What other performance information would you like to see here?
   * Do you have performance scenarios to share?
   * Do you have scenarios that are not addressed in documentation?
   * Where are you having problems in performance?

%COMMENT{type="below" target="PerformanceDatasheetReaderComments" button="Submit"}% %INCLUDE{"PerformanceDatasheetReaderComments"}%

<sticky></div></sticky>