<sticky><div id="header-title" style="padding: 10px 15px; border-width:1px; border-style:solid; border-color:#FFD28C; background-image: url(<nop>https://jazz.net/wiki/pub/Deployment/WebPreferences/TLASE.jpg); background-size: cover; font-size:120%"> <!-- * Set ALLOWTOPICCHANGE = Main.TWikiDeploymentDatasheetsAuthorsGroup, Main.TWikiAdminGroup, Main.GrantCovell --> ---+!! <img src="https://jazz.net/wiki/pub/Deployment/WebPreferences/uc.png" alt="uc.png" width="50" height="50" align="right">Collaborative Lifecycle Management performance report: Rational Quality Manager 6.0.6.1 release<br/> %DKGRAY% Authors: Main.SentellaCystrunk<br/> Last updated: June 11, 2019<br/> Build basis: Rational Quality Manager 6.0.6.1 %ENDCOLOR%</div></sticky> <!-- Page contents top of page on right hand side in box --> <sticky><div style="float:right; border-width:1px; border-style:solid; border-color:#DFDFDF; background-color:#F6F6F6; margin:0 0 15px 15px; padding: 0 15px 0 15px;"> %TOC{title="Page contents"}% </div></sticky> <sticky><div style="margin:15px;"></sticky> ---++ Introduction This report focuses on <b>Rational Quality Manager (RQM)</b> version 6.0.6.1 performance using a large repository that contains 15 million test artifacts in total. This is a new benchmark in terms of repository size and scenario coverage. Compared to previous standard performance regression reports, which ran 1,000 concurrent users against a 10 million artifact repository on an RQM server not enabled for configuration management: * The _scenario coverage_ for this report included additional use cases for editing test suites, browsing test suite results, and browsing test case results. The full text search use case was excluded. * The _repository size_ was increased to 15 million artifacts. ---+++!! 
Disclaimer %INCLUDE{"PerformanceDatasheetDisclaimerEndToEnd"}% ---++ Summary of performance results We applied the standard performance load against RQM 6.0.6.1 using a large QM repository containing 15 million artifacts (refer to section [[#Data_volume_and_shape][Data volume and shape]] for details). Several top pages showed greatly improved response times. For more details on the 1,000 user workload, see this section: [[#User_roles_test_cases_and_worklo][User roles, test cases and workload characterization]]. For the detailed performance results for each use case, see this section: [[#DetailedResults][Detailed performance results]]. ---+++ Summary of page response times <img src="%ATTACHURLPATH%/Histogram.png" /> The details are provided in section [[#MedianRespTime][Median page response time breakdown]]. ---+++ Summary of OS resource utilization * CPU utilization averaged about 35% on the RQM application server and 14% on the database server. * The reverse proxy server, Jazz Team Server, and Jazz Authorization Server consumed relatively little CPU, averaging 3% or less. * Disk utilization averaged about 51% on the database server and less than 2% on the RQM application server. * Memory consumption averaged about 40GB on the database server. Heap usage on the RQM application server remained stable within the allocated amounts. The details are provided in section [[#Resource_utilization][Resource utilization]]. ---++ Appendix A: Topology The topology under test is based on the [[RecommendedCLMDeploymentTopologies#CLM_E1_Enterprise_Distributed_Li][Standard Topology (E1) Enterprise - Distributed / Linux / DB2]]. <img src="%ATTACHURLPATH%/ServerOverview.PNG" alt="ServerOverview.PNG" width="50%" height="50%" /> The specifications of the machines under test are listed in the table below. 
Server tuning details are listed in *[[#Appendix_D_Key_configuration_par][Appendix D: Key configuration parameters]]* <table class="gray-table"> <tbody> <tr> <th valign="top">Function</th> <th valign="top">Number of Machines</th> <th valign="top">Machine Type</th> <th valign="top">CPU / Machine</th> <th valign="top">Total # of CPU vCores/Machine</th> <th valign="top">Memory/Machine</th> <th valign="top">Disk</th> <th valign="top">Disk capacity</th> <th valign="top">Network interface</th> <th valign="top">OS and Version</th> </tr> <tr> <td>Reverse Proxy Server (HA Proxy)</td> <td>1</td> <td>VMware Virtual Machine</td> <td>2 x Intel Xeon E5-2665 2.4GHz (dual-core)</td> <td>4</td> <td>8GB</td> <td>N/A</td> <td>80GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 7.2</td> </tr> <tr> <td>JAS Server</td> <td>1</td> <td>VMware Virtual Machine</td> <td>2 x Intel Xeon E5-2665 2.4GHz (dual-core)</td> <td>4</td> <td>16GB</td> <td>N/A</td> <td>130GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 7.1</td> </tr> <tr> <td>JTS Server</td> <td>1</td> <td>VMware Virtual Machine</td> <td>2 x Intel Xeon E5-2665 2.4GHz (dual-core)</td> <td>4</td> <td>16GB</td> <td>N/A</td> <td>130GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 7.1</td> </tr> <tr> <td>QM Server</td> <td>1</td> <td>VMware Virtual Machine</td> <td>2 x Intel Xeon E5-2665 2.4GHz (dual-core)</td> <td>4</td> <td>16GB</td> <td>N/A</td> <td>130GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 7.1</td> </tr> <tr> <td>Database Server</td> <td>1</td> <td>Lenovo ThinkSystem SR650</td> <td>2 x Intel Xeon Silver 4114 CPU @ 2.20GHz (ten-core)</td> <td>40</td> <td>64GB</td> <td>RAID 10 -- 900GB SAS Disk x 16</td> <td>5.4TB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 7.6</td> </tr> <tr> <td>RPT workbench</td> <td>1</td> <td>VMware Virtual Machine</td> <td>2 x Intel Xeon E5-2698 2.3GHz</td> 
<td>4</td> <td>8GB</td> <td>N/A</td> <td>200GB</td> <td>Gigabit Ethernet</td> <td>Windows Server 2008 R2 Enterprise</td> </tr> <tr> <td>RPT Agents</td> <td>8</td> <td>VMware Virtual Machine</td> <td>2 x Intel Xeon E5-2698 2.3GHz (dual-core)</td> <td>4</td> <td>8GB</td> <td>N/A</td> <td>80GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.9</td> </tr> <tr> <td>Network switches</td> <td>N/A</td> <td>Cisco 2960G-24TC-L</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>Gigabit Ethernet</td> <td>24 Ethernet 10/100/1000 ports</td> </tr> </tbody> </table> N/A: Not applicable. vCores = cores with hyperthreading. ---+++ Network connectivity All server machines and test clients are located on the same subnet. The LAN has a maximum bandwidth of 1000 Mbps and less than 0.3 ms ping latency. ---+++ Data volume and shape The artifacts were migrated from a 6.0.6 GA repository containing a total of 15M artifacts in *one* large project. The repository contained the following data: * 2,339 test plans * 1,403,200 test scripts * 1,403,200 test cases * 5,612,800 test case execution records * 5,612,800 test case results * 140,320 test suites * 5,000 work items (defects) * 9,338 test environments * 28,064 test phases * 280 build definitions * 66,458 execution sequences * 140,320 test suite execution records * 701,600 test suite execution results * 46,661 build records * QM Database size = 669 GB The artifacts maintain the following associations: * each test plan is linked to 600 unique test cases and 600 unique test scripts * each test suite contains 500 test cases In this test environment, <b>Configuration Management</b> is not enabled. ---++ Appendix B: Methodology Rational Performance Tester (RPT) was used to simulate the workload created using the web client. Each user completed a random use case from a set of available use cases. A Rational Performance Tester script was created for each use case. 
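A minimal sketch of this random use-case selection, assuming weighted draws matching the "% of Total" role mix and the per-role percentages given in the workload tables that follow (the Python model is illustrative only; the actual load was driven by RPT user groups):

```python
# Illustrative model (not RPT itself) of the role-based workload: 1,000
# simulated users are split across roles by the "% of Total" mix, and each
# Tester draws use cases with the weights from the test-case table.
# Weights need not sum to exactly 100; random.choices() normalizes them.
import random
from collections import Counter

ROLE_MIX = {"QE Manager": 8, "Test Lead": 19, "Tester": 68, "Dashboard Viewer": 5}
TESTER_MIX = {  # "Percentage of the user role" column for the Tester role
    "Create defect": 8, "Modify defect": 5, "Search defect": 14,
    "Edit test case": 6, "Create test script": 4, "Edit test script": 8,
    "Test Execution": 42, "Browse test execution record": 7,
    "Browse test case results": 2, "Browse test suite results": 2,
}

rng = random.Random(2019)  # fixed seed so the sketch is repeatable
users = rng.choices(list(ROLE_MIX), weights=list(ROLE_MIX.values()), k=1000)
print(Counter(users))  # roughly 80 / 190 / 680 / 50 users per role

draws = rng.choices(list(TESTER_MIX), weights=list(TESTER_MIX.values()), k=1000)
print(Counter(draws).most_common(3))  # Test Execution dominates, as intended
```

In the actual test, RPT's schedule assigns these weights to user groups directly; the sketch only shows the selection arithmetic behind the mix.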
The scripts are organized by pages, and each page represents a user action. The workload is role-based: each of the areas defined under the sequence of operations is separated into an individual user group within the RPT schedule. The user roles and workload mix configured in the RPT schedule are described below: ---+++ User roles, test cases and workload characterization *User Roles* #UserRoles <table class="gray-table"> <tbody> <tr> <th align="center" height="40"><b>User Role</b></th> <th align="center" height="40"><b>% of Total</b></th> <th align="center" height="40"><b>Related Actions</b></th> </tr> <tr> <td>QE Manager</td> <td align="center">8</td> <td>Create test plan, Browse test plans and test cases, Browse test script, Simple test plan copy, Search defect, View dashboard</td> </tr> <tr> <td>Test Lead</td> <td align="center">19</td> <td>Edit test environment, Edit test plan, Create test case, Bulk edit of test cases, Browse test script, Test Execution, Search defect, Edit test suite</td> </tr> <tr> <td>Tester</td> <td align="center">68</td> <td>Create defect, Modify defect, Search defect, Edit test case, Create test script, Edit test script, Test Execution, Browse test execution record, Browse test case result, Browse test suite result</td> </tr> <tr> <td>Dashboard Viewer</td> <td align="center">5</td> <td>View dashboard (with login and logout)</td> </tr> </tbody> </table> *Test Cases* #TestCases <table class="gray-table"> <tbody> <tr> <th align="center" height="40"><b>User Role</b></th> <th align="left" height="40"><b>Percentage of the user role</b></th> <th align="left" height="40"><b>Sequence of Operations</b></th> </tr> <tr> <td rowspan=6>QE Manager</td> <td align="center">1</td> <td>[[#CreateTestPlan][Create test plan]]: user creates a test plan, then adds a description, business objectives, test objectives, 2 test schedules, a test estimate, quality objectives, and entry and exit criteria.</td> </tr> <tr> <td align="center">26</td> <td>[[#BrowseTestPlansAndTestCases][Browse test plans and test cases]]: 
user browses assets via View Test Plans, configures the View Builder for a name search, opens the test plan found, reviews various sections, then closes it; then searches for a test case by name, opens the test case found, reviews various sections, then closes it.</td> </tr> <tr> <td align="center">26</td> <td>[[#BrowseTestScript][Browse test script]]: user searches for a test script by name, opens it, reviews it, then closes it.</td> </tr> <tr> <td align="center">1</td> <td>[[#SimpleTestPlanCopy][Simple test plan copy]]: user searches for a test plan by name, selects one, then makes a copy.</td> </tr> <tr> <td align="center">23</td> <td>[[#SearchDefect][Search defect]]: user searches for a specific defect by number, reviews the defect (pause), then closes it.</td> </tr> <tr> <td align="center">20</td> <td>[[#ViewDashboard][View Dashboard]]: user views the dashboard.</td> </tr> <tr> <td rowspan=8>Test Lead</td> <td align="center">8</td> <td>[[#EditTestEnvironment][Edit Test Environment]]: user lists all test environments, then selects one and modifies it.</td> </tr> <tr> <td align="center">15</td> <td>[[#EditTestPlan][Edit test plan]]: user lists all test plans; from the query result, opens a test plan for editing, adds a test case to it, edits a few other sections, and then saves the test plan.</td> </tr> <tr> <td align="center">4</td> <td>[[#CreateTestCase][Create test case]]: user creates a test case by opening the Create Test Case page, entering data for the new test case, and then saving it.</td> </tr> <tr> <td align="center">1</td> <td>[[#BulkEditofTestCases][Bulk edit of test cases]]: user searches for test cases by a root name and changes the owner of all test cases found.</td> </tr> <tr> <td align="center">32</td> <td>[[#BrowseTestScript][Browse test script]]: user searches for a test script by name, opens it, reviews it, then closes it.</td> </tr> <tr> <td align="center">26</td> <td>[[#TestExecutionFor4Steps][Test Execution]]: user selects View Test Execution Records, searches by 
name, starts execution, enters a pass/fail verdict, reviews results, sets points, then saves.</td> </tr> <tr> <td align="center">11</td> <td>[[#SearchDefect][Search defect]]: user searches for a specific defect by number, reviews the defect (pause), then closes it.</td> </tr> <tr> <td align="center">2</td> <td>[[#EditTestSuite][Edit test suite]]: user expands the search options, searches by name, sorts by last modified, advances to the next page, selects a random test suite and opens it, edits the summary, clicks the test cases section, clicks the button to add a test case, selects a random test case from the default test plan, and saves.</td> </tr> <tr> <td rowspan=10>Tester</td> <td align="center">8</td> <td>[[#CreateDefect][Create defect]]: user creates a defect by opening the Create Defect page, entering data for the new defect, and then saving it.</td> </tr> <tr> <td align="center">5</td> <td>[[#ModifyDefect][Modify defect]]: user searches for a specific defect by number, modifies it, then saves it.</td> </tr> <tr> <td align="center">14</td> <td>[[#SearchDefect][Search defect]]: user searches for a specific defect by number, reviews the defect (pause), then closes it.</td> </tr> <tr> <td align="center">6</td> <td>[[#EditTestCase][Edit test case]]: user searches for a test case by name, opens it in the editor, adds a test script to it (clicking Next a few times, exercising the server-side paging feature, before selecting the test script), and then saves the test case.</td> </tr> <tr> <td align="center">4</td> <td>[[#CreateTestScript][Create test script]]: user creates a test script by selecting the Create Test Script page, entering data for the new test script, and then saving it.</td> </tr> <tr> <td align="center">8</td> <td>[[#EditTestScript][Edit test script]]: user selects a test script by name; the 
test script is then opened for editing, modified, and then saved.</td> </tr> <tr> <td align="center">42</td> <td>[[#TestExecutionFor4Steps][Test Execution]]: user selects View Test Execution Records, searches by name, starts execution, enters a pass/fail verdict, reviews results, sets points, then saves.</td> </tr> <tr> <td align="center">7</td> <td>[[#BrowseTestCaseExecutionRecords][Browse test case execution record]]: user browses test case execution records (TERs) by name, then selects a TER and opens the most recent results.</td> </tr> <tr> <td align="center">2</td> <td>[[#BrowseTestCaseResults][Browse test case results]]: user views all test case results, expands the search options, searches by name, advances to the next page, selects and opens a randomly selected result, then clicks the section to view previous results.</td> </tr> <tr> <td align="center">2</td> <td>[[#BrowseTestSuiteResults][Browse test suite results]]: user views all test suite results, expands the search options, searches by name and failed status, advances to the next page, then selects and opens a randomly selected result.</td> </tr> <tr> <td>Dashboard Viewer</td> <td align="center">100</td> <td>[[#ViewDashboardWithLogin][View dashboard (with login and logout)]]: user logs in, views the dashboard, then logs out. This user adds login/logout behavior to the workload.</td> </tr> </tbody> </table> ---+++ Execution Count The average number of executions per scenario for the 1,000 users generating the workload described above is shown below: <img src="%ATTACHURLPATH%/distribution.png" /> ---+++ Response times The median response time is used to describe page performance because it is less affected by the variation encountered during the test runs. ---++ Appendix C: Detailed performance results #DetailedResults ---+++ Median page response time breakdown #MedianRespTime In the response time charts that follow, the unit is milliseconds. When interpreting the data, smaller is better. 
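As a small illustration of why the median is preferred over the mean for these charts, consider a hypothetical set of page response times (the numbers below are invented for illustration, not measured data):

```python
# A few slow outliers (warm-up requests, GC pauses) pull the mean up sharply
# but barely move the median, so the median better describes the typical
# page load. All values are made-up illustrative samples in milliseconds.
from statistics import mean, median

typical = [180, 190, 200, 210, 220]         # steady-state samples
with_outliers = typical + [4000, 6000]      # plus two slow, atypical samples

print(mean(typical), median(typical))              # both 200 ms
print(mean(with_outliers), median(with_outliers))  # mean jumps past 1500 ms; median only 210 ms
```

Two outliers among seven samples move the median by 5%, while the mean is inflated nearly eightfold.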
#BrowseTestCaseExecutionRecords Browse Test Case Execution Records <img src="%ATTACHURLPATH%/browseTERs.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #BrowseTestCaseResults Browse Test Case Results <img src="%ATTACHURLPATH%/browseTestCaseResults.PNG" /> [[#TestCases][Back to Test Cases & workload characterization]] #BrowseTestPlansAndTestCases Browse Test Plans & Test Cases <img src="%ATTACHURLPATH%/browseTPsTCs.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #BrowseTestScript Browse Test Scripts <img src="%ATTACHURLPATH%/browseTestScripts.PNG" /> [[#TestCases][Back to Test Cases & workload characterization]] #BrowseTestSuiteResults Browse Test Suite Results <img src="%ATTACHURLPATH%/browseTestSuiteResults.PNG" /> [[#TestCases][Back to Test Cases & workload characterization]] #BulkEditofTestCases Bulk Edit of Test Cases <img src="%ATTACHURLPATH%/bulkEditTC.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #CreateDefect Create Defect <img src="%ATTACHURLPATH%/createDefect.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #CreateTestCase Create Test Case <img src="%ATTACHURLPATH%/createTestCase.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #CreateTestPlan Create Test Plan <img src="%ATTACHURLPATH%/createTestPlan.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #CreateTestScript Create Test Script <img src="%ATTACHURLPATH%/createTestScript.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #EditTestCase Edit Test Case <img src="%ATTACHURLPATH%/editTestCase.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #EditTestEnvironment Edit Test Environment <img src="%ATTACHURLPATH%/editTestEnv.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #EditTestPlan Edit Test Plan <img src="%ATTACHURLPATH%/editTestPlan.png" /> [[#TestCases][Back to Test Cases & workload characterization]] 
#EditTestScript Edit Test Script <img src="%ATTACHURLPATH%/editTestScript.PNG" /> [[#TestCases][Back to Test Cases & workload characterization]] #EditTestSuite Edit Test Suite <img src="%ATTACHURLPATH%/editTestSuite.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #ModifyDefect Modify Defect <img src="%ATTACHURLPATH%/modifyDefect.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #SearchDefect Search Defect <img src="%ATTACHURLPATH%/searchDefect.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #SimpleTestPlanCopy Simple Test Plan Copy <img src="%ATTACHURLPATH%/simpleCopy.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #TestExecutionFor4Steps Test Execution For 4 Steps <img src="%ATTACHURLPATH%/executeTestSteps4.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #ViewDashboard View Dashboard <img src="%ATTACHURLPATH%/viewDashboard.png" /> [[#TestCases][Back to Test Cases & workload characterization]] #ViewDashboardWithLogin View Dashboard with Login <img src="%ATTACHURLPATH%/viewDashboardDV.png" /> [[#TestCases][Back to Test Cases & workload characterization]] ---+++ RPT network transfer <img src="%ATTACHURLPATH%/RPT_network.png" /> ---+++ Resource utilization %TABLE{ sort="off" headerbg="#3399FF" cellpadding="2" cellspacing="2" dataalign="center" caption="OS Resource Utilization - Overview" tableborder="2" tableframe="border" tablerules="none"}% |CPU|<img src="%ATTACHURLPATH%/CPU.png" />| |Disk|<img src="%ATTACHURLPATH%/Disk.png" />| |Memory|<img src="%ATTACHURLPATH%/Memory.PNG" />| |Network|<img src="%ATTACHURLPATH%/Network.PNG" />| ---+++ Garbage collection - JVM GC Charts For JVM parameters, please refer to [[#Appendix_D_Key_configuration_par][Appendix D: Key configuration parameters]]. Verbose garbage collection (GC) is enabled by default to log the GC activities. Below is one example of the output from the GC log for each application. 
%TABLE{ sort="off" headerbg="#3399FF" cellpadding="2" cellspacing="2" dataalign="center" caption="WAS Liberty JVM Garbage Collection Charts for RQM and JTS Servers" tableborder="2" tableframe="border" tablerules="none"}% |QM|<img src="%ATTACHURLPATH%/GC_QM_Run1.png" />| |JTS|<img src="%ATTACHURLPATH%/GC_JTS_Run1.png" />| ---++ Appendix D: Key configuration parameters <table class="gray-table"> <tbody> <tr> <th align="left" width="200"><strong>Product</strong><br></th> <th align="left" width="100"><strong>Version</strong></th> <th align="left" width="400"><strong>Highlights for configurations under test</strong></th> </tr> <tr> <td style="vertical-align: top;">HAProxy</td> <td style="vertical-align: top;">1.5.18</td> <td style="vertical-align: top;">HAProxy functions as a reverse proxy server. Configuration details can be found from the [[http://haproxy.1wt.eu/download/1.4/doc/configuration.txt][HAProxy Configuration Manual]]. *haproxy.cfg*: * timeout http-request 20s * timeout connect 20s * timeout client 5m * timeout server 5m *OS Configuration*: * max user processes = 65536 </td> </tr> <tr> <td style="vertical-align: top;">IBM !WebSphere Application Server Liberty</td> <td style="vertical-align: top;">18.0.0.3</td> <td style="vertical-align: top;"><strong>JVM settings:</strong> * Memory settings specified in server.startup before line <strong>export JAVA_OPTS</strong>: <verbatim> JAVA_OPTS="$JAVA_OPTS -Xmx8g" JAVA_OPTS="$JAVA_OPTS -Xms8g" JAVA_OPTS="$JAVA_OPTS -Xmn2g" JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=2G" </verbatim> *SDK version:* * IBM WebSphere SDK for Java Technology Edition Version 8.0.5.25 *OS Configuration:* System wide resources for the app server process owner: * max user processes = 65536 * open files = 65536 </td> </tr> <tr> <td>DB2</td> <td>ESE 10.5.0.9</td> <td>See [[#DBServTuning][Appendix E: Database Server Tuning]]</td> </tr> <tr> <td>LDAP server</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>License server</td> <td>N/A</td> 
<td>N/A</td> </tr> <tr> <td>RPT workbench</td> <td>9.0.0.2</td> <td>Defaults</td> </tr> <tr> <td>RPT agents</td> <td>9.0.0.2</td> <td>Defaults</td> </tr> <tr> <td>Network</td> <td> </td> <td>Shared subnet within test lab</td> </tr> </tbody> </table> #DBServTuning ---++ Appendix E: Database server tuning ---+++ Indexes After examining several browse- and search-related queries within the database, we added indexes to improve both response times and memory usage. The indexes were created on the RQM database (IBM DB2 v10.5) as follows: <verbatim> ########## to improve default browse XXX pages ########## db2 'CREATE INDEX "EXECUTION"."TER_JZ_ARC_MOD_PRJ_ITM_ST" ON "EXECUTION"."EXECUTION_RECORD" ( "JZ_DISCRIMINATOR" ASC, "ARCHIVED" ASC, "MODIFIED" DESC, "PROJECT_AREA_ITEM_ID" ASC, "ITEM_ID" ASC, "STATE_ID" ASC )' db2 'CREATE INDEX "EXECUTION"."TCER_JZ_PRJ_ARC_ENT_ITM_ST" ON "EXECUTION"."EXECUTION_TASK_RESULT" ( "JZ_DISCRIMINATOR" ASC, "PROJECT_AREA_ITEM_ID" ASC, "ARCHIVED" ASC, "END_TIME" ASC, "ITEM_ID" ASC, "STATE_ID" ASC )' db2 'CREATE INDEX "EXECUTION"."TSER_JZ_ARC_ENT_PRJ_ITM_ST" ON "EXECUTION"."EXECUTION_TASK_RESULT" ( "JZ_DISCRIMINATOR" ASC, "ARCHIVED" ASC, "END_TIME" DESC, "PROJECT_AREA_ITEM_ID" ASC, "ITEM_ID" ASC, "STATE_ID" ASC )' ########## the following are function-based indexes for name-search filters at large scale; add them only on IBM DB2 v10.5 and above ################# ########## to improve searching via keywords in the Name filter ########## db2 'CREATE INDEX EXECUTION.NAMECOL_UPPER_TER on EXECUTION.EXECUTION_RECORD(UPPER(NAME_COL))' </verbatim> The sample script to add these indexes can be found in the Attachments section at the bottom of this report. ---+++ Runstats If data has been migrated from a previous release, ensure optimal performance by updating the database statistics. 
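Both tuning steps in this appendix — the function-based index on UPPER(NAME_COL) and refreshed optimizer statistics — can be sketched in miniature with SQLite, which also supports indexes on expressions. This sketch is illustrative only (SQLite's ANALYZE loosely stands in for DB2 RUNSTATS; the real commands are the db2 ones given in this appendix):

```python
# Miniature illustration (SQLite, not DB2) of a function-based index plus
# refreshed statistics: indexing UPPER() of a column lets case-insensitive
# name lookups seek the index instead of scanning and upper-casing every row.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE execution_record (item_id INTEGER, name_col TEXT)")
con.executemany(
    "INSERT INTO execution_record VALUES (?, ?)",
    [(i, f"ter {i}") for i in range(10_000)],
)

# Expression index, analogous in spirit to EXECUTION.NAMECOL_UPPER_TER above.
con.execute("CREATE INDEX namecol_upper_ter ON execution_record (UPPER(name_col))")
con.execute("ANALYZE")  # refresh statistics so the planner costs the index correctly

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT item_id FROM execution_record "
    "WHERE UPPER(name_col) = 'TER 42'"
).fetchall()
for row in plan:
    print(row[-1])  # should report a SEARCH ... USING INDEX namecol_upper_ter step
```

On DB2 itself, the access plan for such a query can be confirmed with the db2exfmt explain tool.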
Typically, we run the following: <verbatim>db2 REORGCHK UPDATE STATISTICS ON TABLE ALL</verbatim> For the series of tests covered for this report, we also ran: <verbatim> db2 'RUNSTATS ON TABLE "PLANNING"."TEST_CASE" ON COLUMNS ( "NAME_COL" ) AND SAMPLED DETAILED INDEXES ALL ALLOW WRITE ACCESS TABLESAMPLE BERNOULLI ( 40.0 ) REPEATABLE ( 50 ) UTIL_IMPACT_PRIORITY 50' db2 'RUNSTATS ON TABLE "PLANNING"."EXECUTION_SCRIPT" ON COLUMNS ( "NAME_COL" ) AND SAMPLED DETAILED INDEXES ALL ALLOW WRITE ACCESS TABLESAMPLE BERNOULLI ( 40.0 ) REPEATABLE ( 50 ) UTIL_IMPACT_PRIORITY 50' db2 'RUNSTATS ON TABLE EXECUTION.EXECUTION_RECORD ON COLUMNS ( "NAME_COL" ) AND SAMPLED DETAILED INDEXES ALL ALLOW WRITE ACCESS TABLESAMPLE BERNOULLI ( 40.0 ) REPEATABLE ( 50 ) UTIL_IMPACT_PRIORITY 50' db2 'RUNSTATS ON TABLE EXECUTION.EXECUTION_TASK_RESULT ON COLUMNS ( "NAME_COL" ) AND SAMPLED DETAILED INDEXES ALL ALLOW WRITE ACCESS TABLESAMPLE BERNOULLI ( 40.0 ) REPEATABLE ( 50 ) UTIL_IMPACT_PRIORITY 50' db2 'RUNSTATS ON TABLE PLANNING.TEST_SUITE ON COLUMNS ( "NAME_COL" ) AND SAMPLED DETAILED INDEXES ALL ALLOW WRITE ACCESS TABLESAMPLE BERNOULLI ( 40.0 ) REPEATABLE ( 50 ) UTIL_IMPACT_PRIORITY 50' db2 'RUNSTATS ON TABLE PLANNING.TEST_PLAN ON COLUMNS ( "NAME_COL" ) AND SAMPLED DETAILED INDEXES ALL ALLOW WRITE ACCESS TABLESAMPLE BERNOULLI ( 40.0 ) REPEATABLE ( 50 ) UTIL_IMPACT_PRIORITY 50' db2 'RUNSTATS ON TABLE "REPOSITORY"."VERSION" WITH DISTRIBUTION AND DETAILED INDEXES ALL' </verbatim> For more runstats commands, please refer to the [[https://jazz.net/help-dev/clm/index.jsp?topic=%2Fcom.ibm.jazz.install.doc%2Ftopics%2Froadmap_clm_upgrade.html][upgrade guide]] for details. -------------------- ---++++!! About the author(s): Main.SentellaCystrunk is a performance engineer focusing on the performance and scalability of products in the Collaborative Lifecycle Management family. ---++++!! 
Related topics: [[CollaborativeLifecycleManagementPerformanceReportRQM604Release][Collaborative Lifecycle Management performance report: Rational Quality Manager 6.0.4 release]] -------------------- ---+++++!! Questions and comments: * What other performance information would you like to see here? * Do you have performance scenarios to share? * Do you have scenarios that are not addressed in documentation? * Where are you having problems in performance? %COMMENT{type="below" target="PerformanceDatasheetReaderComments" button="Submit"}% %INCLUDE{"PerformanceDatasheetReaderComments"}% <sticky></div></sticky>
---++ Attachments

| *Attachment* | *Size* | *Date* |
| createDB2Indices_606RQM_optout_sample.sh | 2.7 K | 2019-06-20 |