r14 - 2014-06-18 - 21:34:13 - Main.gcovell
<sticky><div id="header-title" style="padding: 10px 15px; border-width:1px; border-style:solid; border-color:#FFD28C; background-image: url(<nop>https://jazz.net/wiki/pub/Deployment/WebPreferences/TLASE.jpg); background-size: cover; font-size:120%"> <!-- * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup, Main.TWikiDeploymentDatasheetsAuthorsGroup, Main.GrantCovell -->
---+!! <img src="https://jazz.net/wiki/pub/Deployment/WebPreferences/uc.png" alt="uc.png" width="50" height="50" align="right">CLM reliability report: CLM 5.0 release<br/>
%DKGRAY% Authors: [[Main.JeffPang][Jeff Pang]]<br/>
Last updated: June 2, 2014<br/>
Build basis: Collaborative Lifecycle Management 5.0 %ENDCOLOR%</div></sticky>

<!-- Page contents top of page on right hand side in box -->
<sticky><div style="float:right; border-width:1px; border-style:solid; border-color:#DFDFDF; background-color:#F6F6F6; margin:0 0 15px 15px; padding: 0 15px 0 15px;">
%TOC{title="Page contents"}%
</div></sticky>

<sticky><div style="margin:15px;"></sticky>

---++ Introduction

Reliability testing exercises the CLM applications so that failures are discovered and removed before the system is deployed. There are many possible combinations of pathways through the complex CLM applications; this test scenario exercises the most likely use cases (described in detail below). The use cases are placed under constant load for a seven-day period to validate that the CLM applications provide the expected level of service, without any downtime or degradation in overall system performance. This report is a sample of the results from a recent CLM 5.0 GA reliability run.

---+++!! Disclaimer

%INCLUDE{"PerformanceDatasheetDisclaimer"}%

---++ Findings

---+++ Performance goals

---++++ Primary Goals - Detection
   * Test CLM performance under the given conditions, monitoring for any degradation over the course of the 7-day run.
   * Monitor for increases in the overall page or page-element averages.
   * Report on any specific page whose response time increases over time.

---++++ Secondary Goals - Analysis
   * Discover the main cause of any failure.
   * Find and correlate the structure of repeating failures.

---+++ Findings
   * The 7-day run completed without any crashes or noticeable degradation in performance.
   * CCM accumulated thousands of sprite-related tmp files; the issue has been addressed in jazz.net work item [[https://jazz.net/jazz/web/projects/Rational%20Team%20Concert#action=com.ibm.team.workitem.viewWorkItem&id=318899][318899]].
   * RM showed increased memory consumption during the 7-day period; the issue has been addressed in jazz.net work item [[https://jazz.net/jazz/web/projects/Jazz%20Foundation#action=com.ibm.team.workitem.viewWorkItem&id=87921][87921]].
   * The load settings and distribution fit our anticipated scalability needs.

---++ Topology

The topology under test is based on [[RecommendedCLMDeploymentTopologies#CLM_E1_Enterprise_Distributed_Li][Standard Topology (E1) Enterprise - Distributed / Linux / DB2]].

<img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/ServerOverview.png" alt="Server Overview" width="70%" height="70%" />

The specifications of the machines under test are listed in the table below.
Server tuning details are listed in *[[#AppendixA][Appendix A]]*. In the table, a =v= prefix indicates a virtualized resource allocated to the VM.

<table class="gray-table"> <tbody>
<tr> <th valign="top">Function</th> <th valign="top">Number of Machines</th> <th valign="top">Machine Type</th> <th valign="top">CPU / Machine</th> <th valign="top">Total # of CPU Cores/Machine</th> <th valign="top">Memory/Machine</th> <th valign="top">Disk</th> <th valign="top">Disk capacity</th> <th valign="top">Network interface</th> <th valign="top">OS and Version</th> </tr>
<tr> <td>IBM HTTP Server and !WebSphere Plugin</td> <td>1</td> <td>IBM System x3250 M4</td> <td>1 x Intel Xeon E3-1240 3.4GHz (quad-core)</td> <td>8</td> <td>16GB</td> <td>RAID 1 -- SAS Disk x 2</td> <td>299GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Linux Server 2.6.32-431.1.2.el6.x86_64, !WebSphere Application Server 8.5.5.1 64-bit</td> </tr>
<tr> <td>JTS Server</td> <td>1</td> <td>VMware Image</td> <td>VMware Hypervisor, Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz</td> <td>v4</td> <td>v8GB</td> <td>VMFS</td> <td>v100GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Linux Server 2.6.32-358.14.1.el6.x86_64, !WebSphere Application Server 8.5.5.1 64-bit</td> </tr>
<tr> <td>CCM Server</td> <td>1</td> <td>VMware Image</td> <td>VMware Hypervisor, Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz</td> <td>v4</td> <td>v8GB</td> <td>VMFS</td> <td>v100GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Linux Server 2.6.32-358.14.1.el6.x86_64, !WebSphere Application Server 8.5.5.1 64-bit</td> </tr>
<tr> <td>QM Server</td> <td>1</td> <td>VMware Image</td> <td>VMware Hypervisor, Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz</td> <td>v4</td> <td>v8GB</td> <td>VMFS</td> <td>v100GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Linux Server 2.6.32-358.14.1.el6.x86_64, !WebSphere Application Server 8.5.5.1 64-bit</td> </tr>
<tr> <td>RM Server</td> <td>1</td> <td>VMware Image</td> <td>VMware Hypervisor, Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz</td> <td>v4</td> <td>v16GB</td> <td>VMFS</td> <td>v200GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Linux Server 2.6.32-358.14.1.el6.x86_64, !WebSphere Application Server 8.5.5.1 64-bit</td> </tr>
<tr> <td>DB2 Server</td> <td>1</td> <td>IBM System x3550 M4</td> <td>2 x Intel Xeon E5-2640 2.5GHz (six-core)</td> <td>24</td> <td>64GB</td> <td>RAID 5 -- SAS Disk x 2</td> <td>2.4TB</td> <td>Gigabit Ethernet</td> <td>Red Hat Linux Server 2.6.32-358.14.1.el6.x86_64, DB2 Enterprise 10.5.1</td> </tr>
<tr> <td>RPT Workbench</td> <td>1</td> <td>VMware Image</td> <td>VMware Hypervisor, Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz</td> <td>v4</td> <td>v8GB</td> <td>VMFS</td> <td>v100GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Linux Server 2.6.32-431.11.2.el6.x86_64, Rational Performance Tester 8.3.0.3 64-bit</td> </tr>
<tr> <td>RPT Agent</td> <td>6</td> <td>VMware Image</td> <td>VMware Hypervisor, Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz</td> <td>v4</td> <td>v4GB</td> <td>VMFS</td> <td>v30GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Linux Server 2.6.32-431.17.1.el6.x86_64, Rational Performance Tester Agent 8.3.0.3 64-bit</td> </tr>
<tr> <td>Network switches</td> <td>N/A</td> <td>Cisco 2960G-24TC-L</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>Gigabit Ethernet</td> <td>24 Ethernet 10/100/1000 ports</td> </tr>
</tbody> </table>

---+++ Network connectivity

All CLM server machines and RPT test driver machines are located on the same subnet. The LAN has a maximum bandwidth of 1000 Mbps, and the average latency between the IHS server and the distributed application servers is 1.12 ms. The LDAP server is on the same subnet as well, with a latency to the CLM servers of less than 0.8 ms.
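Latency of this order can be spot-checked without extra tooling by timing TCP connection setup between hosts. The sketch below is illustrative only and was not part of the test harness; it times connects against an in-process listener standing in for a CLM application server, but in practice you would point it at the IHS or JTS host and port.

```python
import socket
import statistics
import time

def tcp_connect_latency_ms(host, port, samples=5):
    """Return the mean TCP connect time to host:port, in milliseconds."""
    timings = []
    for _ in range(samples):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2.0)
            start = time.perf_counter()
            s.connect((host, port))
            timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(timings)

# Demo: an in-process listener stands in for a remote CLM server.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # ephemeral port on localhost
listener.listen(16)
host, port = listener.getsockname()
avg_ms = tcp_connect_latency_ms(host, port)
print(f"average TCP connect latency: {avg_ms:.3f} ms")
listener.close()
```

TCP connect time approximates one network round trip plus kernel overhead, so repeated samples against each application server give a quick sanity check that inter-server latency stays near the ~1 ms figures above.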
---+++ Data volume and shape

At the beginning of the test, the repository projects were populated to the following sizes:
   * RM/DNG - 1 project with ~84K artifacts (imported from the medium project template for RM)
   * CCM - 1 project populated with 100K artifacts
   * QM - 1 project populated with 100K artifacts
   * Admin - 1 CLM project with artifacts linked across all projects

---++ Methodology

Rational Performance Tester (RPT) was used to simulate the CLM workload as driven through the web client. Simulated users are distributed across a set of user groups, each representing a CLM product area (e.g. CCM, QM, RM). Each user group performs a random selection from its set of available use cases. An RPT script is created for each use case; the scripts are organized into pages, and each page represents a single user action.

Based on real customer use, the CLM test scenario provides a ratio of 70% reads and 30% writes. The total CLM workload distribution is 45% CCM, 30% QM, 10% Integrations, and 15% DNG Web and RM. Test cases with higher weights run more frequently within their user groups.
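The two-level weighted selection just described (pick a user group by workload share, then a use case by its weight) can be sketched as follows. The group shares and the sampled use-case weights come from this report's tables, but the sampler itself is an illustration, not RPT's actual scheduler.

```python
import random

# Workload mix from this report: 45% CCM, 30% QM, 10% Integrations, 15% RM/DNG.
GROUP_WEIGHTS = {"CCM": 45, "QM": 30, "Int": 10, "RM/DNG": 15}

# A small subset of the per-group use-case weights from the distribution table.
USE_CASE_WEIGHTS = {
    "CCM": {"RTC_QueryAndViewDefect": 60, "RTC_QueryAndEditDefect": 10, "RTC_CreateDefect": 5},
    "QM": {"QM_BrowseTPsTCs": 60, "QM_executeTestSteps4": 60, "QM_browseTestScript": 32},
    "Int": {"TER Links": 5, "Test plan Links": 5},
    "RM/DNG": {"OpenRMAdmin": 39, "CreateModuleArtifactComment": 15},
}

def next_use_case(rng):
    """Pick a user group by its workload share, then a use case by its weight."""
    group = rng.choices(list(GROUP_WEIGHTS), weights=list(GROUP_WEIGHTS.values()))[0]
    cases = USE_CASE_WEIGHTS[group]
    return group, rng.choices(list(cases), weights=list(cases.values()))[0]

rng = random.Random(42)  # seeded for a reproducible demo
picks = [next_use_case(rng)[0] for _ in range(10_000)]
ccm_share = picks.count("CCM") / len(picks)
print(f"observed CCM share: {ccm_share:.3f}")
```

Over many iterations, the observed group shares converge on the configured 45/30/10/15 split, which is how the constant-load run preserves the intended workload mix.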
---+++ CLM test cases and weighted distribution
#TestCases

<table class="gray-table"> <tbody>
<tr> <th align="center">Use case</th> <th align="center">Description</th> <th align="center">User Group</th> <th align="center">Weight within User Group loop</th> </tr>
<tr> <td>RTC_CreateDefect</td> <td>CCM User logs in to their project area, creates a defect, then logs out.</td> <td>CCM</td> <td align="center">5</td> </tr>
<tr> <td>RTC_QueryAndEditDefect</td> <td>CCM User logs in to their project area, queries for a defect, edits the defect, then logs out.</td> <td>CCM</td> <td align="center">10</td> </tr>
<tr> <td>RTC_QueryAndViewDefect</td> <td>CCM User logs in to their project area, queries for a defect, views the defect, then logs out.</td> <td>CCM</td> <td align="center">60</td> </tr>
<tr> <td>RTC_DashboardInitialization</td> <td>CCM User logs in to their project area, performs a non-cached first-time dashboard access, then logs out.</td> <td>CCM</td> <td align="center">5</td> </tr>
<tr> <td>RTC_DefaultDashboard</td> <td>CCM User logs in to their project area, opens the default dashboard, then logs out.</td> <td>CCM</td> <td align="center">5</td> </tr>
<tr> <td>RTC_FeedsDashboard</td> <td>CCM User logs in to their project area, opens a dashboard with a feed only, then logs out.</td> <td>CCM</td> <td align="center">5</td> </tr>
<tr> <td>RTC_WorkItemGraphDashboard</td> <td>CCM User logs in to their project area, opens a dashboard with workitem statistics graphs, then logs out.</td> <td>CCM</td> <td align="center">5</td> </tr>
<tr> <td>RTC_WorkitemQueryDashboard</td> <td>CCM User logs in to their project area, opens a dashboard with project shared queries, then logs out.</td> <td>CCM</td> <td align="center">5</td> </tr>
<tr> <td>QM_CreateTestCase</td> <td>QM User logs in to their project area and creates a test case.</td> <td>QM</td> <td align="center">6</td> </tr>
<tr> <td>QM_VewDashboard_DV</td> <td>QM User logs in to their project area and views the dashboard.</td> <td>QM</td> <td align="center">2</td>
</tr> <tr> <td>QM_BrowseTPsTCs</td> <td>QM User logs in to their project area, then browses test plans and test cases.</td> <td>QM</td> <td align="center">60</td> </tr> <tr> <td>QM_BrowseTERs</td> <td>QM User logs in to their project area, then browses test case execution records.</td> <td>QM</td> <td align="center">10</td> </tr> <tr> <td>QM_editTestScript</td> <td>QM User logs in to their project area, opens and edits a test script.</td> <td>QM</td> <td align="center">18</td> </tr> <tr> <td>QM_editTestPlan</td> <td>QM User logs in to their project area, opens and edits a test plan.</td> <td>QM</td> <td align="center">5</td> </tr> <tr> <td>QM_editTestCase</td> <td>QM User logs in to their project area, opens and edits a test case.</td> <td>QM</td> <td align="center">18</td> </tr> <tr> <td>QM_createTestScript</td> <td>QM User logs in to their project area and creates a test script.</td> <td>QM</td> <td align="center">6</td> </tr> <tr> <td>QM_executeTestSteps4</td> <td>QM User logs in to their project area, searches for TER, executes a 4 step TER then shows results.</td> <td>QM</td> <td align="center">60</td> </tr> <tr> <td>QM_browseTestScript</td> <td>QM User logs in to their project area and browses a test script.</td> <td>QM</td> <td align="center">32</td> </tr> <tr> <td>QM_bulkEditTC</td> <td>QM User logs in to their project area, selects and edits multiple test cases.</td> <td>QM</td> <td align="center">1</td> </tr> <tr> <td>QM_createDefect</td> <td>QM User logs in to their project area and creates a defect.</td> <td>QM</td> <td align="center">8</td> </tr> <tr> <td>QM_createTestPlan</td> <td>QM User logs in to their project area and creates a test plan.</td> <td>QM</td> <td align="center">1</td> </tr> <tr> <td>QM_editTestEnv</td> <td>QM User logs in to their project area and edits a test environment.</td> <td>QM</td> <td align="center">8</td> </tr> <tr> <td>QM_fullTextSearch</td> <td>QM User logs in to their project area and does text search.</td> <td>QM</td> 
<td align="center">3</td> </tr>
<tr> <td>QM_modifyDefect</td> <td>QM User logs in to their project area and modifies a defect.</td> <td>QM</td> <td align="center">5</td> </tr>
<tr> <td>QM_searchDefect</td> <td>QM User logs in to their project area and searches for a defect.</td> <td>QM</td> <td align="center">14</td> </tr>
<tr> <td>QM_simpleCopy</td> <td>QM User logs in to their project area and copies a test plan.</td> <td>QM</td> <td align="center">1</td> </tr>
<tr> <td>QM_viewDashboard</td> <td>QM User logs in to their project area and refreshes the dashboard.</td> <td>QM</td> <td align="center">20</td> </tr>
<tr> <td>QPlan to Req Collection</td> <td>QM User logs in, creates a new test plan, then creates a new quality task and links it to an RTC task, then logs out.</td> <td>Int</td> <td align="center">5</td> </tr>
<tr> <td>Test case links Dev item</td> <td>QM User logs in, creates a new test case and links it to an RTC workitem, then logs out.</td> <td>Int</td> <td align="center">5</td> </tr>
<tr> <td>TER Links</td> <td>QM User logs in to their project, creates and runs a new test case, then creates an RTC defect linked to the TER, then logs out.</td> <td>Int</td> <td align="center">5</td> </tr>
<tr> <td>Test script Links</td> <td>QM User logs in to their project, creates a new test script and links it to an RTC workitem, then logs out.</td> <td>Int</td> <td align="center">5</td> </tr>
<tr> <td>Test plan Links</td> <td>QM User logs in to their project, creates a new test plan and links it to a new RM collection, then logs out.</td> <td>Int</td> <td align="center">5</td> </tr>
<tr> <td>RM artifact links to test case</td> <td>RM User logs in to their project area, creates a new artifact, then links it to a new test case in QM, then logs out.</td> <td>Int</td> <td align="center">1</td> </tr>
<tr> <td>RM artifact links to workitem</td> <td>RM User logs in to their project area, creates a new artifact, then links it to a new workitem in CCM, then logs out.</td>
<td>Int</td> <td align="center">1</td> </tr>
<tr> <td>RM collection links to test plan</td> <td>RM User logs in to their project area, creates a new collection, then links it to a new QM test plan, then logs out.</td> <td>Int</td> <td align="center">1</td> </tr>
<tr> <td>RM collection links to release plan</td> <td>RM User logs in to their project area, creates a new collection, then links it to a new CCM release plan, then logs out.</td> <td>Int</td> <td align="center">1</td> </tr>
<tr> <td>CreateArtifact</td> <td>RM user logs in to their project area and creates a new artifact.</td> <td>RM/DNG</td> <td align="center">3</td> </tr>
<tr> <td>CreateCollection</td> <td>RM user logs in to their project area and creates a new collection.</td> <td>RM/DNG</td> <td align="center">1</td> </tr>
<tr> <td>EditModuleArtifact</td> <td>RM user logs in to their project area, opens a module and edits a module artifact.</td> <td>RM/DNG</td> <td align="center">3</td> </tr>
<tr> <td>OpenCollection</td> <td>RM user logs in to their project area and opens a collection.</td> <td>RM/DNG</td> <td align="center">1</td> </tr>
<tr> <td>ManageFolders</td> <td>RM user logs in to their project area and creates/moves/deletes folders.</td> <td>RM/DNG</td> <td align="center">1</td> </tr>
<tr> <td>OpenProjectDashboard</td> <td>RM user logs in to their project area and opens the dashboard.</td> <td>RM/DNG</td> <td align="center">5</td> </tr>
<tr> <td>OpenAndScrollLargeModule</td> <td>RM User logs in, opens a large module, then scrolls through all 1500 artifacts.</td> <td>RM/DNG</td> <td align="center">10</td> </tr>
<tr> <td>ModuleViewTraceReportPDF</td> <td>RM User logs in, opens a large module, then generates and publishes a traceability report.</td> <td>RM/DNG</td> <td align="center">2</td> </tr>
<tr> <td>OpenRMAdmin</td> <td>RM User logs in, then opens the administration page.</td> <td>RM/DNG</td> <td align="center">39</td> </tr>
<tr> <td>ReviewModule</td> <td>RM User logs in, opens and reviews a
large module.</td> <td>RM/DNG</td> <td align="center">1</td> </tr>
<tr> <td>CreateModuleArtifactComment</td> <td>RM User logs in, opens a large module, then adds a comment to a module artifact.</td> <td>RM/DNG</td> <td align="center">15</td> </tr>
<tr> <td>DisplayModuleHistory</td> <td>RM User logs in, opens a module and shows the module history.</td> <td>RM/DNG</td> <td align="center">1</td> </tr>
<tr> <td>Upload4MBNewArtifact</td> <td>RM User logs in, then uploads new artifacts from a 4MB file.</td> <td>Long Operations</td> <td align="center">1</td> </tr>
<tr> <td>ImportRegIF</td> <td>RM User logs in, then imports new artifacts from a file in Requirements Interchange Format.</td> <td>Long Operations</td> <td align="center">1</td> </tr>
<tr> <td>ImportCSV</td> <td>RM User logs in, then imports new artifacts from a CSV file.</td> <td>Long Operations</td> <td align="center">1</td> </tr>
<tr> <td>ImportWordDoc</td> <td>RM User logs in, then imports new artifacts from a Word document.</td> <td>Long Operations</td> <td align="center">1</td> </tr>
</tbody> </table>

---++ Results

*IHS Thread Analysis*

IHS thread usage is monitored to determine how well the IHS server keeps up with the load, and whether IHS is tuned to move threads through the system efficiently. The following chart shows how well the IHS server handled its threads; of note, the threads in the WAS module do not accumulate over time, which indicates that the !WebSphere Application Servers are keeping up with the load.

<img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/IHSThreads.PNG" width="50%" height="50%"/>

*RPT Page Hit Rate*

The RPT page hit rate is captured to monitor the load and to help correlate any incidents with changes in load. For the purposes of this test, the load is consistent.
<img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/RPT_Page_HitRate.PNG" width="50%" height="50%"/>

*RPT Page Element Response over Time*

The page element response rate is captured to monitor the average page element response time over the run.

<img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/RPT_Page_Element_Response.PNG" width="50%" height="50%"/>

*RPT Average Page Response Time*

The RPT average page response time shows the average across all pages over time. This is very useful for determining which pages, if any, are unexpectedly increasing in response time over the seven-day period.

<img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/RPT_Page_Response.PNG" width="50%" height="50%"/>

*RPT Byte Transfer Rate*

The RPT byte transfer rate shows whether the volume of bytes sent or received changes over time, which would indicate a drop in throughput.

<img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/RPT_Byte_Xfer.PNG" width="50%" height="50%"/>

---++ Appendix A
#AppendixA

<table class="gray-table"> <tbody>
<tr> <th align="left" width="300"><strong>Product</strong><br></th> <th align="left" width="200"><strong>Version</strong></th> <th align="left" width="400"><strong>Highlights for configurations under test</strong></th> </tr>
<tr> <td style="vertical-align: top;">IBM HTTP Server for !WebSphere Application Server</td> <td style="vertical-align: top;">8.5.5.1</td> <td style="vertical-align: top;">IBM HTTP Server functions as a reverse proxy server implemented via the Web server plug-in for !WebSphere Application Server. Configuration details can be found in the [[http://pic.dhe.ibm.com/infocenter/clmhelp/v4r0/index.jsp?topic=%2Fcom.ibm.jazz.install.doc%2Ftopics%2Ft_config_reverse_proxy_ihs.html][CLM infocenter]].
*HTTP server (httpd.conf)*: increase the !MaxClients value for high-volume loads [[http://publib.boulder.ibm.com/httpserv/ihsdiag/ihs_performance.html][(adjust the value based on user load)]]
   * !Timeout 150
   * !MaxKeepAliveRequests 100
   * !KeepAliveTimeout 8
   * !ThreadLimit 25
   * !ServerLimit 100
   * !StartServers 1
   * !MaxClients 800
   * !MinSpareThreads 25
   * !MaxSpareThreads 100
   * !ThreadsPerChild 25
   * !MaxRequestsPerChild 0
*Web server plug-in (plugin-cfg.xml)*:
   * !ServerIOTimeout="900"
*OS Configuration*:
   * max user processes = unlimited
</td> </tr>
<tr> <td style="vertical-align: top;">IBM !WebSphere Application Server Network Deployment for JTS, Admin, and CLMHelp applications</td> <td style="vertical-align: top;">8.5.5.1</td> <td style="vertical-align: top;"><strong>JVM settings:</strong>
   * GC policy and arguments, max and init heap sizes: <verbatim>
-Xmx4g -Xms4g -Xmn1g -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
-Xverbosegclog:AppNodeXXX_gc.log -XX:MaxDirectMemorySize=1G
-Xgc:classUnloadingKickoffThreshold=2000 -Xgcpolicy:balanced -Xloaminimum0.02
</verbatim>
*Thread pools:*
   * Maximum !WebContainer = Minimum !WebContainer = 200
*OS Configuration:* System-wide resources for the app server process owner:
   * max user processes = unlimited
   * open files = 65536
</td> </tr>
<tr> <td style="vertical-align: top;">IBM !WebSphere Application Server Network Deployment for CCM application</td> <td style="vertical-align: top;">8.5.5.1</td> <td style="vertical-align: top;"><strong>JVM settings:</strong>
   * GC policy and arguments, max and init heap sizes: <verbatim>
-Xmx4g -Xms4g -Xmn1g -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
-Xverbosegclog:AppNodeXXX_gc.log -XX:MaxDirectMemorySize=1G
-Xgc:classUnloadingKickoffThreshold=2000 -Xgcpolicy:balanced -Xloaminimum0.02
</verbatim>
*Thread pools:*
   * Maximum !WebContainer = Minimum !WebContainer = 200
*OS Configuration:* System-wide resources for the app server process owner:
   * max user processes = unlimited
   * open files = 65536
</td> </tr>
<tr> <td style="vertical-align: top;">IBM !WebSphere Application Server Network Deployment for QM application</td> <td style="vertical-align: top;">8.5.5.1</td> <td style="vertical-align: top;"><strong>JVM settings:</strong>
   * GC policy and arguments, max and init heap sizes: <verbatim>
-Xmx4g -Xms4g -Xmn1g -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
-Xverbosegclog:AppNodeXXX_gc.log -XX:MaxDirectMemorySize=1G
-Xgc:classUnloadingKickoffThreshold=2000 -Xgcpolicy:balanced -Xloaminimum0.02
</verbatim>
*Thread pools:*
   * Maximum !WebContainer = Minimum !WebContainer = 200
*OS Configuration:* System-wide resources for the app server process owner:
   * max user processes = unlimited
   * open files = 65536
</td> </tr>
<tr> <td style="vertical-align: top;">IBM !WebSphere Application Server Network Deployment for RM, Converter applications</td> <td style="vertical-align: top;">8.5.5.1</td> <td style="vertical-align: top;"><strong>JVM settings:</strong>
   * GC policy and arguments, max and init heap sizes: <verbatim>
-Xmx8g -Xms8g -Xmn2g -Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
-Xverbosegclog:AppNodeXXX_gc.log -XX:MaxDirectMemorySize=1G
-Xgc:classUnloadingKickoffThreshold=2000 -Xgcpolicy:balanced -Xloaminimum0.02
</verbatim>
*Thread pools:*
   * Maximum !WebContainer = Minimum !WebContainer = 200
*OS Configuration:* System-wide resources for the app server process owner:
   * max user processes = unlimited
   * open files = 65536
</td> </tr>
<tr> <td>DB2</td> <td>DB2 10.5</td> <td> </td> </tr>
<tr> <td>RPT Workbench</td> <td>8.3.0.3</td> <td>Defaults</td> </tr>
<tr> <td>RPT agents</td> <td>8.3.0.3</td> <td>Defaults</td> </tr>
<tr> <td>Network</td> <td> </td> <td>Shared subnet within test lab</td> </tr>
</tbody> </table>

*Garbage collection*

Verbose garbage collection analysis helps determine how memory and garbage collection are performing for the CLM applications running our use cases. Optimal JVM configuration depends greatly on the usage pattern.
Our goal is to provide guidance on where to start and what key tuning factors impact performance. *JTS* <img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/AppNode_JTS_gc_chart.PNG" width="50%" height="50%"/> <img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/Legend.PNG" width="20%" height="20%"/> *CCM* <img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/AppNode_CCM_gc_chart.PNG" width="50%" height="50%" /> <img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/Legend.PNG" width="20%" height="20%"/> *QM* <img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/AppNode_QM_gc_chart.PNG" width="50%" height="50%"/> <img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/Legend.PNG" width="20%" height="20%"/> *RM* <img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/AppNode_RM_gc_chart.PNG" width="50%" height="50%"/> <img src="https://jazz.net/wiki/pub/Deployment/ReliabilityTestingOfCLM50/Legend.PNG" width="20%" height="20%"/> ---++++!! For more information * [[PerformanceDatasheetsAndSizingGuidelines#CLM_5_0][Collaborative Lifecycle Management 5.0 Performance Datasheets]] ---++++!! About the authors * [[Main.JeffPang][Jeff Pang]] -------------------- ---+++++!! Questions and comments: * What other performance information would you like to see here? * Do you have performance scenarios to share? * Do you have scenarios that are not addressed in documentation? * Where are you having problems in performance? %COMMENT{type="below" target="PerformanceDatasheetReaderComments" button="Submit"}% %INCLUDE{"PerformanceDatasheetReaderComments"}% <sticky></div></sticky>