<sticky><div id="header-title" style="padding: 10px 15px; border-width:1px; border-style:solid; border-color:#FFD28C; background-image: url(<nop>https://jazz.net/wiki/pub/Deployment/WebPreferences/TLASE.jpg); background-size: cover; font-size:120%">
<!-- * Set ALLOWTOPICCHANGE = Main.TWikiDeploymentDatasheetsAuthorsGroup, Main.TWikiAdminGroup, Main.GrantCovell -->
---+!! <img src="https://jazz.net/wiki/pub/Deployment/WebPreferences/new.png" alt="new.png" width="50" height="50" align="right">Collaborative Lifecycle Management performance report: RRC 4.0.5 release<br/>
%DKGRAY% Authors: Main.GustafSvensson<br/>
Last updated: December 6, 2013<br/>
Build basis: Rational Requirements Composer 4.0.5 %ENDCOLOR%</div></sticky>

<!-- Page contents top of page on right hand side in box -->
<sticky><div style="float:right; border-width:1px; border-style:solid; border-color:#DFDFDF; background-color:#F6F6F6; margin:0 0 15px 15px; padding: 0 15px 0 15px;"> %TOC{title="Page contents"}% </div></sticky>
<sticky><div style="margin:15px;"></sticky>

---++ Introduction

This report compares the performance of an unclustered Rational Requirements Composer version 4.0.5 deployment to the previous 4.0.4 release. The test objective is achieved in three steps:

   * Run version 4.0.4 with the standard 1-hour test using 400 concurrent users.
   * Run version 4.0.5 with the standard 1-hour test using 400 concurrent users.
   * The test is run three times for each version and the resulting six tests are compared with each other. Three tests per version are used to get a more accurate picture, since variations are expected between runs.

---+++!! Disclaimer

%INCLUDE{"PerformanceDatasheetDisclaimer"}%

---++ Findings

---+++ Performance goals

   * Verify that there are no performance regressions between the current release and the prior release with 400 concurrent users using the workload described below.
   * The challenge we experience is that there is high variance in execution time within a single run.
Depending on the load that the server is under at any given moment, the execution time will vary considerably. This could be avoided by testing performance in a single-user environment, but the purpose of this test is to ensure that performance is acceptable even when the server is under heavy load. Using the median rather than the average when calculating the execution times within each test run lessens the impact of a few slow-running performance numbers.
   * When comparing the three old runs with the three new runs, we use the average of the three runs. Because of the difference in execution times of pages between runs, as can be observed in the charts below, a margin of error of 10% is generally tolerated, although faster-running pages typically have a higher margin of error (larger variation between runs).

---+++ Findings

   * The RPT report shows similar response times for 4.0.5 and 4.0.4.
   * Comparing nmon data for 4.0.5 and 4.0.4 shows similar CPU, memory and disk utilization on the application servers and the database server.

---++ Topology

The topology under test is based on [[https://jazz.net/library/article/820#Enterprise_Distributed_Linux__DB2][Standard Topology (E1) Enterprise - Distributed / Linux / DB2.]]

<img src="%ATTACHURLPATH%/ServerOverview.png" alt="Server Overview" width="70%" height="70%" />

The specifications of machines under test are listed in the table below.
Server tuning details are listed in *[[#AppendixA][Appendix A]]*.

<table class="gray-table"> <tbody>
<tr> <th valign="top">Function</th> <th valign="top">Number of Machines</th> <th valign="top">Machine Type</th> <th valign="top">CPU / Machine</th> <th valign="top">Total # of CPU Cores/Machine</th> <th valign="top">Memory/Machine</th> <th valign="top">Disk</th> <th valign="top">Disk capacity</th> <th valign="top">Network interface</th> <th valign="top">OS and Version</th> </tr>
<tr> <td>IBM HTTP Server and !WebSphere Plugin</td> <td>1</td> <td>IBM System x3250 M4</td> <td>1 x Intel Xeon E3-1240 3.4GHz (quad-core)</td> <td>8</td> <td>16GB</td> <td>RAID 1 -- SAS Disk x 2</td> <td>279GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.3 (Santiago)</td> </tr>
<tr> <td>JTS Server</td> <td>1</td> <td>IBM System x3550 M4</td> <td>2 x Intel Xeon E5-2640 2.5GHz (six-core)</td> <td>24</td> <td>32GB</td> <td>RAID 5 -- SAS Disk x 4</td> <td>279GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.3 (Santiago)</td> </tr>
<tr> <td>RRC Server</td> <td>1</td> <td>IBM System x3550 M4</td> <td>2 x Intel Xeon E5-2640 2.5GHz (six-core)</td> <td>24</td> <td>32GB</td> <td>RAID 5 -- SAS Disk x 4</td> <td>279GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.3 (Santiago)</td> </tr>
<tr> <td>Database Server</td> <td>1</td> <td>IBM System x3650 M4</td> <td>2 x Intel Xeon E5-2640 2.5GHz (six-core)</td> <td>24</td> <td>64GB</td> <td>RAID 10 -- SAS Disk x 16</td> <td>279GB</td> <td>Gigabit Ethernet</td> <td>Red Hat Enterprise Linux Server release 6.3 (Santiago)</td> </tr>
<tr> <td>RPT Workbench</td> <td>1</td> <td>VM image</td> <td>2 x Intel Xeon X7550 CPU (1-Core 2.0GHz 64-bit)</td> <td>2</td> <td>6GB</td> <td>SCSI</td> <td>80GB</td> <td>Gigabit Ethernet</td> <td>Microsoft Windows Server 2003 R2 Standard Edition SP2</td> </tr>
<tr> <td>RPT Agent</td> <td>1</td> <td>xSeries 345</td> <td>4 x Intel Xeon X3480 CPU
(1-Core 3.20GHz 32-bit)</td> <td>4</td> <td>3GB</td> <td>SCSI</td> <td>70GB</td> <td>Gigabit Ethernet</td> <td>Microsoft Windows Server 2003 Enterprise Edition SP2</td> </tr>
<tr> <td>RPT Agent</td> <td>1</td> <td>xSeries 345</td> <td>4 x Intel Xeon X3480 CPU (1-Core 3.20GHz 32-bit)</td> <td>4</td> <td>3GB</td> <td>RAID 1 - SCSI Disk x 2</td> <td>70GB</td> <td>Gigabit Ethernet</td> <td>Microsoft Windows Server 2003 Enterprise Edition SP2</td> </tr>
<tr> <td>RPT Agent</td> <td>1</td> <td>Lenovo 9196A49</td> <td>1 x Intel Xeon E6750 CPU (2-Core 2.66GHz 32-bit)</td> <td>2</td> <td>2GB</td> <td>SATA</td> <td>230GB</td> <td>Gigabit Ethernet</td> <td>Microsoft Windows Server 2003 Enterprise Edition SP2</td> </tr>
<tr> <td>Network switches</td> <td>N/A</td> <td>Cisco 2960G-24TC-L</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>Gigabit Ethernet</td> <td>24 Ethernet 10/100/1000 ports</td> </tr>
</tbody> </table>

N/A: Not applicable.

---+++Network connectivity

All server machines and test clients are located on the same subnet. The LAN has 1000 Mbps of maximum bandwidth and less than 0.3ms latency in ping.

---+++Data volume and shape

The artifacts were distributed between 40 small projects, 10 medium projects and one large project, for a total of 200,589 artifacts. The repository contained the following data:

   * 141 modules
   * 14,500 module artifacts
   * 3,203 folders
   * 434 collections
   * 274 reviews
   * 40,798 comments
   * 520 public tags
   * 110 private tags
   * 15,350 terms
   * 22,778 links
   * 360 views
   * Database size = 20 GB
   * JTS index size = 9 GB

The large project contained the following data:

   * 11 modules
   * 1,160 folders
   * 78,842 requirement artifacts
   * 106 collections
   * 64 reviews
   * 16,897 comments
   * 300 public tags
   * 50 private tags
   * 9,006 terms
   * 15,594 links
   * 200 views

---++Methodology

Rational Performance Tester was used to simulate the workload created using the web client. Each user completed a random use case from a set of available use cases.
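The random selection of use cases can be sketched as follows. This is a hypothetical illustration in Python, not the actual Rational Performance Tester scripts; the weights are a subset of the percentages from the test-case table in this report, and the pacing constant follows from the stated rate of 30 pages per hour per user.

```python
import random

# Hypothetical sketch of the simulated workload -- not the actual RPT
# scripts. Each virtual user repeatedly picks a use case at random,
# weighted by its share of the total workload (subset of the table).
WORKLOAD_PCT = {
    "View collections": 13,
    "Query by string": 10,
    "Filter a query": 8,
    "Query by ID": 8,
    "Add comments to an artifact": 8,
    "Show artifacts in a tree view": 8,
    "Open the project dashboard": 7,
    "Create a collection": 6,
    "Check suspect links": 6,
    "Open graphical artifacts": 5,
}

# 30 pages per hour per user means one page roughly every 120 seconds.
THINK_TIME_S = 3600 / 30

def next_use_case(rng: random.Random) -> str:
    """Pick the next use case for a virtual user, weighted by workload share."""
    names = list(WORKLOAD_PCT)
    weights = list(WORKLOAD_PCT.values())
    return rng.choices(names, weights=weights, k=1)[0]

if __name__ == "__main__":
    rng = random.Random(1)
    for _ in range(3):
        print(next_use_case(rng))
```

Over a long run, each use case's share of executed iterations converges to its weight, which is how the 70/30 read/write ratio emerges from the per-use-case percentages.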
A Rational Performance Tester script is created for each use case. The scripts are organized by pages, and each page represents a user action. Based on real customer use, the test scenario provides a ratio of 70% reads and 30% writes. The users completed use cases at a rate of 30 pages per hour per user. Each performance test runs for 60 minutes after all of the users are activated in the system.

---+++Test cases and workload characterization

#TestCases

<table class="gray-table"> <tbody>
<tr> <th align="center">Use case</th> <th align="center">Description</th> <th align="center">% of Total Workload</th> </tr>
<tr> <td>Login</td> <td>Connect to the server using server credentials.</td> <td align="center">None</td> </tr>
<tr> <td>[[#CreateCollection][Create a collection]]</td> <td>Create collections with 10 artifacts.</td> <td align="center">6</td> </tr>
<tr> <td>[[#FilterQuery][Filter a query]]</td> <td>Run a query that has 100 results and open 3 levels of nested folders.</td> <td align="center">8</td> </tr>
<tr> <td>[[#OpenNested][Open nested folders]]</td> <td>Create review and complete review process.</td> <td align="center">4</td> </tr>
<tr> <td>[[#ManageFolders][Manage folders]]</td> <td>Create a folder, move it to a new location, and then delete the folder.</td> <td align="center">1</td> </tr>
<tr> <td>[[#QueryBy][Query by ID]]</td> <td>Search for a specific ID in the repository.</td> <td align="center">8</td> </tr>
<tr> <td>[[#ViewCollections][View collections]]</td> <td>View collections that contain 100 artifacts from the collections folders.</td> <td align="center">13</td> </tr>
<tr> <td>[[#CheckSuspect][Check suspect links]]</td> <td>Open an artifact that has suspect links.</td> <td align="center">6</td> </tr>
<tr> <td>[[#AddComments][Add comments to an artifact]]</td> <td>Open a requirement that has 100 comments and create a comment addressed to 8 people on the team.</td> <td align="center">8</td> </tr>
<tr> <td>[[#OpenProject][Open the project
dashboard]]</td> <td>Open the project and dashboard for the first time.</td> <td align="center">7</td> </tr>
<tr> <td>[[#CreateMulti][Create a multi-value artifact]]</td> <td>Create a multi-value artifact and then add a multi-value attribute.</td> <td align="center">2</td> </tr>
<tr> <td>[[#CreateArtifact][Create an artifact]]</td> <td>Create a requirement that contains a table, an image and rich text. Edit an artifact that has 100 enumerated attributes and modify an attribute.</td> <td align="center">2</td> </tr>
<tr> <td>[[#ShowArtifacts][Show artifacts in a Tree view]]</td> <td>Open a folder that contains artifacts with links and show the data in a tree view.</td> <td align="center">8</td> </tr>
<tr> <td>[[#OpenGraphical][Open graphical artifacts]]</td> <td>Open business process diagrams, use cases, parts, images, sketches and story boards.</td> <td align="center">5</td> </tr>
<tr> <td>[[#CreateEdit][Create and edit a storyboard]]</td> <td>Create and edit a storyboard.</td> <td align="center">4</td> </tr>
<tr> <td>[[#DisplayHover][Display the hover information for a collection]]</td> <td>Open a collection that contains 100 artifacts and hover over the Artifacts page.</td> <td align="center">4</td> </tr>
<tr> <td>[[#QueryString][Query by String]]</td> <td>Search for a string that returns 30 matched items.</td> <td align="center">10</td> </tr>
<tr> <td>[[#CreateReport][Create a PDF report]]</td> <td>Generate a 50-artifact PDF report.</td> <td align="center">2</td> </tr>
<tr> <td>[[#CreateWord][Create a Microsoft Word report]]</td> <td>Generate a 100-artifact Microsoft Word report.</td> <td align="center">2</td> </tr>
</tbody> </table>

---++++Response time comparison

The median response time provided more even results than the average response time. The high variance between tests, where some tasks at times take longer to run (such as when the server is under heavy load), makes the average response time less predictive.
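The effect of outliers on the two statistics can be shown with a small worked example. The numbers below are illustrative, not measured values from these runs:

```python
import statistics

# Illustrative (not measured) page response times in ms from one run;
# two requests were slow because the server was under heavy load.
samples_ms = [120, 135, 128, 122, 3100, 130, 126, 2900, 124, 131]

mean_ms = statistics.mean(samples_ms)      # dragged up by the two outliers
median_ms = statistics.median(samples_ms)  # barely affected by them

print(f"mean = {mean_ms:.1f} ms, median = {median_ms:.1f} ms")
# mean = 701.6 ms, median = 129.0 ms
```

Two slow samples out of ten pull the mean up more than fivefold, while the median still reflects the typical response time, which is why the median is used within each run.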
Both the median and average values are included in the following tables and charts for comparison. In the repository that contained 200,000 artifacts with 400 concurrent users, no obvious regression was shown when comparing response times between runs. The numbers in the following charts include all of the pages for all of the scripts that ran.

---++Results

<img src="%ATTACHURLPATH%/Overview_Chart_5.png" width="90%" height="90%"/>
<img src="%ATTACHURLPATH%/Overview_Chart_7.png" width="90%" height="90%"/>
<img src="%ATTACHURLPATH%/Overview_Chart_15.png" width="90%" height="90%"/>

*Garbage collection*

Verbose garbage collection is enabled to create the GC logs. The GC logs show very little variation between runs. There is also no discernible difference between versions. Below is an example of what the graph from the GC log looks like from the first test run for each application.

*RM*
   * Observation: The graph looks the same in 4.0.4 and 4.0.5.

<img src="%ATTACHURLPATH%/RM_gc.png" width="100%" height="100%" />

*JTS*
   * Observation: The graph looks the same in 4.0.4 and 4.0.5. The high frequency of collections is normal behavior for JTS.

<img src="%ATTACHURLPATH%/JTS_gc.png" width="100%" height="100%"/>

#CreateCollection
---++++*Create a collection*
   * Observation: Two of the requests are slower: "Save the collection" (on average 29ms slower) and "Click Done and close the collection" (on average 16ms slower). This is offset by four other requests being faster.
<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_22.png" width="90%" height="90%" />

[[#TestCases][Back to Test Cases & workload characterization]]

#FilterQuery
---++++*Filter a query*
   * Observation: None

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_23.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#OpenNested
---++++*Open nested folders*
   * Observation: The "Open project" request is marginally slower but is offset by other requests being faster.

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_24.png" width="90%" height="90%" />

[[#TestCases][Back to Test Cases & workload characterization]]

#ManageFolders
---++++*Manage folders*
   * Observation: Here, "Open the project" is an example of the high variation we can see at times between runs. Note that opening the project is not always slower.

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_25.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#QueryBy
---++++*Query by ID*
   * Observation: "Open the project" is on average 8% (40ms) slower. This particular scenario is where opening the project has the slowest performance.

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_26.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#ViewCollections
---++++*View collections*
   * Observation: "Open the project" is on average 6% (30ms) slower, with small improvements in opening a folder and viewing artifacts.
<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_27.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#CheckSuspect
---++++*Check suspect links*
   * Observation: Similar results between runs, with small variations.

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_28.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#AddComments
---++++*Add comments to an artifact*
   * Observation: Ideal comparison, with small variance.

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_29.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#OpenProject
---++++*Open the project dashboard*
   * Observation: None

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_30.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#CreateMulti
---++++*Create a multi-value artifact*
   * Observation: None

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_31.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#CreateArtifact
---++++*Create an artifact*
   * Observation: "Edit the artifact" is 16ms slower, which is offset by other gestures being faster.

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_32.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#ShowArtifacts
---++++*Show artifacts in a Tree view*
   * Observation: Displaying the artifacts in a tree view is on average 6% faster (a 92ms difference).

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_33.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#OpenGraphical
---++++*Open graphical artifacts*
   * Observation: Our "best" graph.
Almost everything is a little bit faster.

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_34.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#CreateEdit
---++++*Create and edit a storyboard*
   * Observation: None

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_35.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#DisplayHover
---++++*Display the hover information for a collection*
   * Observation: Although none of the open project requests for the three runs in 4.0.5 stands out as slower than in 4.0.4, the average is 5% (29ms) slower.

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_36.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#QueryString
---++++*Query by string*
   * Observation: None

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_37.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#CreateReport
---++++*Create a PDF report*
   * Observation: None

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_38.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

#CreateWord
---++++*Create a Microsoft Word report*
   * Observation: None

<img src="%ATTACHURLPATH%/All_pages_sorted_by_scripts_Chart_21.png" width="90%" height="90%"/>

[[#TestCases][Back to Test Cases & workload characterization]]

---++ Appendix A
#AppendixA

<table class="gray-table"> <tbody>
<tr> <th align="left" width="200"><strong>Product</strong><br></th> <th align="left" width="100"><strong>Version</strong></th> <th align="left" width="600"><strong>Highlights for configurations under test</strong></th> </tr>
<tr> <td style="vertical-align: top;">IBM HTTP Server for !WebSphere Application Server</td> <td style="vertical-align: top;">8.5.0.1</td> <td style="vertical-align: top;">IBM HTTP Server functions as a reverse proxy server implemented via Web
server plug-in for !WebSphere Application Server. Configuration details can be found in the [[http://pic.dhe.ibm.com/infocenter/clmhelp/v4r0/index.jsp?topic=%2Fcom.ibm.jazz.install.doc%2Ftopics%2Ft_config_reverse_proxy_ihs.html][CLM infocenter]].

*HTTP server (httpd.conf)*:
   * !MaxClients: increase value for high-volume loads [[http://publib.boulder.ibm.com/httpserv/ihsdiag/ihs_performance.html][(adjust value based on user load)]]
   * !ThreadsPerChild = 50

*Web server plug-in (plugin-cfg.xml)*:
   * !ServerIOTimeout="900"

*OS Configuration*:
   * max user processes = unlimited
</td> </tr>
<tr> <td style="vertical-align: top;">IBM !WebSphere Application Server Network Deployment</td> <td style="vertical-align: top;">8.5.0.1</td> <td style="vertical-align: top;"><strong>JVM settings:</strong>
   * GC policy and arguments, max and init heap sizes: <verbatim>
-XX:MaxDirectMemorySize=1g
-Xgcpolicy:gencon -Xmx4g -Xms4g -Xmn512m
-Xcompressedrefs -Xgc:preferredHeapBase=0x100000000
</verbatim>

*Thread pools:*
   * Maximum !WebContainer = Minimum !WebContainer = 50

*OS Configuration:* System wide resources for the app server process owner:
   * max user processes = unlimited
   * open files = 65536
</td> </tr>
<tr> <td>DB2</td> <td>DB2 10.1</td> <td> </td> </tr>
<tr> <td>LDAP server</td> <td> </td> <td> </td> </tr>
<tr> <td>License server</td> <td> </td> <td>Hosted locally by JTS server</td> </tr>
<tr> <td>RPT workbench</td> <td>8.2.1.5</td> <td>Defaults</td> </tr>
<tr> <td>RPT agents</td> <td>8.2.1.5</td> <td>Defaults</td> </tr>
<tr> <td>Network</td> <td> </td> <td>Shared subnet within test lab</td> </tr>
</tbody> </table>

---++++!! For more information

   * [[SizingReportCLM2012][Collaborative Lifecycle Management 2012 Sizing Report (Standard Topology E1)]]

---++++!! About the authors

Main.GustafSvensson

--------------------

---+++++!! Questions and comments:

   * What other performance information would you like to see here?
   * Do you have performance scenarios to share?
   * Do you have scenarios that are not addressed in documentation?
   * Where are you having problems in performance?

%COMMENT{type="below" target="PerformanceDatasheetReaderComments" button="Submit"}% %INCLUDE{"PerformanceDatasheetReaderComments"}%

<sticky></div></sticky>