Have you ever noticed sluggishness with a CLM web page and wondered whether you were hitting a product issue or just having a bumpy connection with the server? Or, while waiting for your content to show up in the browser, have you ever wondered "are they there yet"?
The CLM 2012 release includes a little tool to ease such wondering, called "Performance Health Check". This performance-measuring tool is available as a dashboard widget that any user can run from any browser at any time.
Where to find it
- From any dashboard you have permission to edit, click on Add Widget.
- Search for the Performance Health Check widget and add it. Here is an example on my personal dashboard.
- Be sure that all other dashboard widgets have finished updating, then click Run Test on the widget.
- Observe the results.
Tests explained
The tests on this widget provide two performance measurements: latency and throughput of your connection with the server.
Latency measures how long a simple request takes to travel from your browser to your server and back. When a database operation is involved (which is true in most cases), database latency tells you how long a database request takes to travel from your application server to the database server and back. When you see one or both latencies fall into the red, expect poor performance in general because of the sluggish travel time in the network: slower overall UI response, for example delays in loading plans and dashboards.
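To make the latency idea concrete, here is a minimal browser-side sketch of how such a round-trip measurement could be taken: time a small request several times and average the results. The ping endpoint is hypothetical; this is an illustration only, not the widget's actual implementation.

```typescript
// Hypothetical sketch: time a small round trip to the server several times
// and average the results. The ping endpoint is an assumption, not the
// widget's real service URL.
async function measureLatencyMs(pingUrl: string, samples = 10): Promise<number> {
  let total = 0;
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    // A tiny request whose payload is negligible, so the elapsed time is
    // dominated by the network round trip rather than data transfer.
    await fetch(`${pingUrl}?sequenceId=${i}`, { cache: "no-store" });
    total += performance.now() - start;
  }
  return total / samples;
}

// Example (hypothetical URL):
// measureLatencyMs("https://clm.example.com/ccm/service/ping")
//   .then((ms) => console.log(`Average latency: ${ms.toFixed(1)} ms`));
```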
Throughput is measured by the Download and Upload Speed tests. It tells you how fast data moves between the server and your browser. Slower values indicate poor performance for data-transfer operations, such as SCM operations and loading large plans, dashboards, and work item attachments.
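Along the same lines, a download-speed measurement can be sketched as fetching a payload of known size and dividing the bytes received by the elapsed time. The URL is an assumption, and the widget's actual download/upload tests may work differently.

```typescript
// Hypothetical sketch: estimate download throughput by timing the fetch of a
// payload and dividing the bytes received by the elapsed time.
async function measureDownloadMbps(payloadUrl: string): Promise<number> {
  const start = performance.now();
  const response = await fetch(payloadUrl, { cache: "no-store" });
  const bytes = (await response.arrayBuffer()).byteLength;
  const seconds = (performance.now() - start) / 1000;
  return (bytes * 8) / (seconds * 1_000_000); // megabits per second
}
```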
Tips and Tricks
- Add this widget to an empty dashboard if possible to reduce noise from other widgets.
- Run the tests alone in the browser. The fewer concurrent requests there are, the more accurate the results will be.
- When adding the widget to a dashboard, be sure to add it from the local catalog. Adding the widget from a remote catalog may result in unreliable test results and is not recommended.
- Feel free to run the test multiple times to take measurements at different times of day.
- The results do not reflect the measurement of any single node in a clustered environment. Use them as a reflection of the overall connection to a clustered server.
Bianca Jiang
Software Developer
Rational Team Concert
Is this widget also available for RRC and Design Manager?
Would there be a way to get this widget into a 3.0.1 system?
Hi Andrew, re comment #2: this widget is a new feature in the 4.0 release; it wasn't available in any of the earlier releases.
Thank you. Great little widget.
An enhancement idea I have: it would be nice to schedule the test to run, e.g., every 60 minutes and keep a small history, to see whether the performance depends on the time of day, etc.
I think I will write an enhancement request.
Thanks schneidg!
Question:
What exactly is measured for the Database Latency? You write that it is the latency between the application server and the database. How is this done? What influence does the connection from the browser to the server have on this DB latency?
To me it looks like the client is involved in this measurement, not only the application server and the database server. I have the application server (WAS) and the DB (DB2) on the same physical machine, yet the DB latency is often 2-3 ms (though most of the time 0 ms), and from remote locations I have also seen 7-8 ms.
Hi schneidg,
the Database Latency test is done by sending an HTTP request from your browser to the server under test (JTS, CCM, QM, RM, etc.) like this one:
https://jazz.net/jazz/service/com.ibm.team.repository.service.internal.IServerConnectionStatusRestService/databasePingTime?testId=1360015966561&sequenceId=6
This request invokes the REST service IServerConnectionStatusRestService on the server, which executes a "databasePingTime" method against the DB server. This method measures the time a minimal DB query takes over a DB connection. The result is then returned to the client via the HTTP response. The widget runs 10 tests in a row and displays the average.
So this test is the only one among the four that does not include browser-to-server travel time in its measurement. A large number indicates a slow DB response time from the DB server to the application server.
Our DB latency from the jazz.net servers is usually within 0-1 ms. Try it out on this dashboard:
https://jazz.net/jts/dashboards/1227
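As a rough illustration of the flow described above, here is a minimal sketch of calling that service repeatedly from the browser and averaging the results. The response format shown (a plain number of milliseconds in the body) is an assumption, not documented behavior.

```typescript
// Minimal sketch of the flow described above: call the databasePingTime REST
// service several times and average the values it reports. The assumption is
// that the response body is a plain number of milliseconds.
async function averageDatabasePingMs(serviceUrl: string, runs = 10): Promise<number> {
  let total = 0;
  for (let i = 0; i < runs; i++) {
    const url = `${serviceUrl}?testId=${Date.now()}&sequenceId=${i}`;
    const response = await fetch(url, { credentials: "include" }); // reuse the browser session
    total += Number(await response.text());
  }
  return total / runs;
}
```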
It is very useful for our team to show the client where the bottleneck in the system environment is.
Bianca, thanks for the information. Do you have any suggestions on improving performance or “things” we could look into that could be the cause?
We have questions about database performance and how to interpret the results: what the tests are doing, what results are "acceptable", and what we have the ability to control. With regard to the Performance Health Check widget (specifically the database piece), we have been able to see spikes in the millisecond values coming back. These results, though, could hit the "unacceptable" value of 2 ms+ once out of 10 consecutive clicks. We were told 2 ms+ in this test is not good. Is this reason for any concern? What is the database test actually doing? Is it a simple SQL statement? Something else?
There are other performance measurements in DOORS NG that lead us to believe all is well with the database. The "Diagnostics" monitoring has the following result: "The database performance measurement took 172 ms, which is within acceptable limits."
In that test 172 ms is acceptable. Why? What is it measuring, and why is it so different from the performance widget test, which I should note has a disclaimer? (Disclaimer: The 'Performance Health Check' widget is provided for informational purposes only. Licensee should not rely on this feature for any purpose, and is encouraged to continue to rely on any existing tests of Licensee's system that may be in place.)