What exactly is measured by the "Round Trip Latency" in the "Performance Health Check" widget?

Guido Schneider | asked Nov 22 '12, 10:29 a.m.

In version 4.0 there is a nice new widget that measures the performance from a client to the server, the rough performance of the DB connection, and the download rate.

This widget is found in the "General" category and is called "Performance Health Check".

Question: What exactly is measured by the "Round Trip Latency"? What is meant by "a simple request"?
The help text shows the following:

The round trip latency measures the time in milliseconds for a simple request to travel from your browser to your server and back. Higher numbers indicate a slower overall UI response and delays in loading large plans and dashboards.

If I run this locally on a Jazz All-in-One server I get around 60-70 ms, and this seems quite slow to me.

Guido Schneider commented Dec 05 '12, 10:45 a.m.

I'm still interested in getting more information about the new performance measurement widget. If we get bad results, it's essential to know where to tune. Only seeing nice colours doesn't help.

Somebody developed it, so they should know what it measures.

Please provide me this information or point me to the correct document.

Accepted answer

Mike Pawlowski | answered Dec 05 '12, 5:53 p.m.
edited Dec 05 '12, 5:54 p.m.
Hi Guido,

In this case, "a simple request" means an HTTP/HTTPS GET request or XHR (XMLHttpRequest) GET request sent from the browser directly to the web server, with a subsequent response sent from the web server back to the browser, optionally carrying a data payload.

Specifically, an XHR GET request is made to the following REST service:
The server response contains an empty payload (minus the SOAP wrapper).

First, 10 unmeasured warm-up simple requests are executed simultaneously.
Then, 20 (the default; user-configurable via preferences) sequential simple requests are made.
The start time is recorded just before each simple request is invoked.
The end time is recorded just after the server response is received.
The measured latency is the difference between the start and end times.
The final result is the average latency over the 20 requests.
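The steps above can be sketched in JavaScript. This is a minimal illustration of the timing logic, not the widget's actual source; the `sendSimpleRequest` helper and the parameter defaults are assumptions based on the description:

```javascript
// Sketch of the round-trip latency measurement described above.
// sendSimpleRequest is a hypothetical function that performs one GET
// request and returns a Promise that resolves when the response arrives.
async function measureRoundTripLatency(sendSimpleRequest, runs = 20, warmups = 10) {
  // 1. Execute the unmeasured warm-up requests simultaneously.
  await Promise.all(Array.from({ length: warmups }, () => sendSimpleRequest()));

  // 2. Time the measured requests sequentially.
  let totalMs = 0;
  for (let i = 0; i < runs; i++) {
    const start = Date.now();   // start time, just before the request is invoked
    await sendSimpleRequest();
    const end = Date.now();     // end time, just after the response is received
    totalMs += end - start;     // measured latency = end - start
  }

  // 3. The final result is the average latency over the measured requests.
  return totalMs / runs;
}
```

In a browser, `sendSimpleRequest` would wrap an `XMLHttpRequest` or `fetch` GET against the REST service the widget targets; it is left pluggable here so the timing logic stands on its own.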

My average round trip latency against (which is a local server for me) falls within the 60-70 ms time range.

For more information, check out the Jazz Blog post:


Guido Schneider selected this answer as the correct answer

Guido Schneider commented Dec 09 '12, 8:49 a.m.

Thank you Mike.

I missed this blog.
