What is the best way to be confident our baselines are correct with JazzMon


Michael Walker (99215201157) | asked Oct 10 '12, 6:12 p.m.
We support between 20 and 30 RTC projects within IBM. Some of the projects have over 500 users with SCM, and others are smaller projects with only 50 users using Planning.

I've been working with a subset of those projects in JazzMon and have recorded some baselines over a 24 hour period.  My concern now is how can I be fairly confident that these baselines represent a "healthy" repository?

For example, apt.internal.common.rcp.IterationPlanService.fetchPlannedWorkItems has a baseline of 2.5s on one of our projects. How can I reasonably tell whether this is in a good range, rather than something that should be under 1s in a healthy environment?

I understand it depends a lot on the environment, number of users, etc., but I'm wondering whether there is a good "range" we should expect for these web services. I see in some documents, like the Server Tuning doc, that estimates are suggested for a few of them, but I'm curious about the others.

Accepted answer


Dave Schlegel (331167) | answered Oct 19 '12, 12:32 p.m.
JAZZ DEVELOPER
At this time we don't have absolute reference values for what is "healthy" for any given web service on a server. Reference baselines, such as snapshots of jazz.net, are a starting point: they provide a comparison against an actively used and closely watched example server, but the numbers can change day by day with shifting usage patterns. Seeing what stands out most dramatically can suggest where the pain points are.

For example, drilling in on something like fetchPlannedWorkItems or other Plan web services may show much larger volumes of data in the web service results (bytesSent), which partly explains longer response times. If users consider the overall response time for plans too slow, it may be worth looking at why their plans return dramatically larger results. Sometimes very large plans reflect local practices, such as including entire backlogs within plans, that might be reconsidered if the performance tradeoff is unacceptable.
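Dave's approach of comparing a recorded baseline against a reference, while using result size (bytesSent) to put response times in context, can be sketched roughly as follows. This is a minimal illustration with a hypothetical flat record format and an arbitrary threshold; JazzMon's actual output files and field names differ.

```python
# Hypothetical per-service records: service -> (avg_response_ms, avg_bytes_sent).
# The field names, numbers, and 2x threshold are illustrative, not JazzMon's format.

def flag_outliers(recorded, reference, ratio_threshold=2.0):
    """Flag services whose recorded average response time is at least
    ratio_threshold times the reference baseline, and report ms per KB
    sent so that large results (bytesSent) can partly explain long times."""
    flagged = []
    for service, (ms, bytes_sent) in recorded.items():
        ref_ms = reference.get(service)
        if ref_ms is None:
            continue  # no reference baseline to compare against
        ratio = ms / ref_ms
        if ratio >= ratio_threshold:
            ms_per_kb = ms / (bytes_sent / 1024) if bytes_sent else float("inf")
            flagged.append((service, ratio, ms_per_kb))
    # Worst offenders first
    flagged.sort(key=lambda item: item[1], reverse=True)
    return flagged

recorded = {
    "IterationPlanService.fetchPlannedWorkItems": (2500.0, 4_000_000),
    "IVersionedContentGet": (55.0, 20_000),
}
reference = {
    "IterationPlanService.fetchPlannedWorkItems": 900.0,
    "IVersionedContentGet": 50.0,
}

for service, ratio, ms_per_kb in flag_outliers(recorded, reference):
    print(f"{service}: {ratio:.1f}x reference, {ms_per_kb:.2f} ms/KB")
```

A service can stand out in raw response time yet look unremarkable in ms per KB, which is exactly the case where larger plans, rather than a slow server, explain the numbers.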
Michael Walker selected this answer as the correct answer

2 other answers



sam detweiler (12.5k6195201) | answered Oct 19 '12, 1:27 p.m.
edited Oct 19 '12, 1:28 p.m.
We also see effects on average response time from large files on the IVersionedContentGet web service. Fourteen seconds of file data, even at WAN speeds, is large (normal is around 50ms), and it really skews the rest of the stats.

Comments
Michael Walker commented Oct 19 '12, 5:20 p.m.

Thank you both for the responses. We've been gathering data from multiple repositories and comparing them. There are also a couple of us who have used RTC since v1, so we have a sense of whether a certain function in RTC is running slower than it should and can use that to judge whether the baselines we're recording are "good".

It would be nice to have a baseline that shows results by hour over a 24-hour period rather than a single average. Some of the web services have peaks and valleys during the day and night since our customers are worldwide, especially in the SCM area, and it would be nice to see whether those are expected based on past data.
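The per-hour view described above amounts to bucketing timed samples by hour of day instead of averaging over the whole 24 hours. A minimal sketch, assuming hypothetical (ISO-8601 timestamp, response_ms) samples rather than JazzMon's actual file format:

```python
from collections import defaultdict
from datetime import datetime

def hourly_baseline(samples):
    """Average response time per hour of day (0-23) from
    (ISO-8601 timestamp, response_ms) pairs, so peaks and valleys
    across time zones stay visible instead of being flattened
    into one 24-hour average."""
    buckets = defaultdict(list)
    for ts, ms in samples:
        hour = datetime.fromisoformat(ts).hour
        buckets[hour].append(ms)
    return {h: sum(v) / len(v) for h, v in sorted(buckets.items())}

# Illustrative samples only
samples = [
    ("2012-10-19T02:15:00", 400.0),   # overnight, light load
    ("2012-10-19T02:45:00", 600.0),
    ("2012-10-19T14:10:00", 2400.0),  # peak worldwide SCM activity
    ("2012-10-19T14:50:00", 2600.0),
]
print(hourly_baseline(samples))  # → {2: 500.0, 14: 2500.0}
```

Comparing each hour's bucket against the same hour in a past baseline would show whether a peak is a recurring daily pattern or a new problem.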


sam detweiler (12.5k6195201) | answered Oct 19 '12, 8:02 p.m.
I would like to talk to you.  my email is sdetweil
at gmail.

Sam


