
What is the best way to be confident our baselines are correct with JazzMon

We support between 20 and 30 RTC projects within IBM. Some of the projects have over 500 users working with SCM, while others are smaller projects with only 50 users using just Planning.

I've been working with a subset of those projects in JazzMon and have recorded some baselines over a 24-hour period. My concern now is: how can I be fairly confident that these baselines represent a "healthy" repository?

For example, apt.internal.common.rcp.IIterationPlanService.fetchPlannedWorkItems has a baseline of 2.5s on one of our projects. How can I reasonably tell whether this is in a good range, and not something that should be under 1s in a healthy environment?

I understand it depends a lot on the environment, number of users, etc., but I'm wondering whether there is a good "range" we should expect for these web services. I see that some documents, like the Server Tuning doc, suggest estimates for a few of them, but I'm curious about the others.

0 votes


Accepted answer

At this time we don't have absolute references for what is "healthy" for any given web service on a server. Using reference baselines, such as snapshots of jazz.net, is a starting point: it provides a comparison against an actively used and closely watched example server, but the numbers can change day by day with shifting usage patterns. Seeing what stands out most dramatically can suggest where the pain points are.

For example, drilling in on something like fetchPlannedWorkItems or other Plan web services may show much larger volumes of data in the web service results (bytesSent), which goes some way toward explaining longer response times. If users consider the overall response time for plans too slow, it may be worth looking at why their plans return dramatically larger results. Sometimes very large plans reflect local practices that include entire backlogs within plans, something that might be reconsidered if the performance tradeoff is unacceptable.
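As a rough sketch of that kind of comparison: assuming the JazzMon data has been exported to CSV, something like the following could flag services that stand out against a reference baseline. The file names and the `service`, `avg_ms`, and `bytes_sent` columns are assumptions for illustration, not the actual JazzMon export format.

```python
# Hypothetical comparison of a local JazzMon baseline against a reference
# baseline (e.g. a jazz.net snapshot). Column names are assumptions; the
# real JazzMon export format may differ.
import csv

def load_baseline(path):
    """Map service name -> (avg_ms, bytes_sent) from a CSV export."""
    data = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            data[row["service"]] = (float(row["avg_ms"]), float(row["bytes_sent"]))
    return data

local = load_baseline("local_baseline.csv")
reference = load_baseline("reference_baseline.csv")

# Flag services that stand out most dramatically against the reference.
for service, (avg_ms, bytes_sent) in sorted(local.items()):
    if service not in reference:
        continue
    ref_ms, ref_bytes = reference[service]
    if ref_ms > 0 and avg_ms / ref_ms > 3:  # e.g. 3x slower than reference
        ratio_bytes = bytes_sent / ref_bytes if ref_bytes else float("inf")
        print(f"{service}: {avg_ms:.0f} ms vs {ref_ms:.0f} ms "
              f"(bytesSent ratio {ratio_bytes:.1f}x)")
```

The 3x threshold is arbitrary; the point is to surface the services whose response time or payload size differs most from the reference, then dig into those first.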
Michael Walker selected this answer as the correct answer

0 votes


2 other answers

We also see large files affect the average response time of the IVersionedContentGet web service. Fourteen seconds of file data, even at WAN speeds, is a long transfer (normal is around 50ms), and it really skews the rest of the stats.
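To illustrate how a few large downloads distort the average while the median and percentiles stay near the typical value, here is a tiny sketch with made-up timings (not actual JazzMon data):

```python
# A minimal illustration (not JazzMon code) of how a handful of very large
# file transfers skew the average for a service like IVersionedContentGet.
# Timings are invented: mostly ~50 ms responses plus a few 14 s downloads.
from statistics import mean, median, quantiles

timings_ms = [50] * 200 + [14_000] * 3   # hypothetical sample

print(f"mean   : {mean(timings_ms):.0f} ms")    # pulled up by the outliers
print(f"median : {median(timings_ms):.0f} ms")  # stays near the typical 50 ms
print(f"p95    : {quantiles(timings_ms, n=20)[18]:.0f} ms")
```

Here the mean comes out around 250 ms even though almost every request took 50 ms, which is why a median or percentile view can be a more honest baseline for services that occasionally serve very large files.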

0 votes

Comments

Thank you both for the responses. We've been gathering data from multiple repositories and comparing them. There are also a couple of us who have used RTC since v1, so we have a feel for when a certain function in RTC is working slower than it should, and we can use that to judge whether the baselines we're recording are "good".

It would be nice to have a baseline broken down by hour over a 24-hour period rather than a single average. Some of the web services have peaks and valleys during the day and night since our customers are worldwide, especially in the SCM area, and it would be nice to see whether those are expected based on past data.
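If the raw JazzMon samples are available with timestamps, something along these lines could roll them up by hour of day. The file name, column names, and timestamp format below are all assumptions, not the actual JazzMon output:

```python
# Rough sketch: bucket per-interval JazzMon samples by hour of day so that
# peaks and valleys are visible instead of a single 24-hour average.
import csv
from collections import defaultdict
from datetime import datetime
from statistics import mean

samples = defaultdict(list)   # (service, hour) -> list of avg_ms values
with open("jazzmon_samples.csv", newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
        samples[(row["service"], ts.hour)].append(float(row["avg_ms"]))

service = "IVersionedContentGet"    # example service of interest
for hour in range(24):
    values = samples.get((service, hour))
    if values:
        print(f"{hour:02d}:00  {mean(values):7.0f} ms  ({len(values)} samples)")
```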


I would like to talk to you. My email is sdetweil at gmail.

Sam

0 votes

Question details

Question asked: Oct 10 '12, 6:12 p.m.

Question was seen: 6,905 times

Last updated: Oct 19 '12, 8:02 p.m.
