Sizing of a proxy for RTC


Georg Kellner (840481109) | asked Jun 21 '13, 6:48 a.m.
Some years ago I played with Squid as a caching proxy in a small home network, so sizing the cache wasn't a problem.
For caching development artifacts I know the built-in mechanism of ClearCase.
My rule of thumb (to sleep well) is to keep at least 20% of the source pool size free as disk space for caches on the ClearCase server.
With 1 TB of source pools, we'll need 200 GB of space for caching.

I don't know how RTC stores its artifacts in the database, or whether the 20% rule is useful for thinking about caching, but what about the performance of a proxy?
Is it possible to get good performance from a proxy serving tens of thousands of files and many gigabytes of data?

If the minimum file size to be cached is set too high, all the small source files have to travel the long way from the database to the user, but it keeps the cache index small.
If that parameter is set too low, the cache index will grow enormously.
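To make those two knobs concrete, here is a rough sketch of what I mean in Squid terms (purely illustrative, sized by my 20% rule of thumb rather than by any RTC-specific guidance):

    # Illustrative squid.conf fragment: a disk cache sized at ~20% of
    # a 1 TB source pool (200 GB = 204800 MB).
    cache_dir aufs /var/spool/squid 204800 16 256

    # The trade-off described above: raising minimum_object_size keeps
    # the in-memory cache index small but sends every small source file
    # all the way back to the origin; leaving it at 0 caches everything
    # and lets the index grow with the number of objects.
    minimum_object_size 0 KB
    maximum_object_size 512 MB

    # Squid's documentation has long quoted a rule of thumb on the
    # order of 10-14 MB of index RAM per GB of disk cache, driven
    # mainly by the object count rather than the total bytes.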

Does anyone have recommendations on sizing for very large repositories?

greetings georg.

Comments
Krzysztof Kaźmierczyk commented Jun 24 '13, 4:02 p.m.

Hi Georg,
I found only this document in the jazz.net library: https://jazz.net/library/article/325/.
Unfortunately it does not answer your question :(

Accepted answer


sam detweiler (12.5k6195201) | answered Jun 24 '13, 5:43 p.m.
RTC stores each SCM file version under a unique UUID for each change set; whenever the file changes, the UUID changes.
All builds ask for the latest version and therefore get the current UUID.
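One practical consequence (assuming the proxy sees that UUID-addressed content, which is my reading of the above rather than something I verified): a cached object never changes in place, so the proxy can hold entries for a long time without worrying about staleness. In Squid terms, since Georg mentioned Squid, a rough sketch would be (real behaviour also depends on whatever cache headers the server sends):

    # Illustrative squid.conf fragment: keep cached objects for up to
    # 30 days (43200 minutes). This is only safe under the assumption
    # above that each file version is addressed by an immutable UUID,
    # so a cached entry is never silently replaced at the same URL.
    refresh_pattern . 43200 100% 43200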

In my prior company we deployed proxies for this. Disk and memory are cheap and processors are fast,
but loading the cache is slower than direct access, so you only want to do that in bulk once
(unless you have solid-state disks).

We were seeing about 7 million file extracts and 350 GB per day. One build alone was upwards of 140,000 files,
run 3 times a day (and maybe 200 of those files actually changed per day).

I think the last proxy design was 32 or 64 GB of RAM and 2 TB of disk, on a gigabit network.
Aging was set to 30 days, I think. We cached everything, in a two-tier model:

multiple proxies as peers (for CPU performance) in front of the servers at the HQ location, and a second proxy in each of the remote offices where most of the access occurred.
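As a rough illustration only (I'm using Squid syntax because Georg mentioned Squid; hostnames, ports, and sizes below are made up, and our actual setup may have used different software), a remote-office proxy in that two-tier model might look like this. Back of the envelope: 350 GB over roughly 7 million extracts is an average object of about 50 KB, which is why caching everything, rather than only large files, pays off.

    # Illustrative squid.conf for a remote-office proxy, loosely
    # following the figures above (large RAM cache, 2 TB of disk).
    cache_mem 8192 MB
    cache_dir aufs /var/spool/squid 2000000 16 256

    # Tier 1: forward cache misses to the HQ proxy pool instead of
    # going straight to the CCM servers. Hostnames are hypothetical.
    cache_peer hq-proxy1.example.com parent 3128 3130 round-robin
    cache_peer hq-proxy2.example.com parent 3128 3130 round-robin

    # The HQ proxies themselves would peer with each other as siblings,
    # for example:
    # cache_peer hq-proxy2.example.com sibling 3128 3130 proxy-only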

Three objectives:
1. Reduce actual file extracts from RTC to a bare minimum; they go ALL the way to the SQL database in the back (and blob fetch for binaries is really bad).
2. Reduce load on the WAN/phone lines, leaving capacity for RTC interactive traffic (WAN acceleration was also used).
3. Provide as close to local LAN speed as possible to build systems and users.

We had 4 CCM servers containing source.


Georg Kellner selected this answer as the correct answer

Comments
Georg Kellner commented Jul 02 '13, 4:35 a.m.

Thanks for the details, Sam.


sam detweiler commented Jul 02 '13, 7:58 a.m.

Also see this prior topic, where I posted architecture pics:
https://jazz.net/forum/questions/112889/caching-server-useful-for-rtc-workitems


