Does RTC have issues with a very large number of files?
We have a project with two million files (~200 GB) and 100K directories. Developer workspaces require approximately 200K files (6 to 8 GB) in 25K directories. The developers are in remote locations, and we had planned to use the ClearCase remote client. However, updating a snapshot view with this many files takes more than 30 minutes, even when there have been no changes. We now want to consider using RTC instead.
Does anyone have a project of this size? Can you tell me whether RTC would have similar issues with a very large number of files?
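For a sense of why an update can be slow even when nothing has changed: the client still has to visit every file in the workspace to decide that nothing changed. The following is a minimal, illustrative Python sketch (not ClearCase or RTC internals) that times a metadata-only scan of a sandbox tree; with ~200K files, this walk alone can take noticeable time on a slow disk or network filesystem.

```python
#!/usr/bin/env python3
"""Illustrative sketch: time a metadata-only scan of a sandbox tree.

This is NOT how ClearCase or RTC detect changes internally; it only
shows that visiting ~200K files costs real time even when nothing
has changed. Pass the sandbox root as the first argument.
"""
import os
import sys
import time

def scan(root):
    """Walk the tree and stat every file; return (file_count, dir_count)."""
    files = dirs = 0
    for dirpath, dirnames, filenames in os.walk(root):
        dirs += len(dirnames)
        for name in filenames:
            try:
                os.stat(os.path.join(dirpath, name))  # metadata read only
                files += 1
            except OSError:
                pass  # file vanished mid-scan; ignore in this sketch
    return files, dirs

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    start = time.monotonic()
    files, dirs = scan(root)
    print(f"{files} files / {dirs} dirs scanned in {time.monotonic() - start:.1f}s")
```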
Accepted answer
2 other answers
basically consumed our entire network bandwidth.
Distributed SCM is not easy to manage: it requires a complete (and complex) RTC install in the remote location too, and it doubles license usage during cross-repository synchronization operations. See my design to mitigate this problem here:
https://jazz.net/forum/questions/112889/caching-server-useful-for-rtc-workitems
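The details of that design are in the linked post. As a rough illustration of the caching-server concept only (not the linked design itself), here is a hedged Python sketch of a remote-site proxy that serves repeated GETs from a local disk cache instead of re-fetching unchanged content across the WAN. `UPSTREAM` is a hypothetical central server URL; a real deployment would need cache invalidation, HTTPS, and authentication handling that this sketch omits.

```python
#!/usr/bin/env python3
"""Hedged sketch of a remote-site caching proxy (concept illustration only)."""
import hashlib
import pathlib
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://central.example.com"   # hypothetical central server
CACHE = pathlib.Path("/var/cache/scm-proxy")

class CachingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Key the cache entry on the requested path.
        key = hashlib.sha256(self.path.encode()).hexdigest()
        entry = CACHE / key
        if entry.exists():
            body = entry.read_bytes()              # cache hit: no WAN trip
        else:
            with urllib.request.urlopen(UPSTREAM + self.path) as resp:
                body = resp.read()                 # cache miss: fetch once
            CACHE.mkdir(parents=True, exist_ok=True)
            entry.write_bytes(body)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Clients at the remote site point at this proxy instead of UPSTREAM.
    HTTPServer(("0.0.0.0", 8080), CachingHandler).serve_forever()
```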
Comments
Sam: You are talking about the performance of actually downloading the file content from the server to the local sandbox, right? I believe Judy is asking about the performance of detecting changes in the sandbox and on the server (based on her comment of "even when there have been no changes").
Judy Beegle
Aug 02 '13, 4:14 p.m.
For ClearCase (this may be different with RTC), we verified that network bandwidth and/or latency were not the issue. Rather, it is the checking done on the very large number of files in the workspace.
We considered restructuring the source, but found we could not sufficiently reduce the number of files without consolidating/combining them. This would make updates cumbersome and time-consuming, as the fragments would have to be extracted and then re-inserted into the consolidated files.
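Following up on Sam's distinction between downloading content and detecting changes: a common general technique for local change detection in SCM clients (an assumption for illustration, not RTC's actual implementation) is to compare each file's current size and mtime against metadata recorded at the last update. Note that even when nothing has changed, the comparison still touches every file's metadata, which matches the per-file cost Judy describes.

```python
import json
import os
import pathlib

# Hypothetical metadata file the client would write after each update.
BASELINE = pathlib.Path(".scm-baseline.json")

def snapshot(root):
    """Record [size, mtime] for every file under root."""
    meta = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name == BASELINE.name:
                continue  # don't track the baseline file itself
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            meta[path] = [st.st_size, st.st_mtime]
    return meta

def changed_files(root):
    """Return paths whose size or mtime differ from the baseline."""
    old = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    new = snapshot(root)
    return [path for path, meta in new.items() if old.get(path) != meta]

if __name__ == "__main__":
    # First run records a baseline; later runs report what changed.
    if BASELINE.exists():
        print(changed_files("."))
    else:
        BASELINE.write_text(json.dumps(snapshot(".")))
```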