Investigating size of backups


Marshall Schor (1012624) | asked Jul 24 '12, 4:26 p.m.
 We run daily backups and have noticed that their size has grown considerably since we began; they are now approaching 20 GB.

What tooling is available to give insight into what's taking the space?  We'd like to do some investigation and, based on the results, make reasonable modifications to our processes.

For example, if such an investigation showed we were accidentally checking very large temporary build logs into RTC, we could alter our build process.  Or, if we found that some "summer interns" were accidentally checking multi-GB data sets into RTC, we could put in some kind of filter to prevent that.

We're currently flying blind, having no insight into what's making our backups grow.  I'm hoping there's some tooling that can help us here.  
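Before reaching for RTC-specific tooling, a first rough cut is simply to break a backup down by its top-level entries and see which part dominates and how it changes day to day. Here is a minimal sketch (not RTC-specific); the path `/backups/latest` is a placeholder for wherever your backups land:

```python
import os

def dir_sizes(root):
    """Return {entry_name: total_bytes} for each immediate child of root.

    Directories are walked recursively; plain files are counted directly.
    """
    sizes = {}
    for entry in os.scandir(root):
        if entry.is_dir():
            total = 0
            for dirpath, _, filenames in os.walk(entry.path):
                for name in filenames:
                    total += os.path.getsize(os.path.join(dirpath, name))
            sizes[entry.name] = total
        elif entry.is_file():
            sizes[entry.name] = entry.stat().st_size
    return sizes

if __name__ == "__main__":
    # Print largest entries first, in MB.
    for name, size in sorted(dir_sizes("/backups/latest").items(),
                             key=lambda kv: -kv[1]):
        print(f"{size / 2**20:10.1f} MB  {name}")
```

Running this against each day's backup and diffing the output will at least tell you whether the growth is in the database dump, SCM content, indices, or something else, which narrows down where to investigate next.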

One answer



Jia Jia Li (8057131186) | answered Sep 23 '13, 10:54 a.m.
Did you find a solution? Can you share it?

Recently I have also been looking for ways to reduce the DB size. Here is what I found:
1. Run the "Latest Repository Metric (Namespace)" report; it shows which RTC components take the most space, and you can adjust your process according to the metric reports. If some information in the chart is unclear, you can submit a PMR and ask support to help analyze it.
2. Scrub the change events, though I believe RTC 4.x has improved this.
3. If you have large single files in SCM, file version content deletion is available starting with RTC 4.x.
4. You can check the DB table space sizes; I suspect this shows information similar to item 1. Someone said that for CLM 3.x there is an internal tablesizes tool, which can be requested via a PMR.
5. Someone said: "One common candidate is if large artifacts from builds are published back to the main server. If this is the case, another option is to publish links instead." I did not record the link...
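For item 4, the table sizes can also be queried directly from the repository database. A minimal sketch, assuming a DB2 back end (a common choice for RTC; adjust for Oracle or SQL Server): `SYSIBMADM.ADMINTABINFO` is DB2's administrative view, and the `*_P_SIZE` columns report physical size in KB.

```sql
-- Top 20 tables by total physical size (data + indexes + LOBs), in KB.
SELECT TABSCHEMA, TABNAME,
       (DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE
        + LONG_OBJECT_P_SIZE + LOB_OBJECT_P_SIZE
        + XML_OBJECT_P_SIZE) AS TOTAL_KB
FROM SYSIBMADM.ADMINTABINFO
ORDER BY TOTAL_KB DESC
FETCH FIRST 20 ROWS ONLY;
```

Run it as a user with access to the administrative views; the top rows usually point at the content or indexing tables that dominate the backups.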

Hope this helps.
