Check-in of large file (more than 2GB) in RTC

Vaijayanti Vaidya (044) | asked Aug 10 '12, 6:52 a.m.
edited Aug 10 '12, 4:58 p.m. by DJ Houghton (2663)

I am trying to check in large files in RTC. Each file is more than 2 GB. Every time I check in and deliver, I get an error and the operation fails at 26%.

The following is the error:
Error during upload
  Failed to upload File /ModelException/Data Imports/ESG/3 - ESG Main.a.igy
  CRJAZ1247I The request to the server failed.  The server returned the http error 500 with error text "Internal Server Error".  Examine any further details here or look in the server log files for more information on how to resolve the issue.

To fix this I increased the "Jazz Repository Connection timeout" to 9999 seconds (the maximum). I also added files in the ~\.jazz-scm folder and set repository.timeout to 60000 seconds.
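For reference, a minimal sketch of setting that client-side timeout property. The file name preferences.properties under ~/.jazz-scm is an assumption here; the exact file the RTC client reads may differ between versions, so check your client documentation:

```shell
# Hedged sketch: the properties file name is an assumption, not confirmed
# for every RTC client version. Sets the SCM request timeout (seconds).
mkdir -p ~/.jazz-scm
printf 'repository.timeout=60000\n' >> ~/.jazz-scm/preferences.properties
```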

But it's still failing. Any idea if I need to change any other settings?


3 answers

sam detweiler (12.5k6195201) | answered Aug 10 '12, 8:48 a.m.
AND if you are running WebSphere as the front-end server, make sure to configure it in synchronous transport mode; otherwise large files can overwhelm the JVM heap and crash the application server.

Our users connect over remote links: the RTC-to-WebSphere link runs at 1 Gb, while the WebSphere-to-user link is under 1 Mb.

We crashed the server randomly about once a month for six months until IBM found all this data in a heap dump.

Do NOT put large binaries into the source control system. Put them in Asset Manager, Artifactory, or ANY place outside the source system.

Karl Weinert (2.0k52736) | answered Aug 10 '12, 8:33 a.m.
edited Aug 10 '12, 8:34 a.m.
Go to https://server:port/ccm/admin, select Advanced Properties, and search for the "Versioned Content Maximum Size" setting. Make sure it is set to zero, which should mean unlimited.

Kevin Ramer (4.5k8180199) | answered Aug 10 '12, 10:14 a.m.
There's also the possibility of space issues on the server itself. We had this happen yesterday on a server whose SCM temporary location was in the "root" file system, which became full while a delivery was being made. We created a separate file system for the temporary files.
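A quick way to rule this out is to check free space on the file system holding the SCM temporary location before large deliveries. The path below is only a placeholder; substitute wherever your server's temporary directory actually lives:

```shell
# Placeholder path: substitute your server's actual SCM temp location.
# Reports the size, used, and available space for that file system.
df -h /tmp
```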

I agree with not putting large binaries in SCM (of any type); what we may disagree on is the precise definition of "large" :-D


sam detweiler commented Aug 11 '12, 8:51 a.m. | edited Aug 11 '12, 8:52 a.m.

Yes, an interesting topic.

In our preferred model, all binaries are in some asset repository. IBM uses Asset Manager (in their DevOps model) to store all build outputs and to reference build inputs (dependencies). At the moment we are using Artifactory.

Once you decide to store ALL build outputs, size is no longer the discussion. You need the common repository to give full traceability of where everything is consumed. But you then have to put a business process on top of that to decide what to do (and how) with the artifacts from intermediate builds, once you decide that last test version + 20 builds is the next test version. Do you keep the 19 intermediate results, and for how long? Does pruning happen automagically, or is it manual? (I'm not a big fan of manual!)

And as you mentioned, you have to have capacity-planning discipline in place to watch your repository servers: source, build, and asset. Nothing is infinite.

N Z commented Aug 13 '12, 1:06 a.m.

Yes, an interesting point. While Asset Manager, Artifactory, or any other system can be used successfully, if one HAS to be used because of the limits of RTC, then there is a problem.

The days when version control systems stored only source are long over. Other version control systems can successfully and efficiently store files of any type and any size; the fact that RTC has a limit is, in my opinion, a design flaw, and quite a serious one.
