SCM process created by daemon start and registered with daemon register not usable
Hello RTC users,
I have started an scm daemon process in a PowerShell window with a specific port number.
I then run some scm commands against it, but each command results in a new scm process being started, and when I come to deliver it fails because the scm process I fired deliberately blocks the share/checkin.
What could be the problem?
Some PowerShell is attached. I want to have scm open all the time to deliver tons of files one at a time; having it close and reopen after every operation is costly in time, but I can't seem to avoid it. Any pointers appreciated.
It feels like there should be some way of connecting each scm command to a particular scm process. Being in a particular directory seems temperamental; I'm thinking it might be some security thing.
Accepted answer
Before you use these commands in your scripts, I would recommend running them manually to understand how the scm CLI tool behaves. It also helps if you paste the command output here to diagnose the issue.
1) I see that you have provided the sandbox location twice: once while creating the daemon (args[2]) and once while registering ($itemCacheDir). Do these two inputs point to the same path? If it is just one sandbox you want the daemon to track, you can provide it directly in the daemon start call (and skip the register call). There is nothing wrong with how you have done it; I just thought I would point it out.
2) When you say subsequent runs of scm commands, does that mean you are running the commands using 'scm' or 'lscm'? Only if you run the commands using 'lscm' will it try to reuse the daemon you started earlier. Note: the commands must be run from within the sandbox, or the command should specify the -d/--directory option (wherever applicable) with the sandbox path as its value.
Since 'lscm login' does not support the -d option, if it is run outside the sandbox path it will create a new daemon.
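As a minimal illustration of the second point (the paths, repository URI, and user are placeholders, and exact options can vary by CLI version):

    # Start a daemon on the sandbox root; commands run from inside that
    # sandbox can then find it. 'scm daemon start' prints its port.
    scm daemon start C:\RTC\sandboxes\myComponent

    # Use 'lscm' (not 'scm') from inside the sandbox so the running daemon
    # is reused instead of a new scm process being spawned each time.
    Set-Location C:\RTC\sandboxes\myComponent
    lscm login -r https://clm.example.com:9443/ccm -u builder -n myRepo
    lscm status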
Richard Good selected this answer as the correct answer
Comments
Richard Good
commented Mar 25 '19, 3:45 a.m.
Thanks very much for your reply, it helps.
I am an old hand at this sort of thing, but I lost all my examples when I moved clients.
I think my exact issue came from firing scm daemon start on the same directory as the subsequent commands. If I move it up and fire it from a parent directory, my life improves ;-)
What I am after here is speed. My current client wishes to add thousands, sometimes many thousands, of files and have each file's share/delivery attached to a single change set, so that baselines of an old change control tool can be recreated in RTC. This works fine, but it is dog slow. My idea is to have an SCM process running for each component, maybe even for directories inside a component, and do a load of shares and delivers in parallel (a rough sketch follows below). Speed is of the essence here: week-long migrations need to take days, or ideally hours. Any pointers would help.
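A rough PowerShell sketch of that parallel idea, assuming one sandbox (and therefore one daemon, started earlier with 'scm daemon start') per component; the paths are invented:

    $sandboxes = 'C:\RTC\sandboxA', 'C:\RTC\sandboxB', 'C:\RTC\sandboxC'

    $jobs = foreach ($sb in $sandboxes) {
        Start-Job -ArgumentList $sb -ScriptBlock {
            param($sandbox)
            # Running lscm from inside the sandbox makes it reuse that
            # sandbox's daemon, so each component delivers in parallel.
            Set-Location $sandbox
            lscm deliver
        }
    }
    $jobs | Wait-Job | Receive-Job   # collect output from all deliveries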
Richard Good
commented Mar 25 '19, 3:48 a.m.
I also have the complication that a file may have 10 revisions, e.g. fred.doc would be weirdName;1, weirdName;2, and so on.
The idea is to rename weirdName;1 to fred.doc, share/deliver it, then replace it with weirdName;2 and check in/deliver, etc., making sure the history is maintained.
I need some sort of super-streamlined script that does these actions in a split second, not 7 or 8 seconds.
Is there a reason why you deliver immediately after committing every change? You could probably batch it up. Say you commit a change for a file and complete the change set; you repeat this step for every change, and once you have, say, 20 or 25 change sets as outgoing changes, you call 'lscm deliver' from the sandbox root to deliver all the outgoing change sets. You could also create baselines and deliver the baselines. This should give you some speed improvement.
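Something along these lines, as a sketch (the file list and batch size are placeholders, and it assumes a daemon is already running for the sandbox):

    # Check files in one at a time, but deliver only every 25 change sets.
    $files = Get-ChildItem -File C:\RTC\sandbox\CompA
    $pending = 0
    foreach ($f in $files) {
        lscm checkin $f.FullName    # commits the change into a change set
        # (complete the change set here if you need one change set per
        # file; otherwise the next checkin joins the still-open one)
        $pending++
        if ($pending -ge 25) {
            lscm deliver            # delivers all outgoing change sets
            $pending = 0
        }
    }
    if ($pending -gt 0) { lscm deliver }   # deliver any remainder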
Richard Good
commented Mar 25 '19, 5:06 a.m.
The script would hopefully do something like:

    for each revision-1 file in a directory/workspace
        create change set
        share file
    next
    deliver all files

    for each revision 2..n file in a dir/workspace
        copy rev n over the rev 1 file
        create change set
        check in file
    next
    deliver all files
Is it possible to script this using scm? Note that I would also need to capture any errors and attempt to recover from them. Grabbing and processing JSON output seems doable from PowerShell.
Yes, it is possible. You can check the return code for any failures.
By the way, you do not have to create change sets before share/checkin, as one will be created automatically. Note: you should not have any open change sets, otherwise the change will be added to the existing open change set.
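A sketch of the first phase of your loop in PowerShell, with basic error capture via $LASTEXITCODE; the workspace and component names are placeholders, and it assumes a daemon is already running for the sandbox:

    $sandbox = 'C:\RTC\sandbox\CompA'
    Set-Location $sandbox   # so lscm reuses the sandbox's daemon

    # Small wrapper: run an lscm command and stop the batch on a nonzero
    # return code, so failures can be investigated and recovered.
    function Invoke-Lscm {
        param([string[]]$CliArgs)
        $out = & lscm @CliArgs 2>&1
        if ($LASTEXITCODE -ne 0) {
            throw "lscm $($CliArgs -join ' ') failed: $out"
        }
        $out
    }

    # Phase 1: one change set per revision-1 file, then a single deliver.
    foreach ($f in Get-ChildItem -File $sandbox) {
        Invoke-Lscm @('create', 'changeset', "share $($f.Name)") | Out-Null
        Invoke-Lscm @('share', 'myWorkspace', 'CompA', $f.FullName) | Out-Null
    }
    Invoke-Lscm @('deliver') | Out-Null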
Richard Good
commented Mar 26 '19, 7:09 a.m.
The pseudo code is meant to indicate that I deliver after sharing multiple files; I'm sharing them one at a time.
Thanks for the share tip. I want to keep track of the change set used to share/check in a file so I can create baselines from a collection of change sets later on. It is not immediately obvious how to do that if I don't create them myself; creating one immediately before checking in a file seems to create the desired association automatically, and I get the UUID of the change set for use later on. Is there some better way of doing it? I may add an example script, as this seems like it might be of interest to others.
I believe the share and checkin commands' JSON output (when run with the --json option) could be parsed to get the change set UUID, but then you would need to parse two separate outputs.
Your solution of creating a change set prior to the share or checkin also sounds good; it just means two server calls, one to create the change set and another for the share/checkin of the change. Creating a change set is not an expensive operation, so it should be quick.
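For example, something like this could capture the UUID when creating the change set up front. The exact JSON shape varies by CLI version, so inspect your own output first; the property name below is an assumption:

    # Create the change set with --json output and parse out its UUID.
    $raw  = lscm create changeset --json "import fred.doc"
    $json = $raw | ConvertFrom-Json
    $uuid = $json.uuid        # assumption: check the real property name
                              # in your CLI's output

    lscm checkin .\fred.doc   # lands in the open change set created above
    # ...later, use the collected UUIDs to baseline/deliver together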
Richard Good
commented Apr 03 '19, 11:31 a.m.
Thanks very much for your help on this; a couple of your tips allowed me to solve my problems.
One other answer
The way you suggested is not working for me; please provide some other method.
Comments
Richard Good
commented Mar 25 '19, 7:27 a.m.
Not a sensible question, answer, or comment ;-(