SCM process created by daemon start and registered with daemon register not usable


Richard Good (872861) | asked Mar 20 '19, 12:38 p.m.
edited Mar 20 '19, 12:41 p.m.

Hello RTC users,

I have started an scm daemon process in a PowerShell window, on a specific port number.

I then run some scm commands to use it, but each command starts a new scm process instead, and when I come to deliver it fails because the scm daemon I fired deliberately blocks the share/checkin.

What could be the problem?
Some PowerShell is attached. I want to have scm open all the time to deliver tons of files one at a time; having it close and open after every operation is costly in time, but I can't seem to avoid it. Any pointers appreciated.

It feels like there should be some way of connecting each scm command to a particular scm process; relying on being in a particular directory seems temperamental. It might be some security thing, I'm thinking.

# Log on and start one daemon for each process. Note that we may need a separate directory/sandbox for each of these operations to run them in parallel.

Start-Job -ScriptBlock { Set-Location "$($args[2])"; & "$($args[0])" "daemon" "start" "$($args[2])" "--port" "61934" } -ArgumentList $scmLoc, $configDir, $itemCacheDir

# Fire up scm, then wait for the daemon to start up.

Start-Sleep -Seconds 10

Set-Location "$($itemCacheDir)"; & "$($scmLoc)" "daemon" "register" "61934" "$($itemCacheDir)"

# With the current location set to the itemcache directory, log in.

&"$($lscmLoc)" "login" "-r" "$($rtcURL)" "-u" "$($rtcUser)" "-P" "$($rtcPass)" "-n" "jazzhost"


Comments
Richard Good commented Mar 20 '19, 12:42 p.m. | edited Mar 20 '19, 12:43 p.m.

Sorry about the formatting; I try to fix it and it flags it as spam. Weird ;-(


The scm login command seems like it needs to be associated with a running scm process too.

Accepted answer


Shashikant Padur (4.3k27) | answered Mar 20 '19, 11:38 p.m.
JAZZ DEVELOPER
Before you use these commands in your scripts, I would recommend running them manually to understand how the scm CLI tool behaves. It also helps if you paste the command output here to diagnose the issue.

1) I see that you have provided the sandbox location twice: once while creating the daemon (args[2]) and once while registering ($itemCacheDir). Do these two inputs point to the same path? If it is just one sandbox you want the daemon to track, you can provide it directly in the daemon start call (and skip the register call). There is nothing wrong with how you have done it; I just thought I would point it out.

2) When you say subsequent runs of scm commands, do you mean you are running the commands using 'scm' or 'lscm'? Only if you run the commands using 'lscm' will it try to reuse the daemon you started earlier. Note: the commands must be run from within the sandbox, or the command should specify the -d/--directory option (wherever applicable) with the sandbox path as its value.
Since the 'lscm login' command does not support the -d option, running it outside the sandbox path will create a new daemon.
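
To illustrate, here is a minimal sketch of that flow, reusing the variable names from the question (the paths and the 'status' call are assumptions for illustration, not the actual commands from the script):

# Run lscm from inside the sandbox so it reuses the registered daemon.
Set-Location "$($itemCacheDir)"        # the sandbox the daemon is tracking
& "$($lscmLoc)" "login" "-r" "$($rtcURL)" "-u" "$($rtcUser)" "-P" "$($rtcPass)" "-n" "jazzhost"
# Commands that accept -d/--directory can also point at the sandbox explicitly:
& "$($lscmLoc)" "status" "-d" "$($itemCacheDir)"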

Richard Good selected this answer as the correct answer

Comments
Richard Good commented Mar 25 '19, 3:45 a.m.

Thanks very much for your reply, it helps.


I am an old hand at this sort of thing, but I lost all my examples when I moved clients.

I think my exact issue came from firing scm daemon start on the same directory as the subsequent commands. If I move it up and fire it from a parent directory, my life improves ;-)
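
In script form, that fix looks something like this (a sketch only, assuming the directory layout from the original script; the Split-Path step and the parent/child relationship are illustrative assumptions):

# Start the daemon on a parent directory, then run commands from a directory beneath it.
$parentDir = Split-Path "$($itemCacheDir)" -Parent
& "$($scmLoc)" "daemon" "start" "$($parentDir)" "--port" "61934"
Set-Location "$($itemCacheDir)"        # inside the directory tree the daemon tracks
& "$($lscmLoc)" "login" "-r" "$($rtcURL)" "-u" "$($rtcUser)" "-P" "$($rtcPass)" "-n" "jazzhost"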

What I am after here is speed. My current client wishes to add thousands, sometimes many thousands, of files and have each file's share/delivery attached to a single change set, so that baselines from an old change control tool can be recreated in RTC. This works fine, but it is dog slow. My idea is to have an SCM process running for each component, maybe even for directories inside a component, and do a load of shares and delivers in parallel. Speed is of the essence here; week-long migrations need to take days, or ideally hours. Any pointers here would help.


Richard Good commented Mar 25 '19, 3:48 a.m.

I also have the complication that I may have 10 file revisions, e.g. fred.doc would be weirdName;1, weirdName;2, and so on.


The idea is to rename weirdName;1 to fred.doc, share/deliver, then replace it with weirdName;2, check in/deliver, and so on, while making sure the history is maintained.

I need some sort of super-streamlined script that does these actions in a split second, not 7 or 8 seconds.


Shashikant Padur commented Mar 25 '19, 4:55 a.m.
JAZZ DEVELOPER
Is there a reason why you deliver immediately after committing every change? You could probably batch it up. Say you commit a change for a file and complete the change set. You could repeat this step for every change, and once you have, say, 20 or 25 change sets as outgoing changes, you could call 'lscm deliver' from the sandbox root to deliver all of them at once. You could also create baselines and deliver the baselines. This should give you some speed improvement.
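
As a rough PowerShell sketch of that batching (the file list, $lscmLoc, and the '--complete' flag on checkin are assumptions to verify against your CLI version):

$batchSize = 25
$pending = 0
foreach ($file in Get-ChildItem -File) {
    # Check in the file and complete its change set ('--complete' is assumed here).
    & "$($lscmLoc)" "checkin" "--complete" "$($file.FullName)"
    if ($LASTEXITCODE -ne 0) { Write-Error "checkin failed for $($file.Name)"; continue }
    $pending++
    if ($pending -ge $batchSize) {
        & "$($lscmLoc)" "deliver"      # one deliver for the whole batch, from the sandbox root
        $pending = 0
    }
}
if ($pending -gt 0) { & "$($lscmLoc)" "deliver" }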

Richard Good commented Mar 25 '19, 5:06 a.m.

The script would hopefully do something like this:

for each revision 1 file in a directory/workspace
      create changeset
      share file
next
deliver all files

for each revision 2..n file in a directory/workspace
      copy rev 2 over the rev 1 file
      create changeset
      check in file
next
deliver all files

Is it possible to script this using scm? Note that I would also need to capture any errors and attempt to recover from them. Grabbing and processing JSON output seems doable from PowerShell.


Shashikant Padur commented Mar 25 '19, 6:45 a.m.
JAZZ DEVELOPER
Yes, it is possible. You can check the return code for any failures.
By the way, you do not have to create change sets before share/checkin, as one will be created automatically. Note: you should not have any open change sets, otherwise the change will be added to the existing open change set.
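
For example, the second loop of the pseudocode might look like this in PowerShell (a sketch only; the $revisions list and its Source/Target/Name properties are hypothetical):

foreach ($rev in $revisions) {
    Copy-Item "$($rev.Source)" "$($rev.Target)" -Force   # copy rev N over the rev N-1 file
    & "$($lscmLoc)" "checkin" "$($rev.Target)"           # a change set is created automatically
    if ($LASTEXITCODE -ne 0) { Write-Error "checkin failed: $($rev.Name)"; break }
}
& "$($lscmLoc)" "deliver"                                # deliver all outgoing change sets at once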

Richard Good commented Mar 26 '19, 7:09 a.m.

The pseudocode is meant to indicate that I deliver after sharing multiple files; I'm sharing them one at a time.

Thanks for the share tip. I want to keep track of the change set used to share/check in a file so I can create baselines from a collection of change sets later on. It's not immediately obvious how to do that if I don't create them myself; creating one immediately before checking in a file seems to create the desired association automatically, and I get the uuid of the change set for use later on. Is there some better way of doing it? I may add an example script; this seems like it might be of interest to others.


Shashikant Padur commented Mar 26 '19, 11:22 p.m.
JAZZ DEVELOPER
I believe the share and checkin command JSON output (when run with the --json option) could be parsed to get the change set uuid. But then you need to parse two separate outputs.
Your solution of creating a change set prior to the share or checkin also sounds good; it just means there are two server calls: one to create the change set and another for the share/checkin of the change. Creating a change set is not an expensive operation, and it should be quick.
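
In PowerShell that parsing might look like this (a sketch; the 'changes'/'uuid' property path is an assumption, so inspect the actual JSON from your CLI version first):

$raw = (& "$($lscmLoc)" "checkin" "--json" "fred.doc") | Out-String
if ($LASTEXITCODE -eq 0) {
    $result = $raw | ConvertFrom-Json
    $csUuid = $result.changes[0].uuid    # assumed property path
    Write-Output "Checked in under change set $csUuid"
}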

Richard Good commented Apr 03 '19, 11:31 a.m.

Thanks very much for your help on this; a couple of your tips allowed me to solve my problems.


One other answer



misha rajpoot (341) | answered Mar 25 '19, 5:46 a.m.

The way you guys described is not working for me. Please provide some other method.


Comments
Richard Good commented Mar 25 '19, 7:27 a.m.

Not a sensible question, answer or comment ;-( 


