
Autobuilds / scm changeset associate command


Jeff Foege (4174617) | asked May 24 '12, 3:34 p.m.
Part of my automated build modifies a header file to increment the build number. I want to deliver this modification after the builds are complete.

After the builds finish I run the 'scm status' command, and I've figured out that I then need to run 'scm checkin'. We have a precondition that requires associating a work item with the change set, so I know I have to run 'scm changeset associate'. My question is: how do I know what the changeset argument is so I can run that command?

scm changeset associate -r ${repositoryAddress} -u ${userId} -P password -d ${basedir} -w ${workspaceId} changeset

According to the documentation, changeset is the name, alias, or UUID of the change set to operate on.

13 answers



Tim Mok (5.6k28) | answered May 24 '12, 4:37 p.m.
JAZZ DEVELOPER
When you run 'scm status', you should see the outgoing change set with a number next to it. That number is the alias: a temporary id assigned to the change set for the duration of your session.

Jeff Foege (4174617) | answered May 24 '12, 5:35 p.m.
Except how is my script supposed to know this during an automated build?

I was finally able to get the checkin command to work. I had thought I could define my own alias when I called the checkin command; if I could define the alias, then I would know what to pass to the 'scm changeset' command.

Tim Mok (5.6k28) | answered May 25 '12, 9:33 a.m.
JAZZ DEVELOPER
You could grep the output from the status to get the alias.
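A rough sketch of that grep approach is below. The status output shown is hypothetical (it assumes aliases appear as a parenthesized number under a "Change sets:" line), so adjust the pattern to whatever your RTC version actually prints:

```shell
# Sketch: extract the outgoing change-set alias from 'scm status' output.
# The sample output below is an assumption; real RTC output may differ.
status_output='Workspace: (1000) "Build Workspace"
  Component: (1001) "Core"
    Outgoing:
      Change sets:
        (1005) --@ "Increment build number"'

# Take the first parenthesized number after the "Change sets:" line.
ALIAS=$(printf '%s\n' "$status_output" \
  | grep -A1 'Change sets:' \
  | grep -oE '\([0-9]+\)' \
  | head -n 1 \
  | tr -d '()')
echo "$ALIAS"
```

In a real build you would replace the sample text with the live command, e.g. `scm status | grep ...`.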

Jeff Foege (4174617) | answered May 25 '12, 1:01 p.m.
You could grep the output from the status to get the alias.


Ok. So what you are telling me is that RTC doesn't have a way to easily deliver a file that was modified during the build process, and I have to write my own code to accomplish this task. This is something RTC should be able to do without me writing my own code! All I want to do is deliver a file that was modified during the build through the CLI, and I should be able to accomplish that in two steps or less. I just can't believe the process to deliver a file is so complex in the RTC CLI. If I were using Perforce SCM I wouldn't have spent a couple of days working on this; I would have been done in less than 30 minutes.

The status output lists roughly 9 aliases. How would grep be able to tell me which alias I need so I can deliver it? This would have been easy if you allowed the user to assign whatever alias they want.

I guess I'll have to write a Perl script to get around the insufficiencies of the RTC SCM.

Tim Mok (5.6k28) | answered May 25 '12, 2:48 p.m.
JAZZ DEVELOPER
The build feature wasn't written to allow checking in and delivering a change. It builds what's in the stream, so the main concern was providing a good workflow to get changes; building doesn't really involve changing what's in the stream. The CLI was intended to provide a way to script any extra functionality. It isn't that RTC SCM isn't providing what you want; it just doesn't provide a tight integration for check-in and delivery during a build. The SCM component isn't responsible for the features of RTC build.

I'm not sure what you've been checking in during the build process. If you're delivering all outgoing changes from the build process, the doc says specifying a change set is optional: by default, deliver pushes all outgoing changes, so you might not even need the alias.

I would suggest opening an enhancement if you want build to provide something more specific. Or perhaps you want the CLI to provide an easier way to script check-in and deliver if the status output isn't easily parsed for the change set alias.

Evan Hughes (1.6k17) | answered May 25 '12, 3:16 p.m.
JAZZ DEVELOPER
My question is how do I know what the changeset is so I can run that command?


Hey Jeff,

Have you had a chance to look at the article on using the CLI in builds? It has a section that describes how to get the change set UUID from checkin with the run_and_filter.pl script that's attached to the article.

Alternatively, you could allow the build user to deliver changes without a linked work item. I realize not all teams are free to do that, but I don't know the specifics of your situation.

I hope that helps.

Jeff Foege (4174617) | answered May 25 '12, 5:38 p.m.


I'm surprised you guys never thought that a customer's build process might want to deliver a properties file or a file that has the build number in it. The fact that I can't easily deliver a modified file through automation is a sign of short-sightedness about what is important to the customer in implementing SCM/builds. You say, "Building doesn't really involve changing what's in the stream." According to who? IBM? That is a really astounding point of view to tell a customer how they should be doing a software build. The RTC SCM CLI isn't providing what I need, and I have to write scripts to get around this problem. You say the CLI was to provide a way to script extra functionality, yet you didn't think someone might want to deliver a file during the build process. Customers don't always work the way IBM wants/does things, and the tool should be flexible and powerful enough to accommodate the way the customer does software builds. I've certainly worked with other SCM tools that were far easier to work with than what I'm going through with RTC. One of the projects I support will be moving to a different source control system, using RTC only to track work items, due to all the troubles some developers have had with RTC source control.

There is only one outgoing change set, but I can't deliver it because 1) it has no comment and 2) there is no work item associated with it, which our preconditions require. According to the documentation I need the alias to edit comments and attach a work item.

You know, I would file an enhancement as you suggested if I thought it would do anything. I've come across bugs that were filed a year ago, and some a couple of years ago, that address issues but are nowhere close to being put in the product.

I'm just very frustrated with RTC right now. I keep running into things that either don't work right or don't work at all, features that should be in the product but aren't, and bad documentation.

@Evan: Yes, I did see that article; I only recently found it as I tried to get past this problem. Unfortunately I can't get that article to work. The article's Limitations section states that it's inherently risky to push change(s) back to the stream. This limitation comes from the way RTC is designed, not from other SCM tools. The file I modify in the build process is not risky, and you guys need to trust that the customer knows what they are doing. Question: you don't see anything wrong with the fact that you need to run a secondary script to get a UUID out so a change can be delivered?

Also, you guys treat a requested build like a personal build. Even though there are continuous integration builds, there will be times when you want to rebuild the product for a release effort. Frankly, IMO, instead of having scheduledBuild & PersonalBuild properties there should be a single BuildType property with one of three values: scheduled, requested, or personal.
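For illustration only, the property scheme being suggested might look like this (the names here are a sketch, not actual RTC build properties):

```
# Today: two independent boolean properties
scheduledBuild=true
personalBuild=false

# Suggested: one enumerated property
buildType=requested    # one of: scheduled, requested, personal
```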

Tim Mok (5.6k28) | answered May 28 '12, 10:56 a.m.
JAZZ DEVELOPER
I still recommend filing the enhancement. It's a way to track interest in a particular feature. The more pressure for a feature from customers, the greater chance it will be implemented.

Evan Hughes (1.6k17) | answered May 28 '12, 11:59 a.m.
JAZZ DEVELOPER
Unfortunately I can't get that article to work.


Can you explain what isn't working? Please include the command-line that you're running as well as the output from the CLI.


The article's Limitations section states that it's inherently risky to push change(s) back to the stream. This limitation comes from the way RTC is designed, not from other SCM tools.


This limitation is present in all optimistic SCM systems. If another user has modified the file that you are trying to deliver to, there will be a conflict. That conflict must be resolved before delivery, or there's a risk of data loss. Other SCM systems handle the conflict in different ways, but it remains present.


The file I modify in the build process is not risky, and you guys need to trust that the customer knows what they are doing.


Aside from the "Limitations" section in that document, there's nothing stopping users from delivering changes back to the stream during a build.

As to your question about running a script, I'll answer that in a separate message. It's getting a bit long. ;)

Evan Hughes (1.6k17) | answered May 28 '12, 2:23 p.m.
JAZZ DEVELOPER
Question: you don't see anything wrong with the fact that you need to run a secondary script to get a UUID out so a change can be delivered?


The user doesn't need to know the UUID of the change set to deliver: they can run 'scm deliver' without any arguments to deliver all changes from the workspace back to the stream. The build user should only have outgoing changes if they've committed something, so only the modified resources will be delivered.

Personally, I wouldn't go that route because I prefer the precision of specifying exactly what I want to happen. The build user should only have extra outgoing changes if there are multiple builds running at once or change sets have been removed from the stream over the course of the build. Those are both unlikely events, but they can happen. I prefer the cost of ensuring that doesn't happen to the cost of trying to figure out where unexpected changes are coming from. But that's just my point of view - users are free to do whatever they want.


Do I think it's lame that we require a script to scrape the change set alias in order to be precise? Not for this use case.

I have a few reasons:

1. Builds are already script-intensive. In my experience they always boil down to writing some kind of script/makefile/definition and eventually doing something funky with sed/awk/perl. Asking the build engineer to do one more thing doesn't seem very expensive, especially when we show them exactly how to do it with an article.

2. We try to follow the Unix philosophy. When I was working on the CLI, I tried to follow the maxims of "Make each program do one thing well" and "Write simple parts connected by clean interfaces", providing a lot of subcommands that each do one thing and can be easily tied together.

3. We're primarily trying to support the developer use case. That cycle is: checkin, accept, resolve conflicts, deliver. That's why those map to top-level subcommands in the CLI; those are the things the user cares about and needs to be able to run easily.

When using the CLI for day-to-day work, I found that cycle worked pretty well for me. It was easy to get my changes off my hard drive and into our streams.

But your use case is different. It sounds like you want a monolithic subcommand:

scm create-new-change-set-then-check-in-and-deliver --comment "Foo" --work-item 999 <path> <path> <path> ...


which makes sense for build meisters, given that builds should not have incoming changes when they try to deliver, and should have a well-defined set of files to commit. But when you remove those restrictions, you get a large number of failure modes:

  • the user may not be allowed to create a change set,
  • the paths may already be present in another open change set,
  • the comment may be invalid,
  • the work item may not exist,
  • linking to the work item may be forbidden,
  • the paths may not exist,
  • if a failure occurs after the change set is created, items will be left in the user's remote workspace that the user may not expect.


Given that this only saves the user a shell script like:
CS_ID=`run_and_filter.pl "    \\((\\d{4})\\)" scm checkin foo.txt`

scm cs assoc $CS_ID <work>
scm cs comment $CS_ID "Comment text"
scm deliver $CS_ID
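For comparison, the filtering step above could be sketched as a small shell function. The parenthesized four-digit alias format is an assumption about what 'scm checkin' prints, and the demo uses echo in place of the real command:

```shell
# Hypothetical sketch of a run-and-filter helper: run a command, pass its
# output through to the log, and emit the first "(NNNN)" alias it prints.
filter_alias() {
  out=$("$@") || return 1
  printf '%s\n' "$out" >&2          # keep the command's output visible
  printf '%s\n' "$out" | grep -oE '\([0-9]{4}\)' | head -n 1 | tr -d '()'
}

# Demo with echo standing in for 'scm checkin foo.txt':
CS_ID=$(filter_alias echo 'Change sets: (1042) "Increment build number"')
echo "$CS_ID"
```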


I'm not convinced that it's worth the high degree of complexity it would introduce into our code base. Of course, I'm no longer responsible for the CLI, so you may receive a different response if you file a feature request.
