Best practices for implementing "vendor branch strategy"

James Monschke (835) | asked Jul 24 '14, 5:47 p.m.
What are the recommendations for "best practices" for implementing a "vendor branch strategy" with RTC source control?

We are currently using 4.0.3 and will be upgrading in the next few days to 4.0.6 RTC server and client.

One member of our team (the most experienced with RTC among us) believes we should do it with workspaces and baselines, without creating an explicit stream for the "vendor branch". I am concerned about the long-term maintenance of that approach compared to creating a proper stream: I see potential problems when we want to look at the history of changes, and I worry that while it may be easier in the short term, it will introduce complexity in long-term maintenance, e.g. a dependence on naming conventions for baseline labels.

Also, if we implement a "vendor stream" is it better to have a single stream that is for all vendor drops, or separate streams for each vendor / component?

I am looking for "best practices" advice from people with more RTC experience than we have; ours is very limited (we recently migrated from Perforce, and this group has not used a vendor branch in the past).


One answer

Davyd Norris (20221014) | answered Jul 24 '14, 7:27 p.m.
I contribute code to an open source project, which is written in Python and is hosted on LaunchPad, using Bazaar as the SCM system. I, however, use RTC as my SCM system, where I fix bugs, work on core enhancements, and also  localise my own implementation. The open source project is quite active so there are lots of changes in between releases.

I have set up two users - myself and a 'core' user:
- The core user owns a workspace, a core_dev stream and a core_rel stream. On a semi-regular basis, I delete the contents of the project in core's local workspace, then use the Bazaar client to pull down all recent changes and export them into the same project structure in core's local workspace. I then start the RTC client as the core user, and RTC automatically detects all changes to the files since the last import, showing them in the change explorer. I check these in and deliver them to the core_dev stream, labelling the change set with the Bazaar revision number. All imports like this result in trivial merges because there is only one contributor to the stream. When the open source project does a release, I bring in all changes as above, then cut a baseline and deliver it to the core_rel stream.
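The delete-then-export-then-detect cycle above can be sketched with Python's standard library standing in for both the Bazaar export (modelled as a plain directory copy) and the RTC change scan (modelled with filecmp). All paths and file names here are made up for illustration:

```python
import filecmp
import os
import shutil
import tempfile

def sync_and_report(export_dir, sandbox_dir):
    """Report what an RTC scan would flag, then mirror the fresh vendor
    export into the sandbox (a stand-in for the delete-then-export step)."""
    cmp = filecmp.dircmp(export_dir, sandbox_dir)
    added = sorted(cmp.left_only)      # new files in the vendor drop
    removed = sorted(cmp.right_only)   # files deleted upstream
    changed = sorted(cmp.diff_files)   # files modified upstream
    shutil.rmtree(sandbox_dir)                # "delete the contents of the project"
    shutil.copytree(export_dir, sandbox_dir)  # "export into the same structure"
    return added, removed, changed

# Throwaway directories standing in for the Bazaar export and the RTC sandbox.
export = tempfile.mkdtemp()
sandbox = tempfile.mkdtemp()
for name, text in [("a.py", "newer"), ("b.py", "same")]:
    with open(os.path.join(export, name), "w") as f:
        f.write(text)
for name, text in [("a.py", "old"), ("b.py", "same"), ("gone.py", "x")]:
    with open(os.path.join(sandbox, name), "w") as f:
        f.write(text)

result = sync_and_report(export, sandbox)
print(result)  # → ([], ['gone.py'], ['a.py'])
```

In the real workflow the detection and check-in are done by the RTC client itself; the point of the sketch is only that a wipe-and-replace import surfaces exactly the adds, deletes and modifications from the vendor drop.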

This results in my core_dev stream containing versions that represent the open source project's work in progress, and the core_rel stream containing its major releases. This is very useful when I am doing bug fixes and enhancements, as I can instantly create diffs between key revisions and my code, export them, and send them to the project maintainer.

For my localised project, I have exactly the same setup: user dave owns a workspace, a local_dev stream and a local_rel stream. I populate the 'dave' local workspace by changing the flow target to either core_dev or core_rel, depending on what I am doing, accepting all changes, and resolving any merges. I then set the flow target back to local_dev, do any local builds and testing, and deliver my changes. When I release my project (which usually coincides with the open source project's release), I cut a baseline and deliver to local_rel.

The open source project core code lives in a single component, and my local project uses that as well as a couple of other components for build artefacts and for an admin UI of my own.

If I find a bug in the code, it's always in the 'dave' workspace (core is only ever imported, it's never modified or built as such). I fix the bug, deliver it to local_dev and test it and if it's all ok, I then create a temporary workspace off an appropriate baseline of core_dev or core_rel, change the workspace flow target to local_dev, and compare the changes in the files affected by the changeset(s) where the bug was fixed. I then send the deltas to the maintainer, and delete the temporary workspace. Eventually my bug fix makes it into the core code and I simply merge it with my local fix when it comes through.
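The "compare and send the deltas" step above boils down to a unified diff between the baseline's copy of each affected file and the fixed local copy. A minimal stand-in using Python's difflib (file contents and names are invented for illustration):

```python
import difflib

# Hypothetical contents: the core baseline's version vs. the fixed local copy.
baseline = ["def parse(s):\n", "    return s.split(',')\n"]
fixed    = ["def parse(s):\n", "    return [t.strip() for t in s.split(',')]\n"]

patch = "".join(difflib.unified_diff(
    baseline, fixed,
    fromfile="core_dev/parser.py", tofile="local_dev/parser.py"))
print(patch)
```

In practice RTC produces this comparison directly from the temporary workspace's flow-target compare; the sketch just shows the shape of the patch that ends up going to the maintainer.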

I've been doing this for over a year now and it's extremely simple and very effective.

James Monschke commented Jul 24 '14, 7:45 p.m.

Thanks for describing your practices. 

I would still like to hear from some people on (very) large projects with many components, many developers and long project lifetimes/histories and maintenance histories.


Davyd Norris commented Jul 24 '14, 7:51 p.m.

In the context of your original question, the sizes you mention here make it a no brainer to use streams in some way.

BTW, I have implemented this same approach quite a few times with my clients, for both code and non-code projects. I do a lot of work with people who use industry UML models, and this is a seamless approach for managing those.

