Blogs > Jazz Team Blog >

Countdown to the next Rational Team Concert: Part II – Source control enhancements

This is a follow-up to Part I and the next in a series of posts highlighting what’s coming in our next version of Rational Team Concert. In this entry, I will focus on the Source Control enhancements*. I’ll talk about enhancements in the core Source Control engine that enable distributed source control, a native client on z/OS, context-aware search, and finally some additions to the source control Web UI.

Distributed Source Control

My first draft of this blog entry was too long as I tried to explain how distributed source control fits into our master plan, so I pulled that discussion into a new article, Jazz Source Control: Design Objectives. In a nutshell, our vision responds to a claim Eric Sink made last year: that distributed and enterprise version control don’t mix. I don’t agree at all. In fact, it’s quite the opposite, and our tools show why. We’ve brought collaborative development to version control, empowering developers with repository workspaces and personal builds, by using a change set based version control model with a polished user interface and full ALM integration.

It has been our intent from day one to extend our model to work across servers. We want to bring the best of a fast collaborative source control system to everyone, whether you are a small team or a corporate environment with strict privacy or change control rules. Allowing operations to span servers is a natural extension.

From a user’s perspective, the easiest explanation of distributed source control is that you can now deliver and accept across repositories. It’s that simple. The concepts you are familiar with will continue to work, but we’ve extended the user interface in a couple of small places as follows:

  • When creating a repository workspace you can select the repository where the new workspace should be created. This is the simplest case for moving source code to another server: select a stream and then create a new repository workspace that is targeted at another server.
  • When changing the flow target of a repository workspace you can select a stream in another repository. The Pending Changes view continues to work like before, but magically when you accept/deliver/compare it’s all going to another server.
  • It’s also possible to have a mix of flow targets at the component level that span repositories. We display a “remote” flow using the “[~]” decoration and show the target server name in brackets. In this example all components are flowing to the JUnit stream on SERVER1 while the UI component is flowing to the Remote Team Stream on SERVER2.
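To make the flow-target idea above concrete, here is a small toy model of deliver/accept across repositories. This is purely illustrative; the classes and names (Repository labels, `Stream`, `Workspace`) are my own assumptions, not RTC’s actual API.

```python
# Toy model of deliver/accept across repositories -- illustrative only,
# not RTC's real implementation. Names and structure are assumptions.

class Stream:
    def __init__(self, name, repo):
        self.name = name
        self.repo = repo          # the server hosting this stream
        self.change_sets = []     # ordered history of accepted change sets

class Workspace:
    def __init__(self, name, repo, flow_target):
        self.name = name
        self.repo = repo               # may live on a different server
        self.flow_target = flow_target # a Stream, possibly in another repository
        self.change_sets = []

    def accept(self):
        """Pull change sets from the flow target, even across repositories."""
        incoming = [cs for cs in self.flow_target.change_sets
                    if cs not in self.change_sets]
        self.change_sets.extend(incoming)
        return incoming

    def deliver(self):
        """Push local change sets to the flow target, even across repositories."""
        outgoing = [cs for cs in self.change_sets
                    if cs not in self.flow_target.change_sets]
        self.flow_target.change_sets.extend(outgoing)
        return outgoing

# A workspace on SERVER2 whose flow target is a stream on SERVER1:
junit = Stream("JUnit", repo="SERVER1")
junit.change_sets.append("cs-101: fix assertion message")
ws = Workspace("my-workspace", repo="SERVER2", flow_target=junit)
ws.accept()                        # change set crosses servers on accept
ws.change_sets.append("cs-102: add timeout test")
ws.deliver()                       # and again on deliver
print(junit.change_sets)
```

The point of the sketch is that nothing changes from the user’s perspective: `accept` and `deliver` behave the same whether the flow target is local or remote.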

As explained in Does Rational Team Concert support MultiSite?, allowing teams on different servers to collaborate by flowing change sets between streams complements our existing WAN support and provides many deployment options for your teams. Here are the most common reasons for wanting to use distributed source control:

  • You require component-based development across your enterprise without having everyone on the same server.
  • You don’t have a great network connection between major sites and would prefer to have sub-teams on their own servers. Once in a while you bring the source together for a build or baseline, or so that others can consume the components in other streams.
  • You want to outsource development of part of your software and would prefer to have them on their own server for security reasons. At regular intervals you can accept changes from their server without requiring the outsourced team to have permission on your server. You can keep history and not lose valuable historical data about the component being developed.

Now you’re probably wondering, “can I use this new support to work disconnected on my laptop?”. The answer is yes and no. While the main intent and supported configuration is for distributed teams, it’s technically possible to run a server on your laptop, replicate a component or two, work disconnected, and then deliver your changes when you reconnect. However, it’s not a scenario that we support.

Turning it on

Allowing source code to replicate between servers isn’t something that you would want to happen by accident. In the open source world it may be OK, but most enterprise customers I know would cringe at the thought of having their source code accidentally replicated to another server. For that reason, the server must be explicitly configured to allow distributed source control and then each project has a permission that controls which roles can replicate.

First, in the advanced properties page of the administrative Web UI, search for the Distributed SCM option and set it to “true”. This has to be enabled on both the source and target servers.

Then, for each project, you can control which roles can flow change sets across repositories. This role-based control over who can replicate change sets can be combined with role-based deliver permissions on specific streams, and you can additionally configure component-level write access control.

Finally, using distributed source control requires a user to have an appropriate license for, and be authorized on, both servers.

Linking to work items

Change sets without linked work items aren’t that useful in a fully integrated ALM scenario, so we’ve made it possible to link change sets to work items on other servers. In fact, you can link change sets to work items on any server that supports the OSLC Change Management specification. This allows teams to have a central work item repository but distributed source code. The inverse is also supported: a team that is mostly isolated on its own server can link to its own work items, and those links are preserved when the team delivers to another server.
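To give a flavor of what a cross-server link looks like, here is a minimal sketch of an OSLC-style link from a change set to a work item on another server. The property name is modeled on the OSLC Change Management vocabulary, and the URIs are hypothetical; this is not RTC’s exact payload.

```python
import json

# Sketch of an OSLC-style cross-server link. The property name follows the
# OSLC Change Management vocabulary; the URIs below are hypothetical examples.

def link_change_set_to_work_item(change_set_uri, work_item_uri):
    """Build a minimal RDF/JSON-style representation of a cross-server link."""
    return {
        "rdf:about": change_set_uri,
        "oslc_cm:relatedChangeRequest": {"rdf:resource": work_item_uri},
    }

link = link_change_set_to_work_item(
    "https://server2.example.com/ccm/changeset/1234",       # hypothetical URI
    "https://server1.example.com/ccm/workitem/42",          # hypothetical URI
)
print(json.dumps(link, indent=2))
```

Because the link is just a URI reference, neither server needs a replicated copy of the other’s artifact, which is exactly what makes central work items with distributed source practical.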

There is quite a bit of technology behind allowing linking and delegated user interfaces from other services. What you see in Team Concert 3.0 is the realization of the idea that distributed tools, not just for source control but across the lifecycle, should work across servers. It’s not always appropriate to replicate data to every server, so allowing linking and traceability between servers is just as important.

For the nerds, the mechanism for linking artifacts in other repositories is called delegated UIs and is defined in the core OSLC specification. You’ll be seeing a lot more of this in all our applications, such as modeling, requirements, and testing.

Users and sizing

When changes are replicated, the users that created the changes have to move as well; we can’t lose the audit trail. However, just because a user’s audit trail is maintained, we don’t automatically allow that user to log in to the other server. The simplest approach for managing users on two collaborating servers is to share a directory service, which ensures that user IDs and credentials are the same everywhere. If this isn’t possible, you can replicate changes to a server and then use the SCM command line to assign user IDs to the users in the audit trail (see the “scm users set” command). This lets a user keep the same identity for audit purposes while using different credentials on the different servers.

And finally, how much time does all this take? Once a stream or component is replicated, everything is incremental, so your day-to-day accept, deliver, and baseline operations should be as fast as usual. The actual setup and initial replication take a bit more planning for an integration stream with a large code base. In our own experience replicating our development stream, with four years of history, 120K files and folders, and a current configuration footprint of roughly 1.2GB on disk, the setup, configuration, and replication took around 8 hours. We hope to write up best practices in the future with more concrete guidance on how to succeed with large code bases. Every setup is different, so your timings may vary.

It’s not easy being green

Every year, z/OS systems are responsible for transporting tens of thousands of shipping containers, caring for millions of patients, processing the majority of point-of-sale transactions, and connecting hundreds of millions of mobile phone users. We want to bring Rational Team Concert to all those z/OS, COBOL, and PL/1 developers, so we have added support for an ISPF client. For those too young to remember what a “green screen” looks like, I’ve included a screenshot below of Team Concert in action:

We’ve made significant changes to the source control core to support the native MVS file system on z/OS. This enables the ISPF client, along with build and deployment support on z/OS. There is a ton of new functionality on the z/OS and Power fronts in Team Concert this year that I’ll cover in a later blog post.

Context-Aware Search

The Rational Team Concert server has heaps of interesting data about your project. In our research labs we’ve been working for some time on how to make that information more useful in day-to-day work. The first outcome of this work is a feature that allows you to index and search source code in combination with work items to find relevant files for a given task. Task relevance is based on searching with keywords drawn from the contents of a work item, hence the name context-aware search. But you can also search source files by keyword without using a work item. This is extremely useful for impact analysis, reducing code duplication, and understanding how changes you make in your source code can impact other teams. In this upcoming release the feature is available from the Team Concert client for the Eclipse IDE.
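The core idea of scoring files by keywords from a work item can be illustrated with a short sketch. This is just the concept in miniature, assuming a trivial term-overlap score; it is not the search engine RTC actually ships.

```python
# Minimal sketch of the idea behind context-aware search: score source files
# by how often keywords from a work item's text appear in them. This is an
# illustration of the concept only, not RTC's actual indexing engine.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "in", "to", "of", "and", "is", "when", "for"}

def keywords(text):
    """Extract lowercase alphabetic terms, dropping short words and stopwords."""
    words = re.findall(r"[a-zA-Z]{3,}", text.lower())
    return [w for w in words if w not in STOPWORDS]

def rank_files(work_item_text, files):
    """Return file names ordered by keyword overlap with the work item."""
    terms = set(keywords(work_item_text))
    scores = {name: sum(Counter(keywords(body))[t] for t in terms)
              for name, body in files.items()}
    return sorted(scores, key=scores.get, reverse=True)

files = {
    "TestRunner.java": "public class TestRunner { void run(Timeout timeout) {} }",
    "Assert.java": "public class Assert { static void assertEquals() {} }",
}
print(rank_files("Timeout not honored when running a test", files))
```

A real engine would add stemming, code-aware tokenization (class names, methods, comments), and an inverted index, but the relevance intuition is the same.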

To offload the indexing work from the server, getting started with context-aware search requires you to first install and configure a search engine on a separate machine, then configure the scope and context of what should be indexed. This is very similar to how build engines are run and configured. In general, you would index your integration stream and possibly the main maintenance streams. In the example below we are indexing everything in the JUnit stream.

When searching you have the option of a text search that looks at source only or the quick search that you can configure to search work items, source, and specific elements of the source such as class names, methods, or comments. You can also configure the components to search to narrow or broaden the search scope.

The results of the search come in two forms. The regular source search returns a standard search result and lets you open the remote files to see the search hits.

When configured to also search work items, the results view shows the hits grouped by artifact type.

Web UI

We are also continuously improving the Web interface, and in this version we’ve added support for comparing snapshots and made it easier to navigate and search for items in the repository.

I’ve noticed that many people haven’t discovered that you can search for change sets. This feature exists in the RTC client for the Eclipse IDE and the RTC client for the Microsoft Visual Studio IDE, and now it’s in the Web client. One of the ways I use it is to find a change set that deleted a particular file, or a change set I created last week when I’ve forgotten which streams I flowed it to. You can also extrapolate how we could (in the future) mix context-aware search with change set searching … no promises, but it could be really cool!

Conclusion

There is one rather important omission from this post, and it was on purpose: all the source control enhancements to the RTC client for the Microsoft Visual Studio IDE will get their own blog post next week. Enjoy the new features, and look for more articles covering them in a lot more detail.

Jean-Michel Lemieux
Chief Architect for Rational Team Concert

* Our lawyers would like me to remind you that these are not finalized plans or commitments…just work in progress and plans are subject to change without notice. See the Terms of Use.