I joined the Jazz project in its very early days, roughly two and a half years ago. At that time, the Raleigh team was just finishing a short-lived experiment in building an extensible platform on J2EE. The experiment demonstrated that the J2EE classloader hierarchy was not the right technology for an extensible server platform: the isolation of the classloaders for each WAR file works against extensibility. The primary alternative was to package all components in the same WAR file, but this mega-WAR option was a deployment nightmare, not to mention that it exposed the implementation of components to one another. What was really needed was a viable server-side component story.
Fortunately, around this time the Equinox project was experimenting with the start of what would become known as the OSGi Bridge Servlet, which allows an OSGi runtime to execute within a servlet container. The bridge servlet became a fundamental building block of the Jazz server design. The details of the bridge servlet work and the subsequent services architecture will make great fodder for a future post. For the purposes of this little history lesson, the important part is that the bridge servlet solved our extensibility problem by providing the bundle- and extension-point-based programming model of OSGi and Eclipse rather than relying exclusively on the J2EE programming model. Once this model was in place, we moved quickly into self-hosting on the components: first on the Team Build System, then on Work Items, and later on Source Control and the other components.
Once we started self-hosting, a couple of interesting data points showed up in our server stats. The most interesting had to do with the cacheability of collaborative data. For performance reasons, the Jazz server has two write-through in-memory caches: one tracks the current state of an item, while the other caches a complete state of an item. We started observing current-cache hit rates around 90% and state-cache hit rates around 85%. This is substantially higher than can be expected for standard web data, whose hit rates, according to the literature, hover around 50%. This got us thinking that if we could expose our data using GET operations with ETag and Last-Modified headers, we could leverage proxy servers to achieve tremendous performance gains. We also observed that since the developers at a given location tend to work on the same component(s), they could achieve even higher cache hit rates. This got us interested in exploring a move to a RESTful server architecture. However, cache hit rates weren’t the only thing that pushed us towards a more RESTful server design.
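The proxy-caching idea above rests on HTTP validation: a client or intermediary revalidates a cached representation by sending If-None-Match, and the server can answer 304 Not Modified without resending the body. Here is a minimal sketch of that validation check — hypothetical illustration code, not the Jazz implementation, and the `item-1042-state-7` tag format is invented for the example:

```java
// Hypothetical sketch of conditional-GET validation with ETags.
// A 304 response lets a proxy serve its cached copy of the item.
public class EtagCheck {

    // Returns true when the client's If-None-Match header matches the
    // resource's current ETag, i.e. the server may answer 304 Not Modified.
    public static boolean notModified(String ifNoneMatch, String currentEtag) {
        if (ifNoneMatch == null) {
            return false; // no validator sent; serve the full response
        }
        if ("*".equals(ifNoneMatch)) {
            return true; // wildcard matches any existing representation
        }
        // If-None-Match may carry a comma-separated list of entity tags.
        for (String candidate : ifNoneMatch.split(",")) {
            if (candidate.trim().equals(currentEtag)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Example: tag an item's representation with its state id.
        String etag = "\"item-1042-state-7\"";
        System.out.println(notModified("\"item-1042-state-7\"", etag)); // true
        System.out.println(notModified("\"item-1042-state-6\"", etag)); // false
        System.out.println(notModified(null, etag));                    // false
    }
}
```

Since a given item state is immutable, something like its state id would make a natural strong ETag: any change to the item produces a new state, and therefore a new tag.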
The server interface for the Jazz Team Server that ships with the Rational Team Concert Express Betas is primarily a remote procedure call (RPC), marshaled-object system. One of the key problems with this type of system is that it hides the user data in custom formats behind programmer-defined services. Hiding user data is not a desirable feature for an architecture on which you want to build a new platform for all Application Lifecycle Management tools. Rational started an effort to create a set of architectural principles, a guiding philosophy that recognizes the value of dealing with user data in an open way. The principles were validated against several experimental servers, one written in Python. An embodiment of the principles envisions a RESTful platform built with Atom and HTTP, with published resource formats that do not lock in user data. Combined with our caching statistics, the architecture principles provided a strong impetus for the Jazz Platform development team to investigate building a RESTful platform in anger.
As with any design change, our experiments with REST started small. The Team Concert Betas contain code from these first experiments: several interesting pieces that take baby steps from the existing implementation towards a RESTful approach. However, it is very difficult to move piecemeal from an RPC-based system with complex marshaling of objects to a resource-centric approach. This realization led to the establishment of the Jazz REST Services (JRS) incubator, whose goal is to develop a RESTful platform for the Jazz Team Server that embodies the architecture principles. I’ll be posting more about JRS in the near future. If you can’t wait and want to learn more now, check out my teammate Simon Johnston’s developerWorks blog, where he has provided a nice overview of JRS and its capabilities.
Jazz REST Services Team