Gone are the days when development teams built monolithic applications entirely from scratch. Reusing open source components, frameworks, and services is now commonplace, and a single application is often composed of packages delivered by different teams (in-house or sub-contractor) that follow diverse development processes and iteration schedules. The deployment of such an application has to take into account which package level to use for a particular deployment.
It is hard to track or make sense of which components are used in a particular team's build, or which package levels should be deployed across the various deployment environments (development, test, and production). Source control systems are not designed to manage and share these packages, which are typically binary: they are not geared to manage the version explosion of delivered packages, nor do they simplify usage tracking for these packages across diverse consumer projects and deployment environments. Software libraries, sometimes referred to as repositories, are better suited to brokering the consumption of these development deliveries. They provide facilities to search for and link to the right package (and its dependencies), and provide the means to collaborate around these packages with notifications, discussion, and optional links for engaging the package development team through a change request.
During development, developers typically deploy their application to local or workgroup-level environments. These environments are often quite different from the production environments the applications will eventually run in. Applications usually pass through a sequence of deployment environments on the way to production (development, test, acceptance, and production). Deploying these applications into each of these environments requires special handling, because each environment is often governed by different compliance rules and depends on specific components, configuration, and topology. These artifacts are not part of the bill of materials (BOM) delivered by the development teams, which results in a slow, manual progression into production.
“DevOps” is an emerging set of principles that responds to the realization that development teams and operations teams, and their processes, are disconnected. The essence of the DevOps approach is as follows:
• Build a symbiotic relationship between development and operations
• Think about applications, not projects
• Use automation in place of documentation (and more automation)
• Create a self-service infrastructure for teams
• Decrease the skill level required to deploy platforms
• Reduce maintenance costs by keeping systems “young”
• Deploy more frequently
• Replace instead of update
The idea is that the combination of frequent and automated deployments will make these activities as reliable as the “recompile” process is today. Frequent from-scratch deployments ensure that what is deployed, and how it is configured, is well understood, re-creatable, and can be managed just as software change is managed today. This approach is a significant departure from what is customary today, where a manual process for migrations, updates, or patches is delegated across various roles. It may take some time for many applications and environments to adapt to more standardized configurations and deployment processes, but there are many applications and environments today where the DevOps theme can bring about huge efficiencies in reducing the time it takes to deliver application changes.
A formal software library is the basis from which one can drive re-creatable automation across the various deployment environments. A library can help keep these environments deployed with the same configuration/automation artifacts (runtime as well as data). Libraries can also help with tracking of the BOM that was used across the various deployment automation activities.
There are many artifact-specific repositories out there (OBR for OSGi bundles, the Yellowdog Updater, Modified (YUM) for RPM packages, Eclipse p2 for Eclipse plug-ins, etc.) that are used by various automation/install processes. Rational Asset Manager 7.5.1 (RAM) is an enterprise library that can catalog and manage any type of application package. RAM is well integrated with development tools and automation frameworks. RAM is already being used as a definitive media library (DML) for approved deliverables, but it can also be used for auditing and managing automation of BOM usage across various deployment environments. This is key because it facilitates consistency and reuse of proven artifacts (packages, scripts, configurations, patterns, etc.).
Doing more of the same increases the probability of success. Driving reuse with a library is the basis for that success.
Dr. Gili Mendel
Senior Technical Staff Member, Rational Asset Manager
Hi Gili,
I’d like to challenge what you said about artifact-specific repositories. Java-centric application projects, for instance, most often use an infrastructure of Apache Maven [1] (or Apache Ivy [2] as a surrogate), Sonatype Tycho [3], and Sonatype Pro for Nexus [4] as the artifact repository.
How easy would it be to replace Sonatype Pro for Nexus with Rational Asset Manager without having to write glue code (i.e., Maven mojos that internally call the RAM-provided Ant tasks)?
If such a transition from Sonatype Pro for Nexus to RAM is possible, should I understand your statement above to mean that it would then be possible to implement release processes such as those found at [5] and [6]?
Suppose it were necessary to write glue code to integrate RAM with a Maven+Tycho infrastructure in order to replace Sonatype Pro for Nexus: what is your experience in this regard? How well would such glue code, provided by development, be accepted by IT operations departments? In other words, how well are IT operations departments in enterprises such as IBM able to embrace the DevOps paradigm you described above nowadays?
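To make the question concrete, here is a rough sketch of the kind of glue code I have in mind: a Maven mojo that simply delegates to an Ant build file wrapping the RAM-provided Ant tasks, so that a RAM publish can be bound to a Maven lifecycle phase. The package name, build file path, and target name below are hypothetical; this only illustrates the shape of such an integration, not a working RAM client.

package example.ramglue; // hypothetical package name, for illustration only

import java.io.File;

import org.apache.maven.plugin.AbstractMojo;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Project;
import org.apache.tools.ant.ProjectHelper;

/**
 * Sketch of a mojo that delegates to an Ant build file declaring the
 * RAM-provided Ant tasks, so that publishing to RAM can be bound to a
 * Maven lifecycle phase.
 *
 * @goal ram-publish
 * @phase deploy
 */
public class RamPublishMojo extends AbstractMojo {

    /**
     * Ant build file wrapping the RAM tasks (path is an assumption).
     *
     * @parameter expression="${ram.antBuildFile}" default-value="${basedir}/ram-publish.xml"
     */
    private File antBuildFile;

    /**
     * Target to execute inside that build file (name is an assumption).
     *
     * @parameter expression="${ram.antTarget}" default-value="publish-to-ram"
     */
    private String antTarget;

    public void execute() throws MojoExecutionException {
        Project antProject = new Project();
        antProject.init();
        try {
            // Parse the wrapper build.xml and run the target that invokes the
            // RAM Ant tasks; Maven properties could be handed over the same
            // way via antProject.setProperty(...).
            ProjectHelper.configureProject(antProject, antBuildFile);
            antProject.executeTarget(antTarget);
        } catch (BuildException e) {
            throw new MojoExecutionException("Publishing to RAM via Ant failed", e);
        }
    }
}

My question is essentially how much of this kind of wrapper would be needed, and whether IT operations would be comfortable owning and running it.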
Many thanks and best wishes!
Regards,
Daniel S. Haischt
PS: We can also have this discussion offline via my IBM email account if you like.
[Resource links stripped because they were flagged as SPAM]
Hey Dan,
A challenge is a good thing; we should continue to discuss this in the open. If it gets more detailed, we can move it to the RAM newsgroup (https://jazz.net/forums/viewforum.php?f=12&sid=1d6c3fc2d4925a37e7b59b31982f41b4), where we can engage a more RAM-specific audience.
In regard to RAM vs. Maven (et al.), I was not suggesting that RAM can or should replace Maven. Though it is true that the implementation of RAM as a “DevOps” library has been based mostly on Apache Ant, I am not advocating or taking sides in the Apache Ant vs. Apache Maven religious discussions (http://ant.apache.org/ivy/m2comparison.html). If Maven is able to drive the specifics of project builds across an enterprise, then the consistency and declarative nature of Maven is a great thing. RAM is not a build tool and hence will never replace a Maven environment. What I was saying is that the aspects of a formal library are key for DevOps in a similar manner that they are for development. The Maven framework includes a repository to reference dependencies on common (mostly open) components. RAM's focus is not just on the repository aspect. RAM's value-add is in the way it integrates with ALM tools to enable a simple way to propose/approve usage of components (just because a component or a component level is available does not mean that it should be used), and in the ability to link these components to the right level of documentation, designs, and collaboration facilities (forums, change management, test plans/results). RAM also allows one to understand which packages (.war, RPM, OBA) are using which fine-grained components, like a .jar.
Take a look at the movie that is going to be published at the end of next week on RAM as an enterprise library (it should be here when RAM 7.5.1 is made available: https://jazz.net/projects/rational-asset-manager/).
Though RAM does not have a Maven plugin (to serve as an internal repository), this is something that has been asked for by a few customers (tracked here: https://jazz.net/jazz02/resource/itemName/com.ibm.team.workitem.WorkItem/24839).
Hi all: I wrote a Maven plugin for RAM last year. Search for “RAM Maven Dependency Plugin” in IRAM (IBM Internal only).
Hi Gili, Hi hcushing,
Great to hear that there is a Maven plug-in for RAM; I will have a look. Concerning RAM+Maven or even RAM+Ivy integration, I am looking (besides a plug-in) for some articles or best practices describing what a system setup could look like that makes use of both RAM and a dependency manager such as Maven. For instance, should Maven publish releases to its own repository manager, and should such a binary release additionally be published to RAM so that an appropriate approval workflow for that binary could be triggered? Or, as an alternative, would it make sense to just publish links to RAM, each pointing to a published binary artifact located in a Maven repository manager (e.g. Nexus)?
Cheers
Daniel