What about Jazz Project Move capability?


Daniel Toczala (87211414) | asked Jun 03 '12, 2:49 p.m.
FORUM MODERATOR / JAZZ DEVELOPER
edited Jun 14 '12, 4:11 p.m. by Rob Retchless (31125)
This is not really a question - it is a collection of use cases and observations that were captured during the VoiCE sessions held on June 3, 2012 at Innovate 2012.  I hope that we can have a running conversation on the use cases and concerns that are critical for any useful implementation of a Project Move capability.  If you have other use cases for Project Move, or other observations, please comment on this forum thread and share your ideas.

Observations
  • Project Move may be implemented in stages, with some limited or tool-specific capabilities provided at first, and the solution expanded over time to become more robust and cover more of the following use cases
  • Initially, Project Move may not address some of the following areas:
          1. Moving history data - work item history, SCM object history, test execution history, requirements change history
          2. User authentication - does user identity need to be preserved, or should we ignore it?
          3. Attachments - can attachments be carried over along with their work items?
          4. Links - links between objects should not be broken, but how important is it to update links to other integrated tools?
          5. Reporting - do the reports and the data warehouse data move as well? Do I need to recreate them in the "new" project?
  • Project Move has layers of complexity:
          1. If I move a project from one area to another, do my metrics move as well? Do I just begin collecting new metrics at the new location, or do I move all of the data in the data warehouse to the new project name/location, so it looks as if the project has always been there?
          2. If I move a project from one place to another, what do I do with the "old" project? Should I delete it? Archive it? Leave it as is (effectively cloning the data)?
          3. If I move to another JTS, how do I move the users (who may not be part of the same LDAP domain at the new location), their licenses, and their permissions?
          4. If I move the project, do I also move the project template, the work item types, the categories, the category associations, the teams, the timelines, the dashboards, and the SCM content? Do I just move a subset of these things? Do I need to give administrators the ability to choose which of these they would like moved?
          5. What do we do with the links? If I have links to other non-Jazz tools, do I need to go out and update those links as well? Do I need to translate requests to the old project name/location, and reroute them for some period of time?
  • Will Project Move fail if I have "bad" data in my repository? (bad character set, unprintable characters, bad references, etc.)

Potential Use Cases
  • Move of a Jazz Project from one environment to another, so that the project uses a different JTS. (example: move an RQM project from one JTS to a second JTS which is supporting the project development using RTC)
  • Move of a Jazz Project from one environment to a more/less secure environment
  • Bulk move of a series of work items from one project to another project, or to a completely new project
  • Move of a series of projects from one Jazz application server to another, in order to address performance concerns on a currently saturated system or repository
  • I want to do a Project Move to move a project to an "archive" environment, and delete it from my production environment.


Comments
sam detweiler commented May 22 '14, 6:10 p.m. | edited May 22 '14, 6:12 p.m.

On the engagement I ended last month, we had to do a project move from one RTC server (3.0.1) to another (4.0.4). We wrote two utilities to make this possible:

1. Move all the project configuration: timelines & iterations, users, teams (& roles), categories, plans, source, & builds.

x. Use the built-in work item export/import to move the data and fix work item differences (going from a non-shared to a shared process template and different work item types).

2. Copy & fix up all the links, comments, descriptions, attachments, approvals.

We did not resolve history, fix up source change set to work item links, or handle reports/user queries.

We also had not completed migration of any custom plan views (we ran out of time, and there weren't many on the source system).

We combined 3 source projects into a single resultant project.
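The "fix work item differences" step can be pictured as a small transform over each exported item. This is only a sketch: the type names, attribute names, and the shape of the exported record below are invented examples, not RTC's actual identifiers or export format.

```python
# Sketch: map work items exported from a source project onto a target
# project's shared process template. All type and field names here are
# hypothetical examples, not RTC's actual identifiers.

TYPE_MAP = {"defect": "defect", "enhancement": "story", "task": "task"}

def transform(item: dict) -> dict:
    """Rewrite one exported work item for import into the target project."""
    out = dict(item)
    # Map source work item types onto the shared template's types;
    # unknown types fall back to a generic "task".
    out["type"] = TYPE_MAP.get(item.get("type"), "task")
    # Example of a renamed custom attribute in the target template:
    if "Found In" in out:
        out["Found In Release"] = out.pop("Found In")
    return out
```

In practice each such mapping decision (type fallbacks, renamed attributes, dropped fields) needs sign-off from the receiving team before the bulk import runs.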


sam detweiler commented May 22 '14, 6:15 p.m.

I believe they intended to use this process to handle onboarding teams from other tools.

build project in sandbox, migrate data, customize/resolve issues, get signoff
then replicate project to production system.
We did this in test a few times to validate that it worked.

5 answers



Markus Schille (7611) | answered Jun 06 '12, 2:53 a.m.
A very good and valid point. May I ask whether this will support an enterprise scope?

But first I would like to point out that - from my point of view - two things have been put together which should be handled separately: on the one hand the project move, and on the other hand bulk operations on work items. The latter is something that is needed even without a project move capability.

But getting back to the project move. My initial question would be: "What are the needs of an enterprise dimension?" Is moving data from one physical installation to another the goal? What happens if a project area is moved from one environment to another? In the first place, all artifacts will get new work item IDs. Because of this, all information that is to some extent tied to the ID has to be changed. Every place where, for example, the string "Task 101298" has been used, it has to be replaced by the new ID, and with it a new URL has to be inserted: in any description, discussion, history trail or other possible location. Let's assume that it was possible to clean all these references up in the sending and the receiving environment. Be aware that this has to be done on both sides.

Now we have all the references related to our example work item "Task 101298", which have to be corrected as well. So not only do the work item references have to be correct, the references in all the impacted work items have to be corrected too. Now assume that we also end up with cross-server relationships. There seem to be a lot of places where something can go wrong.
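The clean-up described above can be sketched as a pure text rewrite driven by an old-ID-to-new-ID map. Everything below is illustrative: the ID map, the server roots, and the URL shape are assumptions, not a real RTC deployment's values.

```python
import re

# Hypothetical old -> new work item ID map produced by the import step.
ID_MAP = {101298: 200417, 101299: 200418}

# Hypothetical old/new server roots for rewriting embedded links;
# real deployments have their own URL shapes.
OLD_ROOT = "https://old-server/ccm/workitem/"
NEW_ROOT = "https://new-server/ccm/workitem/"

def rewrite(text: str) -> str:
    """Rewrite 'Task NNN'-style mentions and embedded work item URLs."""
    def sub_mention(m: re.Match) -> str:
        old = int(m.group(2))
        # IDs not in the map (e.g. items that did not move) are left alone.
        return f"{m.group(1)} {ID_MAP.get(old, old)}"
    text = re.sub(r"\b(Task|Defect|Story)\s+(\d+)", sub_mention, text)

    def sub_url(m: re.Match) -> str:
        old = int(m.group(1))
        return f"{NEW_ROOT}{ID_MAP.get(old, old)}"
    return re.sub(re.escape(OLD_ROOT) + r"(\d+)", sub_url, text)
```

This pass has to run over every description, comment, and history entry on both sides, which is exactly why the number of "places where something can go wrong" grows so quickly.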

After reflecting on this with our engineering team, we came up with a totally different proposal at last year's Innovate. The main requirements for such a solution were very simple:
  • support a physically distributed cluster
  • use a common name space
  • ability to move the project content dynamically within the physical cluster
Taking those three simple requirements into consideration, the buzzword "cloud" was on the table.

Having now reflected on all this, the assumption would be that a simple project move will never support the complex scenarios that arise within an enterprise solution which has to support a worldwide distributed organization. Taking the cloud cluster into consideration, aspects like
  • fail over
  • load balancing
  • disaster recovery
can be seen as functionality integrated in such a solution.

I hope I was able to express our view and contribute in some way to your post.

sam detweiler (12.4k6171200) | answered Aug 15 '12, 4:55 p.m.
Another use case, as we have multiple repositories, is to co-locate teams' projects on the same repository so that they can effectively share data (links between work items), which can't be done with the current product function.

Comments
Markus Schille commented Aug 16 '12, 4:44 a.m.

I am absolutely with you on this. Hence my proposal to have a cluster that uses the same name space, which would be a prerequisite, combined with project move. A little bit like a combination of distributed infrastructure with the way virtual guests can be moved around on their physical hosts. Here the virtual guest would be the project area, and the physical host would be part of the distributed cluster.


sam detweiler commented Aug 16 '12, 8:06 a.m. | edited Aug 16 '12, 8:07 a.m.

Ok, now I get it.. but.. how does this help me in the current situation? For new installations I could define the data model differently to add some virtual namespace label. I suppose you'd define a new URL model with the namespace id, and all URLs without it would get special handling (a redirect).

But you could probably avoid all that URL naming impact with an internal location-mapping indirection service.
Basically, you register a project (in a central catalog) on a repository. The locator service always looks projects up there, and the URL points only to the catalog. That way, all repositories connected to the same JTS namespace share a single registry of projects. You could initially build that as part of version migration and maintain it going forward with minimal code change. This would eliminate the exposed-URL impact, and you wouldn't have to fix the links except where the number changed.
I always thought the work item number should have been a handle, to prevent this impact too.

This amount of data is pretty small (1,000 project names and <~100 server names), so it could be maintained in multiple server instance caches (to eliminate cross-system API requests to search).
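The indirection Sam describes can be sketched as two small pieces: a central catalog and a read-mostly cache in front of it. This is a toy model with invented names, not an RTC API; a real service would add authentication, persistence, and cache invalidation on move notifications.

```python
class ProjectCatalog:
    """Central registry: project name -> repository URL (toy model)."""
    def __init__(self) -> None:
        self._entries: dict[str, str] = {}

    def register(self, project: str, repo_url: str) -> None:
        self._entries[project] = repo_url

    def locate(self, project: str) -> str:
        return self._entries[project]

    def move(self, project: str, new_repo_url: str) -> None:
        # A project move becomes a single catalog update; exposed URLs
        # keep pointing at the catalog, not at any one repository.
        self._entries[project] = new_repo_url


class CachingLocator:
    """Per-server cache in front of the catalog (access is ~99% read)."""
    def __init__(self, catalog: ProjectCatalog) -> None:
        self._catalog = catalog
        self._cache: dict[str, str] = {}

    def locate(self, project: str) -> str:
        if project not in self._cache:
            self._cache[project] = self._catalog.locate(project)
        return self._cache[project]

    def invalidate(self, project: str) -> None:
        self._cache.pop(project, None)
```

With ~1,000 projects and under ~100 servers, the whole table fits trivially in each server's cache, which matches the "noise-level data" point above.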


Anders Truelsen (1621820) | answered Jun 27 '13, 1:36 p.m.
Rather than trying to solve all the problems in the world, I'd suggest looking at which set of features would make a minimal viable solution.

We would be happy to see a project move that would 
  • handle current work item data, no history
  • maintain links between CLM artifacts, nothing outside
  • require all users to be in the same LDAP
  • use the same process for source and destination projects
  • replicate timeline, releases etc. as required
  • ignore the data warehouse

Bruno Di Giandomenico (50814) | answered Oct 30 '13, 6:33 a.m.
I find the scenarios you have described to be valid ones.
I could add another one, but it probably falls under one of the headings you already listed:
security problems, or a security update to a project, could compel you to move a project off an existing server to a different one, of course preserving the data and history.
It could probably be just a variation on item #1 of your list.

The question is: is anything moving on this subject, given that it comes up regularly? Any update we could find anywhere?
Regards

Comments
Ralph Schoon commented Oct 30 '13, 6:44 a.m.
FORUM ADMINISTRATOR / FORUM MODERATOR / JAZZ DEVELOPER

Hi Bruno, I think there is nothing consumable as of now.

The issue is probably that there is a huge amount of data that is part of a project. However, as mentioned, I think a lot of users would like to have a project move capability.

Some might want it across CLM applications - which again shows how complex this is. Do you want to move only an RTC project, or do you want to move a lifecycle project including the other applications?

I think all of this probably contributes to the slow progress.


Bruno Di Giandomenico commented Oct 30 '13, 6:57 a.m.

Thanks for the prompt answer.

My suggestion would be to pragmatically start with something, just to gauge the difficulties. I am aware of the things you would have to consider (links, users, roles, permissions), and I am aware that there is no easy answer.
Anders' requirements would be a start, even if losing history is quite bad. But it would be a start.


Ralph Schoon (55.1k23642) | answered Jun 27 '13, 8:55 a.m.
FORUM ADMINISTRATOR / FORUM MODERATOR / JAZZ DEVELOPER
This comes up on a regular basis and I can see these major reasons for it:
  1. One of the servers was set up in a now undesirable way (e.g. with its own JTS) and users would now like to move their project to a new consolidated server
  2. Export and re-import projects to create test and staging environments
  3. Export and re-import projects and their data as a deliverable to a customer

There might be more scenarios I didn't recognize. Some have a slightly different background, e.g. template projects for seeding new projects, which I would not place in the project move domain.


Comments
Markus Schille commented Oct 30 '13, 11:23 a.m.

Do you know what the actual status of this discussion/investigation within IBM is?
Is there a functional improvement planned?
Is there, in the mid or long term, an architectural change planned, together with the final vision of the jazz hub scenario?


sam detweiler commented Oct 30 '13, 11:54 a.m. | edited Oct 30 '13, 11:56 a.m.

The last status I saw was 'on the backlog', not scheduled for any release.
https://jazz.net/jazz/web/projects/Jazz%20Foundation#action=com.ibm.team.workitem.viewWorkItem&id=93151

We wrote our own tool using Pentaho, and it does much of the work.
History is the biggest problem: we would like to maintain the dates of the events, not just the sequence. The fact that all the work item numbers change is also a problem.

I proposed a design change (have the JTS service provide the next work item number) that would help with this for all the systems under a single JTS (almost all implementations). But it still won't help for cross-JTS implementations (which should be a very small set).

This is a 25-line code change, with no new logic, only moving the service from CCM to JTS and allowing remote access.
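The proposed change can be pictured as a single shared counter service: every CCM under the JTS asks the same allocator for the next work item number, so numbers never collide and a moved work item could keep its ID. This is purely illustrative, not the actual RTC/JTS code.

```python
import threading

class JtsIdAllocator:
    """Toy model of a JTS-hosted 'next work item number' service."""
    def __init__(self, start: int = 1) -> None:
        self._next = start
        self._lock = threading.Lock()

    def next_id(self) -> int:
        # Serialize allocation so two CCM servers can never be
        # handed the same work item number.
        with self._lock:
            n = self._next
            self._next += 1
            return n

# Two CCM servers sharing one allocator draw from one number space,
# so a project moved between them would not need its IDs rewritten.
allocator = JtsIdAllocator()
ccm_a = [allocator.next_id() for _ in range(3)]
ccm_b = [allocator.next_id() for _ in range(3)]
```

In a real deployment the allocator would hand out blocks of IDs rather than one at a time, to avoid a round trip to the JTS per work item.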


Markus Schille commented Oct 30 '13, 12:11 p.m.

Sounds very simple. But there is more going on behind the scenes. If it were that simple, we would not have been discussing the issue for several years.

To name only some of the issues: common user base, common artifact name space, and securing artifact content within the services and across integrations. I am not sure whether the needed use cases/requirements have already been written, or whether they have all been identified so far.

It would be great to get a vision/strategy on this presented by the IBM CLM architect.


sam detweiler commented Oct 30 '13, 1:50 p.m.

if you use the same JTS, then you have one user base across all CCMs, one security model, ...

I never said this was ALL that had to be done. They would also have to change the UUID calculation model. And I suggested that they go to an indirection model, where the UUID points to a table and the table has the final resolving info. (The UUID algorithm routine would also be remoted to the JTS server, so you could encapsulate all of this exactly like it is (or should be) done today.)
The lookup table size is puny: a million work items times 16 bytes is noise in the grand scheme of today's data, and its access model works great for caching, as it's 99% read.

The design really needs to be a single logical database, with multiple (re-entrant) engines in front for horsepower/scalability: one logical CCM.
You could still keep the current separated-server change request model for separated complexes (unique JTSs) required by security, etc.
If you need that kind of firewall separation, it's unlikely the business would ever change its rules to allow combining (which is where you would be stuck by the current design limitations).
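The indirection table itself is tiny, as the size estimate above suggests: a million 16-byte UUID keys is on the order of 16 MB of raw key data before overhead. A toy sketch (invented names, not RTC internals):

```python
import uuid

# Toy sketch of UUID indirection: a UUID resolves through a table to
# the server currently holding the artifact. Names are invented.
location_table: dict[uuid.UUID, str] = {}  # uuid -> server URL

def register(artifact_uuid: uuid.UUID, server: str) -> None:
    location_table[artifact_uuid] = server

def resolve(artifact_uuid: uuid.UUID) -> str:
    return location_table[artifact_uuid]

def move(artifact_uuid: uuid.UUID, new_server: str) -> None:
    # A project move updates table rows; the UUIDs themselves are
    # stable, so nothing that references them has to be rewritten.
    location_table[artifact_uuid] = new_server

# Back-of-envelope size check: a million 16-byte keys.
raw_key_mb = 1_000_000 * 16 / 1_000_000
print(raw_key_mb, "MB of raw key data")  # prints: 16.0 MB of raw key data
```

Since the table is almost entirely read traffic, replicating it into per-server caches (as with the project catalog above) is the natural design.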


Markus Schille commented Oct 31 '13, 2:35 a.m.

I am fully in line with what you describe. One other specialty is that we have found out over the years that the architecture, and the way things have been implemented in RTC, RQM, RRC and JTS, have not been aligned at all. In some cases even the name is used and not the UUID, which brings other problems to the table. Search functionality is implemented differently in each service and therefore delivers different results, instead of one search routine being reused. There are many examples like this.

At the moment the impression is that the provided solution is based on four architectures which have been glued together. We have been discussing the needed changes for years now, and no progress has been seen so far. The jazz hub is a nice new interface to the same underlying problems.

If there is no change supporting the described needs in a common way, this solution will fail, not because it lacks the functional capabilities, but because it lacks the application operation capabilities needed to operate it in what will most likely be a complex enterprise environment with its limitations.

This brings me back to the question: who is the person with the IBM CLM strategy?


Ralph Schoon commented Oct 31 '13, 3:20 a.m.
FORUM ADMINISTRATOR / FORUM MODERATOR / JAZZ DEVELOPER

Sam, with respect to the work item IDs: you can set the number in the API, provided it is not already taken.


Ralph Schoon commented Oct 31 '13, 3:48 a.m.
FORUM ADMINISTRATOR / FORUM MODERATOR / JAZZ DEVELOPER

I forwarded the question to a developer, who hopefully can point the right persons to it.

Disclaimer: The following is my personal opinion and not necessary the opinion of IBM

I can't really comment on the architectural issues you describe, but I have done my share of development. Please keep in mind that the solution has grown over time; it is an evolutionary approach. You can't know everything upfront, and there are always other constraints and pressures as well. Sometimes you have to make an informed decision and go with it. Quite often you realize later that the solution might not be optimal. Changing/consolidating to something better might not be easy, especially under the constraints of different user interests, and when you start developing new solutions, you try to apply what you have learned in the past.

I think it is to be expected that parts of the solution will continue evolving, including the architecture.


sam detweiler commented Oct 31 '13, 8:20 a.m.

Ralph, re setting the ID: in practice this is rarely possible, as all the servers start with the same base value.

And I agree this is an evolutionary product, and I commend the folks who have done so much already. We do need some additional attention to the practical problems we face, however: batch support (you can't do everything through the UI), including security (certs for user IDs instead of clear passwords); project movement; performance scaling changes; faster equalization of the admin UIs; and additional monitoring and debugging tools (mostly for performance and scalability).

These inhibit customer acceptance and deployment, and burden the development staff with day-to-day work instead of improvements.


Georg Kellner commented Oct 31 '13, 11:16 a.m.

Ralph, of course you can't know everything at the beginning, but I don't think IBM Rational is the first company in the world developing an SCM tool or state machine.

Rational should know some base requirements for such tools, and moving data from one repository to another is such a base requirement, I think.
