
Status of BF builds started in RTC doesn't get updated.

I am launching BF builds via the RTC/BF integration, and after a few successful builds I run into a problem where the build status stops being updated after a job starts. The initial request to start the job succeeds (verified directly in the BF console), but there are no updates afterwards, and the build status in RTC stays at "In Progress" even after the job shows as completed in the BF console. I am using an RTC 4.0 client against an RTC 4.0 server; BF is at version 7.1.3.2. Restarting the RTC server temporarily "fixes" the problem, but after a while the issue comes back.
I asked the RTC admin to take a look at the log on the RTC server, and he found the following entry at approximately the time of the failure; I'm not sure whether it is related.

2012-09-26 15:34:19,544 [jazz: AsynchronousTaskRunner-3] ERROR nternal.service.BuildForgeEventPollerScheduledTask - Error running event poller task.
java.util.ConcurrentModificationException
at java.util.HashMap$AbstractMapIterator.checkConcurrentMod(HashMap.java:122)
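
In case it helps to see what this exception means in isolation: as I understand it, a ConcurrentModificationException is thrown when a HashMap is structurally modified while it is being iterated. Below is only a minimal standalone sketch that reproduces the same class of failure (the internal frame names differ between JVMs); it is not RTC or Build Forge code, just an illustration:

import java.util.HashMap;
import java.util.Map;

// Minimal illustration: removing entries from a HashMap while iterating over it
// (without going through Iterator.remove) throws ConcurrentModificationException.
public class CmeDemo {
    public static void main(String[] args) {
        Map<String, String> buildStatus = new HashMap<String, String>();
        buildStatus.put("build-1", "COMPLETED");
        buildStatus.put("build-2", "IN_PROGRESS");
        buildStatus.put("build-3", "IN_PROGRESS");

        for (String key : buildStatus.keySet()) {
            buildStatus.remove(key);   // structural modification during iteration
        }
    }
}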

I have already looked at similar posts, but the suggested possible causes didn't seem to apply to my case.

Any suggestions as to what might be causing this?



Thanks,

0 votes


Accepted answer

The log is a bit incomplete but it sounds like a known defect: ConcurrentModificationException during BuildForgeDataMaps.reapStaleCache (227986)
Bogdan Adamiak selected this answer as the correct answer

1 vote

Comments

I have reviewed the content of 227986, and the symptoms in general do seem to describe my case: following a reboot of our RTC server, my builds have their statuses updated for about a day, and then things stop working. What I am not clear about is how to get the issue resolved. There was a suggestion in 227986 that this could be caused by sharing a BF userid between different RTC instances, but I verified that is not the case in my scenario. Is it still possible that I am running into the problem described in 227986 without using shared userids? If so, is the only solution to upgrade to a release containing the fix for 227986? We are currently using RTC 4.0; what would be the "closest" release containing the fix?

In case it may be relevant: in our environment we have several build engines defined in two RTC instances (one at 4.0 and the other at 3.0.1.1), both accessing a single BF server (7.1.3.2). We don't share userids between the instances, but we do share userids between build engines within a single instance.

There is not really any workaround for this. The problem is that we can hit an exception during cache reaping which then blocks further event processing. The fix for the defect will be in 4.0.1.
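
To give a sense of the kind of change involved (purely illustrative, not the actual RTC/Build Forge implementation; the map, entry type, and timeout below are made up): the safe pattern is to remove stale entries through the iterator itself, or to use a ConcurrentHashMap, so that reaping cannot invalidate an in-flight traversal and stall the event poller.

import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a cache reaper that removes expired entries without
// triggering ConcurrentModificationException. Names and timeout are hypothetical.
class StaleCacheReaper {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<String, Long>();
    private static final long STALE_AFTER_MS = 60000L;

    void touch(String buildId) {
        lastSeen.put(buildId, System.currentTimeMillis());
    }

    void reapStaleEntries() {
        long now = System.currentTimeMillis();
        // Iterator.remove() keeps the traversal valid even though entries are
        // being removed while we iterate; a ConcurrentHashMap also tolerates
        // concurrent modification from other threads.
        for (Iterator<Map.Entry<String, Long>> it = lastSeen.entrySet().iterator(); it.hasNext(); ) {
            Map.Entry<String, Long> entry = it.next();
            if (now - entry.getValue() > STALE_AFTER_MS) {
                it.remove();
            }
        }
    }
}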

In the release notes for 4.0.0.1, I found the following work item listed as one of the fixes included in the release:

228151: [BACKPORT to 4.0.0.1] ConcurrentModificationException during BuildForgeDataMaps.reapStaleCache (227986)

The 4.0.0.1 release is already available for download, and it may be simpler for my group to upgrade to 4.0.0.1 rather than 4.0.1. Can you verify that the fix is indeed included in 4.0.0.1?

FYI, we recently upgraded our RTC server to 4.0.0.1 and that resolved the issue. Thanks for pointing out the cause of the problem.


One other answer

I would suggest getting a PMR opened for this problem so that someone from the development/support team can look into it in more detail.

0 votes

Question details

Question asked: Sep 26 '12, 7:48 p.m.

Question was seen: 5,122 times

Last updated: Oct 19 '12, 1:33 p.m.
