Reset RationalBuildForgeConnector Build Engine


Thomas Dunnigan (13114126) | asked May 11 '10, 12:32 p.m.
I have noticed that, for some unknown reason, the Rational Build Forge Connector changes to a warning state, preventing us from executing builds. Currently the only workaround I have found is restarting the RTC server, which won't fly in a production environment.

Is there another way to do this that won't upset our entire RTC environment?

We cannot move this to production without a solution or workaround that does not involve restarting RTC.

5 answers



Peter Birk (501145) | answered May 11 '10, 2:27 p.m.
JAZZ DEVELOPER

Hi.

The Connector shouldn't be getting into a warning state to begin with, so we need to figure out why. Can you report the version of the connector you are using (look in the server/buildforgeconnector-update-site/plugins directory and report the plugin names)? Do you see any errors in server/logs/jazz.log? How many build definitions do you have configured, and do they all use the same or different IDs? Do you have multiple BF servers configured, or just one? If one, do you use the same hostname format in all of the build definitions? The answers will help determine whether the problem is known or new.

To really determine what's going on, you can enable the following trace on the JAVA_OPTS line in server.startup.bat: -Dcom.buildforge.rtc.plugin.debug=true. This produces a buildforge_service.log in the same directory as server.startup.bat. Keep the trace running until you see the error. The trace grows unbounded, so keep an eye on it and disable it once you've captured the problem. Enabling or disabling the trace currently requires a server restart, although this will be changing soon.
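As a sketch, the edit to the startup script would look something like the following. Only the -D flag comes from the thread; the rest of your JAVA_OPTS line stays exactly as your install already has it.

```shell
REM server.startup.bat -- append the connector debug flag to the
REM existing JAVA_OPTS line; keep whatever options are already there:
set JAVA_OPTS=%JAVA_OPTS% -Dcom.buildforge.rtc.plugin.debug=true

REM On Linux, the equivalent addition in server.startup would be:
REM JAVA_OPTS="$JAVA_OPTS -Dcom.buildforge.rtc.plugin.debug=true"
```

After a restart, buildforge_service.log appears next to the startup script; remove the flag and restart again once the problem has been captured.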

Regards,
Pete

Thomas Dunnigan (13114126) | answered May 11 '10, 5:33 p.m.

Hi,
Thanks for the rapid response. I went through your suggestions and reread the wiki on setup. It may be premature to say so, but I think this might be a red herring. Still, I want to describe the environment that may have created this situation, along with my observations.

Environment:
Our entire environment runs in VMware (RTC, DB2, Build Forge, etc.)

RTC 2.0.0.2 iFix1 on WAS 7.2 (with Project Conductor), 64-bit RHEL

Build Forge 7.1.1.4 set up in standalone mode, 64-bit RHEL

BF Client Connector 7.1.1.4020139

BF Server Connector 7.1.1.4020139

All BF software is at the latest level. I have since enabled the buildforge_service.log to capture any issues that may arise in the interim.

After my last posting, I went on a fact-finding mission and found a few things that may have caused the warning. I hope you can shed some light on the situation:

1. A CPU spike on the BF server due to a vmware_guest daemon that had to be forcibly killed.
2. We currently have two builds, and they share an environment. A refresh of the environment reset the RTC build user ID of one build to the wrong build user ID.
3. We were also changing the Build Forge user IDs used to execute the builds.

I don't want to discourage help, but it seems we may have had a perfect storm. Any insight into how the above could have affected the situation would be greatly appreciated.


Thanks,

Tom

Peter Birk (501145) | answered May 12 '10, 12:59 p.m.
JAZZ DEVELOPER

Tom,

Make sure that when you change the user ID in the build definition, it is a dedicated user (not used elsewhere). Also, make sure the hostname used with a given user is always specified the same way (i.e., host.domain everywhere, rather than host for userA in one definition and host.domain for userA in another). Otherwise one build definition authenticates as userA, and the other build definition cannot reuse the same connection because the host format doesn't match. To avoid creating too many connections, the connector looks up connections by hostname/port/userid and reuses them. Due to the way BF user sessions work, if a different connection is opened for the same user, the other one is closed. That can turn into a wicked loop of connections knocking each other off.
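The session ping-pong described above can be sketched as a simplified model. This is not the connector's actual code; every class and method name here is invented purely for illustration.

```python
# Simplified model: the connector caches connections keyed by
# (hostname, port, userid), while Build Forge allows only one live
# session per user. All names are illustrative, not the real API.

class BuildForgeModel:
    def __init__(self):
        self.active_session = {}   # userid -> currently valid connection key

    def authenticate(self, key):
        host, port, userid = key
        # Opening a new session for a user invalidates that user's old one.
        self.active_session[userid] = key

    def is_valid(self, key):
        return self.active_session.get(key[2]) == key

class Connector:
    def __init__(self, server):
        self.server = server
        self.cache = {}            # (hostname, port, userid) -> connection

    def connection_for(self, host, port, userid):
        key = (host, port, userid)
        # Re-authenticate only when there is no valid cached session.
        if key not in self.cache or not self.server.is_valid(key):
            self.server.authenticate(key)
            self.cache[key] = key
        return key

bf = BuildForgeModel()
c = Connector(bf)

# Two build definitions, same BF user, same server -- but different
# hostname formats, so they map to two distinct cached connections:
c.connection_for("myserver.austin.ibm.com", 3966, "build")
c.connection_for("myserver", 3966, "build")   # invalidates the first session

# The first definition's next build finds its session dead and re-auths,
# which in turn kills the second definition's session, and so on.
assert not bf.is_valid(("myserver.austin.ibm.com", 3966, "build"))
c.connection_for("myserver.austin.ibm.com", 3966, "build")
assert not bf.is_valid(("myserver", 3966, "build"))
```

Using one hostname format everywhere collapses the two cache entries into one, so the session is shared instead of being fought over.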

I'm not sure this is the cause, but it could be if you have recently messed around with the build definitions' users and/or hosts. If you get more information from the buildforge_service.log, I can use it to confirm the situation above, or find something else going on.

Regards,
Pete

Thomas Dunnigan (13114126) | answered May 24 '10, 1:27 p.m.
Hi Peter,

It seems I have the same issue again with the RationalBuildForgeConnector now in a warning state. Below is the additional information that you requested.

I do not have a log file at this time.

We have 1 Build Forge server connected.
All hostnames are configured as FQDNs that securely connect to Build Forge.

We have 5 build definitions.
Of the 5, two use the same RTC user ID, one is a test integration build with no RTC user ID, and the remaining two have unique RTC user IDs.

In addition, of the 5 build definitions, 2 use the same Build Forge user ID and the remaining 3 are unique.

The following information from the wiki is unclear. Should we have one BF user ID per build definition, or can it be shared? If it can be shared, then the statement "Make sure the userid/password specified in the Build Definition is not used elsewhere." is misleading. I would take it to mean that the user ID should not be used to log in to Build Forge.

Do we need one unique RTC build user per build definition, or can one be shared across build definitions?

I surmise the following:
We can have one Build Forge build user serve the RTC instance and any number of build definitions.
However, that build user should not be used even to log in to the BF console.

In addition, an RTC user can be shared across multiple build definitions without issue.

Can you confirm?


From the wiki:

Make sure the userid/password specified in the Build Definition is not used elsewhere. It must be a dedicated user or there will be session errors during connection and authentication from the server-side connector. This could cause problems obtaining build events.


When using the same Build Forge userid and password in multiple Build Definitions, you must make sure that the Hostname used is in exactly the same format for the same server. For example, if you use hostname "myserver.austin.ibm.com" with BF user build/build in one server definition, then use hostname "myserver" with BF user build/build, there can be session problems as they will be treated as different "connections". This means that the first connection (to myserver.austin.ibm.com) will auth with build/build and then the second connection (to myserver) will also auth with build/build but will invalidate the session from the first connection. When the first connection tries again, it will invalidate the second and so on.



Peter Birk (501145) | answered May 24 '10, 2:02 p.m.
JAZZ DEVELOPER


Correct: you can use the same BF user in multiple build definitions, provided the hostname format is exactly the same across all of them. Once you configure a BF user in a build definition, you cannot use that user to log in to BF from any other source (other than another build definition on the same RTC instance). I'll update the wiki article to make this clearer.

Regarding why you are getting a warning for the engine: this could be many things. The first place to look is jazz.log, for any exceptions that include the Build Forge code in the stack. If you don't see anything there, you'll need to enable the trace I described in my last posting.
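As a quick way to scan for those exceptions, something like the following should surface connector-related stack frames. The log path is illustrative; point it at your server's actual logs directory.

```shell
# Search jazz.log for lines mentioning the Build Forge connector code,
# with two lines of context before each match to catch the exception
# message above the stack frame. Path is illustrative.
grep -i -n -B 2 'com\.buildforge' server/logs/jazz.log
```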

Regards,
Pete
