
Troubleshooting DOORS Next 6.x to 7.x Upgrades

Authors: PaulEllis, IanGreen, VaughnRokosz, MadanKumar, WillChatham, GerardPerrin
Build basis: Engineering Lifecycle Management 7.x, DOORS Next 6.x upgrading to DOORS Next 7.x

This article is to help troubleshoot your DOORS Next 6.x to DOORS Next 7.x upgrade.
This article can also be used to troubleshoot other applications within the Engineering Lifecycle Management (ELM) suite, such as Rhapsody Model Manager, and cases where concurrent upgrades are attempted.

The article is intended to be used in conjunction with Understanding DOORS Next sizings in ELM 6.x to estimate timings when upgrading to DOORS Next 7.x, which discusses the tuning required for your data size. The assumption is that the database administrator (DBA) is a key participant in interpreting the advice.

What to monitor during the upgrade

If you are still planning the upgrade, then statistics gathered beforehand are a very useful way to gauge whether your system is large, medium, or small relative to what IBM and other clients have tested against. Your effective size also depends on your hardware, so use DB Server used for IBM testing to extrapolate your size and likely upgrade experience.

It is imperative that you use a staging area, and that its hardware is comparable to your production server, if you are to ascertain useful timings for your planning.

nmon and other monitoring tools

Many tools are available, including IBM Tivoli Monitoring (ITM), third-party tools such as HelpSystems MPG, and open-source tools such as njmon. However, for monitoring the upgrade, the preferred method on AIX and Linux is nmon. There are some very useful videos for setting up nmon and njmon/nimon. This level of data allows for a more accurate correlation of system state with the time of any issues in the logs.

Windows users can set up Windows Performance Monitor (perfmon) to monitor the upgrade.
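As a hedged example of collecting comparable counters on Windows via logman, the command-line interface to Performance Monitor (the collector name, counter selection, and output path are illustrative, not prescribed by this article):

logman create counter DNUpgrade -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\PhysicalDisk(_Total)\Avg. Disk sec/Transfer" -si 00:05:00 -o C:\PerfLogs\DNUpgrade
logman start DNUpgrade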

Depending on your topology and network, the importance of each of these metrics will change. For example, you may colocate the database with the application server, trading 0 ms network latency for competition over I/O resources. You would then need disk benchmarking statistics to understand the impact.

Other must-gather information

Below is a quick checklist of helpful data to collect if you experience a performance problem during the upgrade on your staging system. These outputs are important both for planning the production upgrade and for comparison in case the production run shows any deviations, which could indicate a problem.


Both servers:

  • System definition for the DOORS Next application server and the DB server - how many vCPUs, how much RAM is allocated, virtualized or physical machine, etc.

Application server:

  • During the test, did the DB server and DOORS Next server get their full allocation of CPU cores and RAM? Are the CPU cores and RAM resources allocated to their VMs dedicated?
    • Monitor machine performance counters of DB Server and DOORS Next server during migration.
    • nmon (Linux), perfmon (Windows) - in 5-minute intervals
  • Basic volumetrics from the DOORS Next repository before the upgrade (SPARQL to count artifacts)
  • Number of versions, number of concepts: SQL provided in Gathering pre-upgrade repository statistics
  • Collect JVM verbose GC logs on repotools-rm (see the sketch after this list)
  • Elapsed time to upgrade (this will be in the DOORS Next logs).
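For the verbose GC logs mentioned above, a minimal sketch of the IBM J9 JVM options involved; exactly where these are added to the repotools invocation varies by version, so treat the placement and log file name as assumptions to verify:

-verbose:gc -Xverbosegclog:repotools-rm-gc.log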

Database server:

  • Measure the size of the DB before and after the upgrade (overall size is only one indicator, and is less relevant than the Gathering pre-upgrade repository statistics)
  • AWR report from DB server - collect hourly
  • During the test, was the DB server only supporting the upgrade, or were there other DB activities, e.g. maintenance or other applications being serviced at that time?
  • During the test, what was the CPU and Memory utilization of the DB server?
  • What is the network latency between the system running repotools and the database server?
Note: When you are ready to contact IBM Support, use the must gather tool ISADC to create a zip package to attach to your new case.

Db2: If you experience high CPU on the database server, run the following commands (db2support collects Db2 diagnostic data and system information, df captures file system utilization, and getsnaps gathers recurring monitoring snapshots) and contact Db2 Support:

> db2support -d <dbname> -s
> df -Ph >> FSutil.txt
> /getsnaps -database RM -global -vmstat -iostat -app -dbm -dyn -utilities -everything -active -stack all -latch -osstack all -period 2880 -max 48


Also note that if you have applied a fix pack to Db2, then you need to run the following command so that db2mon can use all available monitoring metrics (replace Sample with your database name):

For Db2 11.1:
db2updv111 -d Sample

For Db2 11.5:
db2updv115 -d Sample

Planning your ELM upgrade to version 7.0 also contains additional information which may be required when contacting IBM Support. A list of Must gather information is at the bottom of this article.


Settings in advance of starting the upgrade

These settings are not mandatory for an upgrade to complete successfully. However, for the first staging run it is helpful to set the following, so that you can understand more fully what is occurring in the system should an issue arise.

  • Javacores - set javacores to be taken every 15 minutes. If the upgrade appears to have hung or stopped responding, javacores are very useful for determining whether that is the case and what is actually occurring. Set the following in the teamserver.properties file:

com.ibm.rdm.service.migration.jvmDumpPeriodMinutes=15
  • As described above, we advise running the following nmon command during the upgrade (staging and production). Start nmon collection at 5-minute intervals on the DOORS Next servers before running the upgrade scripts.
# nmon -f somefile.txt -s 300 -c 200
Note: the nmon -c option runs the collection for 200 iterations, so 200 x 300 seconds = 16 hrs 40 mins. On very large repositories it may be necessary to extend this, as in the example below.
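For example, to cover a 48-hour window at the same 5-minute interval, raise the iteration count to 576 (48 x 3600 / 300 = 576):

# nmon -f somefile.txt -s 300 -c 576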

Options if you have already started the upgrade

Once the upgrade (the addTables command) has started, it is not possible to add additional logging to the application. However, you can still monitor with nmon or Windows perfmon, collect javacores, and watch the repotools* logs.

How much progress has my upgrade made?

If you are upgrading to a release prior to 7.0.2, you will need to grep the logs manually to find out how far the upgrade has progressed (on Windows, use the Get-Content command). Since DOORS Next 7.0.2 there is a /server/liberty/servers/clm/logs/repotools_rm_phase.log which records the stage that last started or completed. Note: since V7.0.2 SR1 (ifix015) this log is no longer populated; see APAR PH49683: Repotools_rm_phase.log is not populating when upgrading to V7.0.2 SR1.
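As a hedged sketch of the manual check (the log location and the search term are assumptions; adjust both to your installation):

grep -i "phase" /server/liberty/servers/clm/logs/repotools-rm.log | tail -5

On Windows PowerShell, an equivalent would be:

Get-Content <JazzInstallDir>\server\liberty\servers\clm\logs\repotools-rm.log -Tail 2000 | Select-String "phase"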

There are several phases to complete in this upgrade, but the three key phases, where the most significant time will be spent, are:

  • Initialize CM subsystem
  • Phase 1 (core data)
  • Phase 3 (Configuration finalization)

There may be a significant gap after the last entry in this log. To detect whether the upgrade operation is still ongoing, check that there is activity in /server/liberty/servers/clm/logs/repotools_rm.log. This is the main log for Phase 1 (core data) and is intended for troubleshooting in the event of an issue. Once the upgrade enters Phase 3 (Configuration finalization), the /server/repotools-rm_addTables.log is used to record the Foundation checks.

If there is no output in either the repotools_rm.log or the repotools-rm_addTables.log, then check at the operating system level that there is activity for the Java process (application server), as well as on the database server.

Db2 errors experienced during upgrade

Issues related to Db2 have occurred when the guidance in Db2 database tuning and guidance was not followed. Considerations for larger imports are also important, as the DOORS Next 6.x -> 7.x upgrade should be treated in much the same way as a large import of data. Therefore LOGPRIMARY, LOGSECOND, and LOGFILSIZ are all important settings for your DBA to size appropriately.

It is worth remembering that the DOORS Next 6.x data is not deleted, so expect up to a doubling of your database size, and size the database files appropriately for a doubling of RM (DOORS Next) data.

APPLHEAPSZ - SQLCODE=-954, SQLSTATE=57011

To resolve the SQL0954C error, set the application heap size (APPLHEAPSZ) database configuration parameter to AUTOMATIC, which allows it to grow as needed up to the appl_memory limit for the database or the instance_memory limit for the database partition.

db2 connect to <database_name>
db2 update db cfg for <database_name> using applheapsz <new_numeric_value> automatic
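To confirm the change took effect, a hedged example on Linux:

db2 get db cfg for <database_name> | grep -i applheapsz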
For additional details, please refer to the Db2 help.

Transaction Log full - SQLCODE=-964, SQLSTATE=57011

If the transaction log settings are too small, you may encounter the SQL0964C error, as shown below:

2021-09-23 18:04:16,998 CRJAZ1036E The following content could not be saved: StoredContent(_5zf20ByHEeyxvYy_8abVJQ, x-com.ibm.rdm.module.structure/octet-stream, 12448, None, HashCode(0x15aa894f))
com.ibm.team.repository.common.ContentStorageException: com.ibm.team.repository.common.DatabaseResourceExhaustedException: CRJAZ1318E The server could not connect to the database. Try the operation again.
!Db2Provider.VirtualStorageIsNotAvailable!
SQL: insert into REPOSITORY.CONTENT_STORAGE (UUID, CLAIMED, OWNER_ITEM_UUID, OWNER_ITEM_TYPE_DBID, FILE_SIZE, MIME_TYPE, MODIFIED, DELTA_PARENT, COMPRESSION_METHOD, LINE_DELIMITER, CONTENT_HASH_CODE, CHARACTER_ENCODING, STORED_LENGTH, CONTENT_BYTES) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
SQL Exception #1
SQL Message: DB2 SQL Error: SQLCODE=-964, SQLSTATE=57011, SQLERRMC=null, DRIVER=4.26.14
SQL State: 57011
Error Code: -964
It is important to work with your DBA to make sure there is available space in your Db2 transaction logs. For details on how to calculate the transaction log size, see How to determine what DB2 transaction log size to set to avoid DB2 transaction log being full.
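As a hedged illustration of the commands a DBA might use to inspect and raise these limits (the values shown are placeholders, not recommendations):

db2 get db cfg for <database_name> | grep -iE "LOGFILSIZ|LOGPRIMARY|LOGSECOND"
db2 update db cfg for <database_name> using LOGFILSIZ 65536 LOGPRIMARY 25 LOGSECOND 100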

ERRORCODE=-4499, SQLSTATE=08001

This error is caused by a defect (APAR) in the underlying JDBC driver.

Message: Connection timed out (Read failed). ERRORCODE=-4499, SQLSTATE=08001
com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2030][11211][4.26.14] 
A communication error occurred during operations on the connection's underlying socket, socket input stream,
or socket output stream

Workaround: Use a connection string in the teamserver.properties file that includes:

com.ibm.team.repository.db.jdbc.location=//databaseURL\:51000/RM\:user\=user;password\={password};progressiveStreaming=no;fullyMaterializeLobData=false;streamBufferSize=1048576;

TEMP space - SQLCODE=-968, SQLSTATE=57011

It is also important to ensure that you have sufficient TEMP space, with the ability to auto-extend if necessary. Failure to do so will result in a compromised upgrade and will likely require a rollback and a new upgrade attempt.

Insufficient temp space in Db2 will report the error in the DOORS Next logs as:

Problem details: problem: com.ibm.team.repository.common.RetryableDatabaseException   There was a potentially serious failure identifying states to migrate.
com.ibm.team.repository.common.RetryableDatabaseException: CRJAZ1318E The server could not connect to the database. Try the operation again.
!Db2Provider.VirtualStorageIsNotAvailable!
SQL Exception #1
SQL Message: DB2 SQL Error: SQLCODE=-968, SQLSTATE=57011, SQLERRMC=null, DRIVER=4.26.14
SQL State: 57011
Error Code: -968

In the db2diag log, the disk-full error relates to temp space, for example:

MESSAGE : ZRC=0x850F000C=-2062614516=SQLO_DISK "Disk full."
          DIA8312C Disk was full.
DATA #1 : Page key, PD_TYPE_SQLB_PAGE_KEY, 16 bytes
  Pagekey: {pool:1;obj:5;type:128} PPNum:558544
Where pool:1 is the TEMPSPACE, as seen in the tablespace listing:
Address            Id    Type Content PageSz ExtentSz Auto Prefetch BufID BufIDDisk FSC NumCntrs MaxStripe  LastConsecPg RSE  Name
0x00007F28FF700300 1     SMS  SysTmp  16384  4        No   4        1     1         On  1        0          3            No   TEMPSPACE1
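A tablespace listing like the one above can be generated with db2pd; a hedged example for the RM database (output columns vary by Db2 version):

db2pd -db RM -tablespaces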

"SOME.TABLE to lock intent "X" has failed. The SQLCODE is "-911"

The DOORS Next upgrade itself may not report errors, but it may appear to hang. Check the Db2 troubleshooting output, such as db2diag, for errors similar to:

2020-05-25-15.24.16.121317+120 E3560589E612          LEVEL: Error
PID     : 58989                TID : 140200353195776 PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000            DB   : RM
APPHDL  : 0-18435              APPID: 10.78.19.26.35546.220525124505
AUTHID  : DB2INST1             HOSTNAME: myhostname.com
EDUID   : 5430                 EDUNAME: db2agent (RM) 0
FUNCTION: DB2 UDB, data management, sqldEscalateLocks, probe:4
MESSAGE : ADM5503E  The escalation of "180" locks on table 
          "REPOSITORY.ITEM_CURRENTS" to lock intent "X" has failed.  The 
          SQLCODE is "-911".

To resolve this problem:

  • Check the LOCKLIST setting in Db2 for the DNG (RM) database. Example: Max storage for lock list (4KB) (LOCKLIST) = 20000
  • Modify this setting on the DNG (RM) database to automatic: db2 update db cfg for <database_name> using LOCKLIST automatic
  • Check all the other Db2 settings we recommend and adjust if necessary.


Calculating Db2 table sizes

The Tuning the database section of the wiki guide to understanding your DOORS Next sizes states how to tune your database. Keeping the Db2 buffer cache properly sized relies on knowing the size of your data at the database level. The following query calculates table sizes:

"SELECT SUBSTR(TABSCHEMA,1,18) TABSCHEMA,SUBSTR(TABNAME,1,30) 
TABNAME,(DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE) 
AS TOTAL_SIZE_IN_KB,(DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE)/1024 
AS TOTAL_SIZE_IN_MB, (DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE) / (1024*1024) 
AS TOTAL_SIZE_IN_GB FROM SYSIBMADM.ADMINTABINFO WHERE TABSCHEMA NOT LIKE 'SYS%'"
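One hedged way to run this from the Db2 command line processor (the file name is a placeholder; terminate the statement with a semicolon when using -t):

db2 connect to <database_name>
db2 -tvf table_sizes.sql > table_sizes.out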

We have also seen this query highlight cases where previous Db2 monitoring was used and left behind significant data that can be purged. For example:

JAZZDB    STMTHIST_JAZZ_DEADLOCKS    875701760   855177   835
JAZZDB    STMTVALS_JAZZ_DEADLOCKS    473469952   462373   451

These two rows show how one customer had nearly 1.3 TB of unnecessary data in the RM tablespaces. We recommended that they contact IBM Db2 Support for assistance with the following commands, which purge the data and release the space back to the operating system.

db2 drop event monitor JAZZ_DEADLOCKS

db2 "alter tablespace <tablespace_name> lower high water mark"
db2 "alter tablespace <tablespace_name> reduce max"
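Before dropping anything, you can confirm which event monitors exist with a catalog query (a hedged example):

db2 "select evmonname, target_type from syscat.eventmonitors"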

Once you understand your database size, you can refer back to the Tuning the database guide to set an appropriate bufferpool and related memory settings.

Oracle errors experienced during upgrade

The Interactive Upgrade Guide states: "If multiple applications connect to a common database server at the same time, the database server might crash or cause connection issues. To avoid database related issues, it is recommended to upgrade the applications one by one."
The statement refers to the compute resources available on the database server, but also to shared database resources such as the TEMP and undo tablespaces.
It is hard to estimate how to size common Oracle resources when we cannot predict or know what else is happening at any specific time, for example concurrent upgrades.

If you run concurrent upgrades where resources are shared, staggering the key phases of the upgrades to alleviate pressure on these spaces is strongly advised. It is also advised that a specific test within the Recommendations when planning and implementing ELM DOORS Next 7.0.x upgrade project is made to accommodate a concurrent upgrade, once the focus on individual applications/ELM instances is complete.

We recommend setting the following optimizer parameters to false, because they change execution plans dynamically in reaction to the data the optimizer sees.
They add to upgrade unpredictability, since the order in which the data (for example, change sets) is processed then matters. This skews the data, making it hard for the optimizer to make good decisions.

Understanding Oracle tuning required for the upgrade documents the settings that are required to force Oracle to be more consistent when upgrading and to therefore consume less temp dbspace.

If you see the ORA-01652 error below, then ensure that you have applied the Oracle setting "_complex_view_merging"=FALSE.

There is more information on these values and how they would be set for production performance in the Oracle Tuning portion of the 7.0 Performance report.

TEMP tablespace

It is also important to ensure that you have sufficient TEMP tablespace, with the ability to auto-extend if necessary. Failure to do so will result in a compromised upgrade and will likely require a rollback and a new upgrade attempt.
During the upgrade, Oracle makes a significant number of decisions about execution plans. If Oracle does not optimize an execution plan well, this can lead to significantly more processing and invariably more temp tablespace being used.

We recommend setting the following system parameter for the duration of the upgrades. It reduces TEMP tablespace usage while still allowing multiple validation threads, letting the upgrade complete faster.
If you run out of TEMP space during an upgrade, you must roll back the database and start the upgrade again.

alter system set "_complex_view_merging"=false;

*Note, "_complex_view_merging" is a hidden parameter, so the double quotes are required.
Set that before upgrade and set it back to true after upgrade.
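A sketch of the before-and-after pair in SQL*Plus (verify with your DBA before applying):

-- before starting the upgrade
alter system set "_complex_view_merging"=false;
-- after the upgrade completes
alter system set "_complex_view_merging"=true;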

If you are unsure if you encountered this situation, you can parse the upgrade log for messages similar to the Oracle error message:
"ORA-01652: unable to extend temp segment by 128 in tablespace TEMP"

This is only necessary if the queries in Large composite changesets show composite changesets with more than 1000 members.
The more such changesets are returned, and the further above 1000 they are, the more temp tablespace will be required. If you are unable to apply the Oracle setting for _complex_view_merging, you will need to reduce the number of Foundation threads performing the queries that can consume large amounts of tablespace.
Add the system property below to the .../server/repotools-rm.sh file, set to a value from 1 to 10 (the default is the maximum, 10).

DEFINE="$DEFINE -DDATA_CHECK_THREAD_POOL_COUNT=1"


ORA-01654: unable to extend index

This error code, ORA-01654, means the server failed to allocate an extent of the required number of blocks for an index segment in the specified tablespace. This could be because the server ran out of disk space, or because the tablespace is too small and you need to add more datafiles to it. It is recommended that the tablespaces are monitored and set to autoextend where possible. Please consult ORA-01654: unable to extend index RM with your Oracle DBA about this error and the actions to resolve it.

SQL Message: ORA-01654: unable to extend index RM_DB_USER.DNGRTFCTSDNGRTFCTSDBRTFCTNMTT3 by 1024 in tablespace RM

Undo tablespace

For large upgrades, and/or where the data may be unusual due to import scripts, OSLC, or other non-GUI usage, it is recommended that the undo tablespace is monitored and set to autoextend where possible.

If you encounter the error below, then either add more datafiles to the undo tablespace, set the autoextensible flag, or enlarge the datafile(s); see the Oracle documentation for more advice on handling this feature. If you choose to add datafiles, then for large enterprise clients we recommend adding them in 64 GB increments. Also note that the error below is only an example of where this error could occur; each dataset may require resources at different points of the upgrade.

Caused by: com.ibm.team.repository.common.InternalRepositoryException: CRJAZ0329E The database query could not be completed. Vendor specific exception
SQL: INSERT INTO DNG.DNGRTFCTSDBRTFCTMDMSTRNGXTNSNS(JZ_PARENT_ID, INTERNAL_ID, KEY, VALUE) select * from (select ? as JZ_PARENT_ID, ? as INTERNAL_ID, ? as KEY, ? as VALUE from DUAL) X 
where not exists (select 1 from DNG.DNGRTFCTSDBRTFCTMDMSTRNGXTNSNS Y where X.INTERNAL_ID=Y.INTERNAL_ID and X.JZ_PARENT_ID=Y.JZ_PARENT_ID)
SQL Exception #1
SQL Message: ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1'

SQL State: 99999
Error Code: 30036
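A hedged SQL*Plus sketch of the remediation options above (file paths and sizes are placeholders; note that smallfile datafiles with an 8 KB block size cannot exceed 32 GB):

-- add another datafile to the undo tablespace
alter tablespace UNDOTBS1 add datafile '/u01/oradata/undotbs1_02.dbf' size 32g;
-- or allow an existing datafile to grow
alter database datafile '/u01/oradata/undotbs1_01.dbf' autoextend on next 1g maxsize 32g;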


Another error that can occur, either when running an individual application's upgrade or when running multiple applications concurrently (especially DOORS Next and Rhapsody Model Manager), is ORA-01555:

2021-02-02 12:12:12,928 ORA-01555: snapshot too old: rollback segment number 35 with name "_SYSSMU35_2166793944$" too small
com.ibm.team.repository.common.TeamRepositoryException: ORA-01555: snapshot too old: rollback segment number 35 with name "_SYSSMU35_2166793944$" too small
Your Oracle DBA will be able to advise the most appropriate resolution to the problem (a retention time increase or an overall space increase).

To calculate the value for UNDO_RETENTION, look at the SQL monitor and find out how long the longest-running SQL is; that provides a good estimate for the UNDO retention. The default is 900 seconds, which seems short given how long some of this SQL can run (https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/UNDO_RETENTION.html#GUID-D238E854-A2C5-463E-A146-31F61300124E).
Other references:
https://docs.oracle.com/en/database/oracle/oracle-database/19/admin/managing-undo.html#GUID-2C865CF9-A8B5-4BF1-A451-E8C08D3611F0
http://www.dba-oracle.com/t_ora_01555_snapshot_old.htm
http://www.dba-oracle.com/oracle_tips_rollback_segments.htm
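A hedged sketch of estimating the retention from the longest query Oracle has observed (the 3600 is illustrative only; your DBA should choose the value):

-- longest-running query (in seconds) recorded in the undo statistics
select max(maxquerylen) from v$undostat;
-- set the retention to comfortably exceed that figure
alter system set undo_retention=3600;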

Oracle archive logging

It is important to ensure that your Oracle archive log configuration can accommodate the very large transactions that occur as part of the DOORS Next upgrade, to avoid a full archive-log filesystem producing the error below:

2020-07-23 20:08:56,011 CRJAZ0503E The server was unable to connect to the database. Exception:
"CRJAZ2654E The server was unable to connect to the database because the database connection settings are invalid or because database is unreachable (SQLSTATE: 64000, SQLCODE: 257). 
Verify the connection settings in your teamserver.properties file. See the Jazz Team Server log for more information."
2020-07-23 20:08:57,044 CRJAZ2654E The server was unable to connect to the database because the database connection settings are invalid or because database is unreachable (SQLSTATE: 64000, SQLCODE: 257). 
Verify the connection settings in your teamserver.properties file. See the Jazz Team Server log for more information.
java.sql.SQLException: ORA-00257: Archiver error. Connect AS SYSDBA only until resolved.

The solution for this error is either:

  • increase the frequency of your archive log backups and cleanup, or
  • disable archive logging for the duration of the upgrade (NOARCHIVELOG mode); note that point-in-time recovery operations and online backups are not possible until it is enabled again.
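If you choose the second option, a hedged SQL*Plus sketch of the mode switch (coordinate with your DBA, and remember to re-enable archiving and take a fresh backup afterwards):

shutdown immediate
startup mount
alter database noarchivelog;
alter database open;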

ORA-01795: maximum number of expressions in a list is 1000

The addTables execution for the RM application fails with this message:

2021-03-01 14:36:48,922 [3147c405] Error while linking open and completed change sets and change set configurations
com.ibm.team.repository.common.InternalRepositoryException: CRJAZ0368E A database query could not run on the server.
   stmt = select t1.ITEM_ID, t1.STATE_ID from RM1USER.VVCMODEL_CONFIGURATION t1 left outer join RM1USER.REPOSITORY_QUERYABLE_JPI_MPPNG t2 on (t1.ID = t2.JPI) where t2.URL  in(?, ?, ?, ?, ?, ?, ?, ?, ?,..., ?, ?, ?)
at com.ibm.team.repository.service.TeamRepositoryExceptionHelper.wrapInternalException(TeamRepositoryExceptionHelper.java:109)
at com.ibm.team.repository.service.internal.dataaccess.FetchRows.executeQuery(FetchRows.java:265)
at com.ibm.team.repository.service.internal.dataaccess.QueryFetchRows.next(QueryFetchRows.java:89)
at com.ibm.team.repository.service.internal.AbstractQueryResult.hasNext(AbstractQueryResult.java:93)
at com.ibm.team.repository.service.internal.vvc.ConfigurationManagementImportService$9.run(ConfigurationManagementImportService.java:1517)
at com.ibm.team.repository.service.internal.vvc.ConfigurationManagementImportService$9.run(ConfigurationManagementImportService.java:1)
...
at com.ibm.team.repository.service.internal.vvc.ConfigurationManagementImportService.linkChangeSetsAndConfigurationsBatch(ConfigurationManagementImportService.java:1484)
at com.ibm.team.repository.service.internal.vvc.ConfigurationManagementImportService.linkChangeSetsAndConfigurations(ConfigurationManagementImportService.java:1472)
at com.ibm.team.repository.service.internal.vvc.ConfigurationManagementImportService.internalInitialize(ConfigurationManagementImportService.java:1297)
at com.ibm.team.repository.service.internal.vvc.ConfigurationManagementImportService.initialize(ConfigurationManagementImportService.java:1275)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at org.eclipse.soda.sat.core.internal.record.ExportProxyServiceRecord.invoke(ExportProxyServiceRecord.java:361)
at org.eclipse.soda.sat.core.internal.record.ExportProxyServiceRecord.access$0(ExportProxyServiceRecord.java:347)
at org.eclipse.soda.sat.core.internal.record.ExportProxyServiceRecord$ExportedServiceInvocationHandler.invoke(ExportProxyServiceRecord.java:56)
at com.sun.proxy.$Proxy843.initialize(Unknown Source)
at com.ibm.rdm.repotools.upgrade.addtables.AddTables06MigrationHandler.upgrade(AddTables06MigrationHandler.java:319)
at com.ibm.rdm.repotools.upgrade.addtables.AbstractCoreMigrationHandler.postAddTables(AbstractCoreMigrationHandler.java:37)
...
Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:774)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:925)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1111)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:4798)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:4845)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeQuery(OraclePreparedStatementWrapper.java:1501)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.stat.PreparedStatementStatWrapper.executeQuery(PreparedStatementStatWrapper.java:69)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.errlog.PreparedStatementErrLogWrapper.executeQuery(PreparedStatementErrLogWrapper.java:80)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.leak.PreparedStatementLeakWrapper.executeQuery(PreparedStatementLeakWrapper.java:59)
at com.ibm.team.repository.service.internal.dataaccess.FetchRows.executeQuery(FetchRows.java:237)
... 102 more

There is no workaround available for this. APAR PH35115: 7.0.1 upgrade addTables results in the following error: ORA-01795: maximum number of expressions in a list is 1000 (WI 529486) was raised, and the issue is fixed in V7.0.1 ifix009 and V7.0.2 ifix003.

ORA-00001: unique constraint (RM1USER.REPOSTR_RPSTR_QRBL_JP_MPPNG_PK)

When upgrading to V7.0.2, the addTables execution for the RM application fails with this message:

2021-03-20 23:57:51,363 [626447f4] Exception occurred while retrying to Batch to recover exception. This exception trace can be ignored as it was for retrial. The original exception will be below this trace.
java.sql.BatchUpdateException: ORA-00001: unique constraint (RM1USER.REPOSTR_RPSTR_QRBL_JP_MPPNG_PK) violated
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:12296)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:246)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.stat.StatementStatWrapper.executeBatch(StatementStatWrapper.java:254)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.errlog.StatementErrLogWrapper.executeBatch(StatementErrLogWrapper.java:184)
at com.ibm.team.repository.service.internal.db.jdbcwrappers.leak.StatementLeakWrapper.executeBatch(StatementLeakWrapper.java:75)
at com.ibm.team.repository.service.internal.dataaccess.BatchingRowReceiver.handleBatchUpdateException(BatchingRowReceiver.java:567)
at com.ibm.team.repository.service.internal.dataaccess.BatchingRowReceiver.jdbcSupportedBatch(BatchingRowReceiver.java:446)
at com.ibm.team.repository.service.internal.dataaccess.BatchingRowReceiver.executeBatch(BatchingRowReceiver.java:406)
at com.ibm.team.repository.service.internal.dataaccess.BatchingRowReceiver.execute(BatchingRowReceiver.java:334)
at com.ibm.team.repository.service.internal.dataaccess.IRowReceiver$IRowReceiverLocator.executePendingChanges(IRowReceiver.java:158)
at com.ibm.team.repository.service.internal.dataaccess.IRowReceiver$IRowReceiverLocator.executePendingChanges(IRowReceiver.java:137)
at com.ibm.team.repository.service.internal.RdbRepositoryDataMediator.executePendingChanges(RdbRepositoryDataMediator.java:2163)
at com.ibm.team.repository.service.internal.RdbRepositoryDataMediator.preCommitNotify(RdbRepositoryDataMediator.java:2052)
at com.ibm.team.repository.service.internal.rdb.RepositoryDatabase$WriteExclusiveTransaction.completeTransaction(RepositoryDatabase.java:855)
at com.ibm.team.repository.service.internal.rdb.RepositoryDatabase$Transaction.finishTransaction(RepositoryDatabase.java:672)
at com.ibm.team.repository.service.internal.rdb.RepositoryDatabase$3.run(RepositoryDatabase.java:409)
at com.ibm.team.repository.service.internal.rdb.ConnectionPoolService.withCurrentConnection(ConnectionPoolService.java:531)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)

There is no workaround available for this. APAR PH26386: 7.0.2 iFix001m Upgrade causes unique constraint violation reported in addTables log (WI 530948) was raised, and the issue is fixed in V7.0.1 ifix009 and V7.0.2 ifix003.



Related topics: 7.0 performance: IBM Engineering Requirements Management DOORS Next, Planning your ELM upgrade to version 7.0, Oracle 12c Tuning Guide, Recommendations when planning and implementing ELM DOORS Next 7.0.x upgrade project


Additional contributors: MoatazHarone, SrinivasMandava, FazhengZheng
