How to replicate standard ETL jobs in DCC?

Luca Martinucci | asked Nov 10 '16, 10:49 a.m.
When using DCC for data collection, the standard ETL jobs of the CLM applications must be disabled to avoid data corruption.
I need to know:
1) whether the DCC jobs fully substitute for the standard ETL jobs, so that all the data that the ETL jobs collect is also collected by DCC (and stored in the same tables of the DW database);
2) if not, whether there is an easy way to replicate the standard ETL jobs in DCC, for example via an export/import process, without defining them manually.
I am working with version 5.0.2 of CLM and DCC.

Accepted answer

Francesco Chiossi | answered Nov 11 '16, 5:18 a.m.
Hello Luca,

The burndown report uses the trend data that is stored in the data mart section of the data warehouse.
This data is collected every day during the ETL execution and cannot be reconstructed.
This means that if you create a new data warehouse, as you did when moving from Derby to SQL Server, that historical data is lost.
Unfortunately, there is no way to migrate that data between two data warehouse types.
You should start seeing good data in the burndown as soon as you start new sprints.
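The reason the trend data cannot be reconstructed can be illustrated with a minimal sketch (the table and field names below are hypothetical, not the actual DCC/CLM schema): each daily ETL run records a dated snapshot of the live state, so a day on which no run wrote to the warehouse simply has no row, and a brand-new warehouse starts with an empty trend table.

```python
from datetime import date

# Hypothetical illustration, NOT the real DCC schema:
# a trend table holds one snapshot row per day the ETL actually ran.
trend_table = []  # rows: (snapshot_date, open_work_items)

def run_daily_etl(snapshot_date: date, live_open_count: int) -> None:
    """Capture today's state; if the ETL never runs, the point is lost."""
    trend_table.append((snapshot_date, live_open_count))

def burndown_points(table):
    """Points the burndown widget can plot, in date order."""
    return sorted(table)

# Days the ETL ran against the OLD (Derby) warehouse:
run_daily_etl(date(2016, 11, 8), 42)
run_daily_etl(date(2016, 11, 9), 37)

# A freshly created (SQL Server) warehouse has no snapshot rows at all,
# so the widget has nothing to plot until new daily runs add points.
new_warehouse_trend = []

print(burndown_points(trend_table))
print(burndown_points(new_warehouse_trend))
```

The point of the sketch is that the snapshots are a side effect of running the ETL on a given day; there is no source from which missed days can be backfilled later.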

Best Regards
Francesco Chiossi
Luca Martinucci selected this answer as the correct answer

Luca Martinucci commented Nov 11 '16, 6:13 a.m.

So, is it possible to have both DCC and the data mart jobs running and collecting data?
Or is it the DCC that collects the burndown data as well?

Francesco Chiossi commented Nov 11 '16, 6:30 a.m.

Hello Luca,

DCC collects the historic data (a.k.a. the Metrics or Data Mart section) in the same way the default CLM data collection (Java ETL) does.
I think the problem is that you switched the data warehouse from Derby to SQL Server, so the metrics that were collected before by the Java ETL exist only on the old Derby database and not on the SQL Server one.

Best Regards
Francesco Chiossi

Luca Martinucci commented Nov 11 '16, 8:15 a.m.

Thanks for the answer.
Actually, this is not a blocking issue, as we performed this data warehouse switch in a test environment.
In the production environment we have never set up the data warehouse, so when we do set it up we will start collecting historic data from scratch.

One other answer

Francesco Chiossi | answered Nov 10 '16, 11:25 a.m.
Hello Luca,

The DCC jobs fully replace the default data collection (Java ETL).
They collect the same data, just in a more efficient way.
Make sure, however, that you have appropriate resources for DCC, as the performance improvement requires extra resources.
We recommend running DCC on a dedicated server, separate from the rest of the CLM applications.

Best Regards
Francesco Chiossi

Luca Martinucci commented Nov 11 '16, 4:46 a.m.

Hi Francesco,
I was wondering about DCC not collecting the same data as the Java ETL because we have recently "upgraded" from a "light" reporting solution (using the Java ETL and out-of-the-box reports) to an enterprise one (using DCC).
Until last week, we practised reporting using a Derby db for the data warehouse.
The new solution that we set up this week leverages JRS, which uses a SQL Server db as the data warehouse source, and DCC, which uses its own SQL Server db.
Both JRS and DCC work fine, but some pre-defined widgets (the burndown widget, in particular) no longer show any data.
So, I suspect that the data these widgets used to show was "lost" during the data warehouse upgrade.
And yes, both DCC and JRS are on a dedicated server, separate from the server running CLM.
