

Parallel Transformations [Performance]

In a sequential execution, a long-running transformation can block all transformations queued behind it. Executing transformations in parallel instead can accelerate data job executions and make their duration more predictable.
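The effect can be illustrated with a minimal sketch (hypothetical Python stand-ins, not a Celonis API): independent transformations that each take a fixed amount of time finish in roughly the duration of the longest one when run in parallel, instead of the sum of all durations when run sequentially.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a transformation; real Celonis transformations
# are SQL statements executed by the platform, not Python functions.
def run_transformation(name, seconds):
    time.sleep(seconds)  # simulate execution time
    return name

transformations = [("activities", 2), ("cases", 2), ("attributes", 2)]

# Sequential: total runtime is roughly the sum of all durations (~6 s here).
start = time.perf_counter()
for name, secs in transformations:
    run_transformation(name, secs)
print(f"sequential: {time.perf_counter() - start:.1f} s")

# Parallel: total runtime is roughly the longest single duration (~2 s here),
# provided the transformations do not depend on each other.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    list(pool.map(lambda t: run_transformation(*t), transformations))
print(f"parallel: {time.perf_counter() - start:.1f} s")
```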

To achieve that, split a data job into multiple data jobs. This can start with assigning groups of transformations from one data job to separate data jobs (n transformations : 1 data job) and can go as far as isolating single statements of one transformation into separate data jobs (1 transformation : n data jobs).

Increasing the granularity of the split step by step while tracking data job execution performance makes it possible to detect problematic transformations or statements and isolate them further, as sketched below.
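A minimal sketch of that tracking step, assuming per-job execution durations taken from hypothetical run logs (the job names and numbers are illustrative, not Celonis output):

```python
# Hypothetical execution durations (in seconds) recorded after splitting one
# data job into several smaller ones.
durations = {
    "job_activities": 180,
    "job_cases":      240,
    "job_attributes": 1900,   # clear outlier worth isolating further
}

# The job dominating the total runtime is the next candidate for splitting
# into per-transformation (or per-statement) data jobs.
slowest = max(durations, key=durations.get)
share = durations[slowest] / sum(durations.values())
print(f"{slowest} accounts for {share:.0%} of the total runtime")
```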

Note that this approach requires careful consideration of interdependencies between extractions and transformations: data jobs created by the split may need to be scheduled before others to preserve the required execution order.
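One way to reason about such an ordering is a dependency graph over the split data jobs. The sketch below uses Python's standard graphlib to derive which jobs can run in parallel and which must wait; the job names and dependencies are assumptions for illustration, not Celonis scheduling features.

```python
from graphlib import TopologicalSorter

# Hypothetical dependencies between split data jobs: each key may run
# only after the jobs it depends on have finished.
dependencies = {
    "job_activities":  {"job_staging"},
    "job_cases":       {"job_staging"},
    "job_final_views": {"job_activities", "job_cases"},
}

ts = TopologicalSorter(dependencies)
ts.prepare()
# Jobs returned together in one batch have no mutual dependencies and could
# be scheduled in parallel; the next batch must wait for them to finish.
while ts.is_active():
    batch = list(ts.get_ready())
    print("run in parallel:", batch)
    ts.done(*batch)
```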

Because of this added complexity, and because splitting reduces the maintainability and transparency of your scripts, parallel transformations should only be set up in problematic cases and are not the standard approach.

Example - Multiple Separate Data Jobs as a Result of Splitting One Data Job
[Image: one data job split into multiple separate data jobs]
Example - Scheduling Data Jobs to Account for Interdependencies
[Image: data jobs scheduled to account for interdependencies]