Celonis Product Documentation


Schedules are used to automatically execute Data Jobs on a regular basis. By default, Schedules process sequentially: all the Data Jobs within one Schedule are executed one after another, and if one job fails, the remaining jobs are cancelled.

However, to orchestrate your Data Pipeline more flexibly, you can also execute the Data Jobs inside one Schedule in parallel (please create a Service Desk ticket to activate this functionality for your team). Such parallel executions may result in race conditions if the Data Jobs make use of the same data connections and tables. For example, if one Data Job selects from a table into which another Data Job is inserting records, and you run both Data Jobs in parallel, then in some cases the select will return the inserted records and in other cases it will not.
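The race condition described above can be illustrated with a minimal Python sketch. This is an analogy using threads and an in-memory list, not Celonis internals: two concurrent "jobs" share a table, one inserting while the other selects, and the select's result depends on the interleaving.

```python
import threading

table = []  # stands in for a shared database table

def insert_job():
    # Data Job A: inserts 100 records into the shared table
    table.extend(range(100))

def select_job(results):
    # Data Job B: selects from the same table at an unpredictable moment
    results.append(len(table))

results = []
jobs = [threading.Thread(target=insert_job),
        threading.Thread(target=select_job, args=(results,))]
for j in jobs:
    j.start()
for j in jobs:
    j.join()

# Depending on the interleaving, the "select" sees anywhere
# from 0 to 100 of the inserted records.
print(results[0])
```

If the selecting job depends on the inserted data, run the two jobs sequentially in one Schedule instead of in parallel.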

In addition to such frequency-based Schedules, you can also define trigger-based Schedules to express dependencies (please create a Service Desk ticket to activate this functionality for your team). This means that the successful execution of one Schedule automatically triggers another Schedule. Schedules can also trigger themselves, e.g. to directly re-trigger a Schedule once it finishes.

Time zone

To unify scheduling for different users across time zones, we use server time, which in our EMS is UTC+0. Please take this into account when configuring a schedule.
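For example, to run a schedule at a fixed local time, first convert that time to UTC and configure the schedule with the UTC value. A minimal Python sketch (the time zone Europe/Berlin and the date are just example assumptions):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Desired run: 06:00 local time in Berlin on a winter date (CET, UTC+1).
local_run = datetime(2024, 1, 15, 6, 0, tzinfo=ZoneInfo("Europe/Berlin"))
utc_run = local_run.astimezone(ZoneInfo("UTC"))

# Configure the schedule for this UTC time instead of the local one.
print(utc_run.strftime("%H:%M"))  # 05:00
```

Note that a schedule configured in UTC does not follow daylight saving time changes: the same 06:00 Berlin time maps to 04:00 UTC in summer.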

  1. Click the "new schedule" button to create a new schedule.

  2. The table displays the following information about each schedule:

    1. The name of the schedule. This is also shown in the logs.

    2. The "status" column gives you an overview of which schedules are enabled and will therefore be executed and which are disabled.

    3. The "frequency" column shows you how often your schedule is executed.

    4. The "next execution" column gives you an indication of when the next run will occur

    5. The "scheduled jobs" column counts the number of jobs that are contained in this schedule.

  3. The context menu offers the following actions:

    1. The context menu entry "open" opens the configuration window for the specific schedule. A click on the row next to the context menu achieves the same.

    2. "Rename" opens a dialog in which you can rename the schedule.

    3. "Open logs in execution history" is a shortcut to the execution history where you look at the logs of the specific schedule.

    4. You can execute the configured Data Jobs of the schedule manually by clicking "Execute".

    5. After a confirmation dialog you can delete the schedule. This does not delete the contained jobs, only the configuration of the schedule.

Configuring a schedule
  1. At the top you can "open logs in execution history" to check past loads. The "Execute" button allows you to run the configured Data Jobs manually for testing purposes.

  2. After a confirmation you can enable the schedule. Make sure that both the scheduling plan and the jobs are correctly set up before enabling it.

  3. You can decide whether all the jobs should be executed as full loads or as delta loads. See Data Jobs for details.

  4. You can choose to load all Data Models in the Data Pool after a successful run (default). If you uncheck this option, you will need to load the Data Models manually or via the API. As of March 24th, 2020, this is no longer recommended because you can define which Data Models are loaded within a Data Job.

  5. There are multiple scheduling plans available. Important: If you make any changes to the scheduling plan you need to save it before changing anything else (e.g. enabling the schedule or adding jobs).

    1. Hourly: The schedule is executed every hour, at 0, 15, 30, or 45 minutes past the hour.

    2. Every few hours: Similar to hourly, but you can specify an interval in hours between executions.

    3. Daily: Specify a time of day at which the schedule is executed.

    4. Weekly: Specify one or more weekdays and a time on which the schedule is executed.

    5. Monthly: The schedule is executed on a specific day of the month at a specific time.

    6. Custom cron: Define a scheduling plan using cron syntax (see below).

  6. By clicking the "schedule data jobs" button you get a list view of the jobs, which allows you to add jobs to the schedule. You can reorder the jobs in the schedule by dragging the handles to the left of the job names, or by using the up and down options in the context menu. There you can also remove a job from the schedule. This does not delete the job - it only removes it from the current schedule.

Details on the cron syntax


An online cron expression tool can help you build and check a custom cron string.

The cron syntax allows you to precisely define a custom scheduling plan. The syntax is composed of six elements. Each element is either a number, an asterisk (*) for "every", or, for the two day fields, a question mark (?) for "no specific value":

  1. second: 0-59 or *

  2. minute: 0-59 or *

  3. hour: 0-23 or *

  4. day of the month: 1-31 or */?

  5. month of the year: 1-12 or JAN-DEC or *

  6. day of the week: 1-7 or SUN-SAT or */?


Examples


0 0 0 1 * ?

The schedule will be executed at midnight on the first day of every month.

0 * * 1 * ?

The schedule will be executed every minute, but only on the first day of the month.
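To make the six-field syntax concrete, here is a minimal Python sketch of a matcher that checks whether a given point in time satisfies a cron expression. It supports only plain numbers, "*", and "?" (both treated as "any") - a deliberate simplification, and the function name "matches" is illustrative; this is not how the platform evaluates cron expressions internally.

```python
from datetime import datetime

def matches(cron: str, when: datetime) -> bool:
    """Check whether a datetime satisfies a six-field cron expression.

    Fields: second minute hour day-of-month month day-of-week.
    Only plain numbers, '*' and '?' (both meaning 'any') are supported.
    """
    parts = cron.split()
    if len(parts) != 6:
        raise ValueError("expected six fields: second minute hour day month weekday")
    # Cron weekdays here run 1-7 = SUN-SAT; Python's isoweekday() runs
    # 1-7 = MON-SUN, so map iso Monday..Sunday -> 2..7, 1.
    values = [when.second, when.minute, when.hour, when.day, when.month,
              when.isoweekday() % 7 + 1]
    for field, value in zip(parts, values):
        if field in ("*", "?"):
            continue  # wildcard: any value matches
        if int(field) != value:
            return False
    return True

# The first documented example fires at midnight on the 1st of each month:
print(matches("0 0 0 1 * ?", datetime(2020, 3, 1, 0, 0, 0)))   # True
print(matches("0 0 0 1 * ?", datetime(2020, 3, 2, 0, 0, 0)))   # False
```

The second documented example, "0 * * 1 * ?", matches any minute of the 1st of the month (second must be 0, day must be 1, everything else is a wildcard).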