Schedules
Schedules are used to automatically execute Data Jobs on a pre-defined basis. In the default scenario, Schedules allow for sequential processing, which means that all Data Jobs within one Schedule are executed one after another. If one Data Job of the Schedule fails, all remaining Data Jobs are canceled automatically.
However, to orchestrate your Data Pipeline in a more flexible manner, you can also choose to execute the Data Jobs inside one Schedule in parallel (please create a Service Desk ticket to activate this functionality in your team). Such parallel executions may result in race conditions if the Data Jobs make use of the same Data Connections and tables. For example, if one Data Job selects from a table into which another Data Job inserts records, and you run both Data Jobs in parallel, the select will sometimes return the inserted records and sometimes not.
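The race described above can be illustrated with a minimal Python sketch. This is not EMS code — the list simply stands in for a shared database table, and the two threads stand in for two Data Jobs running in parallel:

```python
import threading

# A toy stand-in for a shared table used by two Data Jobs.
table = []
result = []

def insert_job():
    # "Data Job A": inserts a record into the shared table.
    table.append("new record")

def select_job():
    # "Data Job B": selects from the same table. Depending on timing,
    # it may or may not see the record inserted by Data Job A.
    result.append(list(table))

# Run both "Data Jobs" in parallel, as a parallel Schedule would.
a = threading.Thread(target=insert_job)
b = threading.Thread(target=select_job)
a.start()
b.start()
a.join()
b.join()

# Either outcome is possible — that is the race condition.
print(result[0])
```

If the two jobs have a real dependency on each other's data, the sequential default (or a trigger-based Schedule) is the safer choice.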
In addition to such frequency-based Schedules, you can also define trigger-based Schedules to express dependencies (please create a Service Desk ticket to activate this functionality in your team). That means that the successful execution of one Schedule automatically triggers another Schedule. This also allows for self-triggering Schedules, e.g. a Schedule that directly re-triggers itself once it finishes.
Time zone
To unify Schedules for different users across time zones, we use server time, which in our EMS is UTC+0. Please take this into account when configuring a Schedule.
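Converting your local target time to UTC is straightforward with Python's standard library. The example below is a sketch with a hypothetical goal (a daily load at 09:00 Berlin time); substitute your own time zone:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical goal: run a daily load at 09:00 local Berlin time.
local = datetime(2024, 7, 1, 9, 0, tzinfo=ZoneInfo("Europe/Berlin"))

# The Schedule must be configured in server time (UTC+0):
server_time = local.astimezone(ZoneInfo("UTC"))
print(server_time.strftime("%H:%M"))  # 07:00 — Berlin is UTC+2 in summer (CEST)
```

Note that a Schedule fixed at 07:00 UTC drifts relative to local time across daylight saving changes: in winter, 09:00 Berlin time corresponds to 08:00 UTC.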

Clicking on the "+ Add a Schedule" button allows you to create a new Schedule.
The table displays the following information about the Schedules:
The "name" column shows the name of the Schedule, which will also be shown in the logs.
The "status" column gives you an overview of which Schedules are enabled and will therefore be executed and which are disabled.
The "frequency" column shows you how often your Schedule gets executed.
The "next execution" column gives you an indication of when the next execution will happen.
By clicking on the context menu, you have the following actions available:
The context menu entry "open" opens the configuration window for the specific Schedule. A click on the row next to the context menu achieves the same.
The "rename" entry opens a dialog in which you can rename the Schedule.
"Open logs in execution history" is a shortcut to the Execution History where you can investigate the logs of a specific Schedule.
You can execute the configured Data Jobs of the Schedule manually by clicking on "Execute".
After a confirmation dialog, you can delete the Schedule. This will not delete the contained Data Jobs, but only the configuration of the respective Schedule.
Configuring a Schedule

On the top you can "open logs in execution history" to check past loads. The "Execute" button allows you to execute the configured Data Jobs manually for testing purposes.
After confirmation, you can enable the Schedule. Make sure that both the scheduling plan as well as the Data Jobs are correctly set up before enabling it.
You can decide whether all the Data Jobs should be executed as full or as delta loads. See Data Jobs for details.
There are multiple scheduling plans available. Important: If you make any changes to the scheduling plan you need to save it before changing anything else (e.g. enabling the Schedule or adding Data Jobs).
Hourly: The Schedule is executed every hour at 0, 15, 30, or 45 minutes after the full hour.
Every few hours: Similar to hourly, but you can specify an interval in hours between executions.
Daily: Specify a specific time of the day on which the Schedule gets executed.
Weekly: Specify one or more weekdays and a time on which the Schedule gets executed.
Monthly: The Schedule is executed on a specific day of the month and at a specific time.
Custom cron: Define a scheduling plan with cron syntax (see below).
By clicking on the "Schedule Data Jobs" button, you will receive a list view of the Data Jobs which allows you to add Data Jobs to the Schedule. You can reorder the Data Jobs in the Schedule by dragging the handles to the left of the Data Job names. Additionally, you can reorder the Data Jobs by using the up and down options in the context menu. There you can also remove a Data Job from the Schedule. This does not delete the Data Job - it only removes it from the current Schedule.
Details on the custom cron syntax
Note
You can use the following online tool for help with a custom cron string: https://www.freeformatter.com/cron-expression-generator-quartz.html
The cron syntax allows you to precisely define a custom scheduling plan. The syntax is composed of six elements. Each element is either a number or an asterisk for "every":
second: 0-59 or *
minute: 0-59 or *
hour: 0-23 or *
day of the month: 1-31, or * or ?
month of the year: 1-12 or JAN-DEC or *
day of the week: 1-7 or SUN-SAT, or * or ?
Examples:
| Cron syntax | Explanation |
|---|---|
| 0 0 0 1 * ? | The Schedule will be executed at midnight on the first day of every month. |
| 0 * * 1 * ? | The Schedule will be executed every minute, but only on the first day of the month. |
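To make the field order concrete, here is a minimal Python sketch of a matcher for the six-field syntax above. It is deliberately simplified — it handles only plain numbers, `*`, and `?`, not ranges, lists, or step values from the full Quartz syntax, and it is not how the EMS evaluates cron strings:

```python
from datetime import datetime

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check a datetime against a simplified 6-field cron expression.

    Supports only plain numbers, '*' and '?' — a sketch, not the full
    Quartz syntax (no ranges, lists, or step values).
    """
    second, minute, hour, dom, month, dow = expr.split()
    # Quartz weekdays run 1 = SUN .. 7 = SAT; convert from isoweekday (MON = 1).
    quartz_dow = dt.isoweekday() % 7 + 1
    actual = [dt.second, dt.minute, dt.hour, dt.day, dt.month, quartz_dow]
    for field, value in zip([second, minute, hour, dom, month, dow], actual):
        if field in ("*", "?"):
            continue  # "every" / "no specific value": always matches
        if int(field) != value:
            return False
    return True
```

For example, `cron_matches("0 0 0 1 * ?", ...)` is true exactly at midnight on the first day of a month, matching the first table row above.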