Managing Your Jobs
Creating Jobs
Before creating the job, you must create your code file with the actions you want your job to perform. In this tutorial, we will use Python technology.
- Create a Python file called hello-scranton.py:
  - Open a new file in your preferred text editor.
  - Copy and paste the following code into your file:
    print("Hello, Scranton Branch!")
  - Save the file as hello-scranton.py.
- Click Projects from the primary navigation menu.
  Your project library opens, listing the existing projects.
- Click a project in the list.
  The project opens on its job library.
- Click New job to create a new job.
  The New job page opens.
- Enter the required information:
  - Enter a name, define an alias, and add a description.
  - Click Continue.
  - Select your job type and technology.
  - Click Continue.
  - Depending on whether you are creating an embedded or external job, complete the corresponding step.
  - Click Continue.
  - Configure your job settings. For more information, see Job Settings.
- Click Create job to confirm the creation.
  The Overview page of your job opens, and a message appears saying that your job has been created.
To delete a job, you can either:
A confirmation message appears. Click Delete again to confirm the deletion. Be careful: there is no way to cancel the deletion once it is confirmed.
Running and Stopping Jobs
- Click Projects from the primary navigation menu.
  Your project library opens, listing the existing projects.
- Click a project in the list.
  The project opens on its job library.
- From the list, click a job to access its details.
  The job Overview page opens.
- Click either Run or Stop, depending on the current status of your job.
  You can also access this command at the bottom of the secondary navigation menu from the Instances and Versions pages.
  The job status changes depending on the outcome.
Modifying Job Settings
- Click Projects from the primary navigation menu.
  Your project library opens, listing the existing projects.
- Click a project in the list.
  The project opens on its job library.
- From the list, click a job to access its details.
  The job Overview page opens.
- From the Overview page, click the desired setting to edit it:
  Names are mandatory, limited to 255 characters, and must be unique within a project.
  The job alias is optional and unique for each job within a project. It allows you to refer to a job from within another job, and can be used to transfer information between jobs during a pipeline execution when the Variables setting is enabled.
  Descriptions are optional and have no restrictions, but it is good practice to keep them short and informative.
There are two runtime types:
- The manual run, which requires you to click Run to start the job.
- The scheduled run, which launches the job according to the schedule you choose.
  Scheduled jobs can also be started manually. The Scheduled run type has three schedule modes: Simple, Shortcut, and Expert.
  - In Simple mode, you can easily specify the schedule values through the user interface. There are many possibilities.
  - In Shortcut mode, you can choose the recurrence of your run on an hourly, daily, weekly, monthly, or annual basis. All other settings are automatic.
  - In Expert mode, you can specify the schedule using the Cron format. The Cron time string consists of five values separated by spaces: [minute] [hour] [day of the month] [month] [day of the week]. They are based on the following information:

    Table 1. Cron format
    Descriptor          Acceptable values
    Minute              0 to 59, or *
    Hour                0 to 23, or *
    Day of the month    1 to 31, or *
    Month               1 to 12, or *
    Day of the week     0 to 7 (0 and 7 both represent Sunday), or *

    The Cron time string must contain an entry for each of the five attributes. If you only want to set the minute value, you must use asterisk characters for the other four attributes that you are not configuring. Once you have finished scheduling your run, you will see a summary of your choice below, along with the time of the next run.
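    For example, the Cron time string 30 8 * * 1 describes a run at 08:30 every Monday, and 0 6 1 * * describes a run at 06:00 on the first day of each month. These strings are given only to illustrate the format.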
Alerts are optional and can be set to receive an email when the status of your job changes. They can be sent to multiple email addresses to notify you of the following status changes:
- Requested: the job's run has been requested and is being executed.
- Queued: the job is waiting for the necessary resources to be executed.
- Running: the job is up and running.
- Failed: the job has crashed. A failed job can go into an Out Of Memory (OOM) state, which is an extension of the Failed state. The OOM state can be due to a lack of memory (RAM).
- Stopping: the job is stopping.
- Stopped: the job has stopped running.
- Succeeded: the job has been successfully executed.
- Unknown: the job no longer runs because an error has occurred.
For embedded jobs only, CPU/GPU and RAM resources are optional and can be specified for optimal execution.
The consumption of your job can be managed through guaranteed resources, that is, the minimum amount of resources requested, and limited resources, that is, the maximum amount of resources that can be consumed.
This works the same way for CPU/GPU and RAM resources, except that for RAM you can choose between GB and MB units of measure. By default, job resource management is disabled, because decisions about resource requests and limits are difficult to make without historical data about the resource usage patterns of jobs.
Unless you have specific requirements, you can leave this feature disabled and let Saagie automatically assign the appropriate resource requests and limits for your job. Automatic adjustments can be made to avoid inconsistent configurations. If you try to set a guaranteed value greater than the limit value or, similarly, a limit value smaller than the guaranteed value, a note appears to inform you that the guaranteed value or the limit value has been adjusted, depending on the situation.
For GPU resources, the guaranteed and limit values must be equal. When the guaranteed and limit values are not optimal, a recommendation notification appears with the appropriate values for an optimal configuration.
- Switching from CPU to GPU resources results in the loss of the current configuration.
- Modifying CPU/GPU and RAM resources automatically restarts your job.
Saving is automatic. You can press Enter to validate the job name change, click anywhere nearby to confirm the description and job alias changes, and close the side panel to validate the scheduled run, email alert, and resource changes.
Upgrading Jobs
You must update your code file or create a new one.
- Open your file in your preferred text editor.
- Copy and paste the following code into your file:
  print("Hi there, Scranton Branch!")
- Save the file as hi-scranton.py.
- Click Projects from the primary navigation menu.
  Your project library opens, listing the existing projects.
- Click a project in the list.
  The project opens on its job library.
- From the list, click a job to access its details.
  The job Overview page opens.
  On this page, you can see information about the last update, such as when it took place, by whom, the technology and runtime context used, and the package and log of the job.
- From the Instances or Versions page, click Upgrade job.
  You can also click Edit from the Overview page.
  The Upgrade job page opens.
- Enter the information to change. You can:
  - Depending on whether you are upgrading an embedded or external job, complete the corresponding step.
  - Click Continue.
  - Add a release note to briefly explain your changes.
- Click Save job to save your changes and exit the job upgrade settings.
  Your job has been upgraded, and you should see that a new version of it has been created, along with a new instance.
Moving Jobs
- Click Projects from the primary navigation menu.
  Your project library opens, listing the existing projects.
- Click a project in the list.
  The project opens on its job library.
- From the list, select a job by checking the box next to the job.
  - You can move one or more jobs at the same time, as long as they are from the same category and technology and are moved to the same project.
  - You cannot select a job if it is already part of a pipeline.
  A pop-in window appears with the number of selected jobs and options for moving them.
- From the pop-in, select the new project and category for the selected job(s).
  You can move your job(s) to any project you have access to and whose job technology is selected in its settings.
- Click Move.
  A warning message appears indicating that moving the job(s) may impact their functionality.
- Select Start Migration to confirm the move or Cancel to cancel it.
  Depending on the complexity of the job, this may take a few minutes to complete.
Creating and Modifying Variables in a Job Output in a Pipeline
- Enable the Variables setting in your pipeline:
  You can access it from the pipeline Overview page or from its Edit mode.
  - Click Variables.
    A panel opens with the existing variables in a code block.
  - Click the switch to enable the modification of variables during pipeline execution.
- To use variables in the code of your jobs, you must write the variables you want to use in other jobs, in text form, to the execution variables output file /workdir/output-vars.properties, located in your job's local file system. You can either write:
  - One variable per line, with the following pattern: VARIABLE_NAME=variable_value.
  - Several variables on one line, separated by the special character \n: VARIABLE_NAME=variable_value\nVARIABLE_NAME=variable_value\nVARIABLE_NAME=variable_value.
Variable names are mandatory. They must start with a letter and can be up to 128 characters long, using alphanumeric characters (a-z, A-Z, 0-9) and underscores (_).
Example 1. Defining variables for a Bash job
To define the following variables in a Bash job:

VARIABLE_NAME     variable_value
myVariable_1      Hello everybody
myVariable_2      2023
Other_variable    444

Write the following code line in the command line of your job:
echo -ne 'myVariable_1=Hello everybody\nmyVariable_2=2023\nOther_variable=444' > /workdir/output-vars.properties
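Because this tutorial uses the Python technology, here is a minimal sketch of how the same variables could be written from a Python job. The variable names and values come from the example above, and the file path is the execution variables output file described earlier; this is an illustration, not the only way to write the file.

# Write pipeline output variables from a Python job (illustrative sketch).
variables = {
    "myVariable_1": "Hello everybody",
    "myVariable_2": "2023",
    "Other_variable": "444",
}
# Each variable goes on its own line, using the VARIABLE_NAME=variable_value pattern.
with open("/workdir/output-vars.properties", "w") as output_file:
    for name, value in variables.items():
        output_file.write(f"{name}={value}\n")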
Refer to the documentation of the technology used in your job to know how to write to a file.

Example 2. How to read variables?
To read variables in your job's code, you can use:
- VARIABLE_NAME: corresponds to the last value written by the previous job, which is read when the current job starts.
- INIT_VARIABLE_NAME: corresponds to the pipeline initialization value, that is, the value of the corresponding environment variable readable by the pipeline.
- jobAlias_VARIABLE_NAME: corresponds to the output value written by the job referenced by the alias.
For example, to read the variables of the previous example, write the following code lines in the command line of your job:
echo $myVariable_1
echo $INIT_myVariable_1
echo $jobAlias_myVariable_1
echo $myVariable_2
echo $INIT_myVariable_2
echo $jobAlias_myVariable_2
echo $Other_variable
echo $INIT_Other_variable
echo $jobAlias_Other_variable
The output variables will be retrieved in the next jobs during pipeline execution and used as environment variables.
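In a Python job, these variables can be read the same way as any other environment variable, since they are exposed as environment variables during pipeline execution. A minimal sketch, reusing the variable names from the examples above and assuming the job runs after the one that wrote them:

import os

# Last value written by the previous job, read when this job starts.
print(os.environ.get("myVariable_1"))
# Pipeline initialization value for the same variable.
print(os.environ.get("INIT_myVariable_1"))
# Value written by the job referenced by its alias; replace "jobAlias" with the actual alias.
print(os.environ.get("jobAlias_myVariable_1"))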