Managing Your Jobs

Use these tutorials to create, modify, and run your jobs.

Creating Jobs

You can create jobs to perform tasks and launch them individually or as part of a data pipeline.

Before you begin:

Before creating the job, you must create your code file with the actions you want your job to perform. In this tutorial, we will use Python technology.

  1. Open a new file in your preferred text editor.

  2. Copy and paste the following code into your file:

    print("Hello, Scranton Branch!")
  3. Save the file as hello-scranton.py.

  1. Click Projects from the primary navigation menu.
    Your project library opens, listing the existing projects.

  2. Click a project in the list.
    The project opens on its job library.

  3. Click New job to create a new job.
    The New job page opens.

  4. Enter the required information.

    1. Enter a name, define an alias, and add a description.

    2. Click Continue.

    3. Select your job type and technology.

    4. Click Continue.

    5. Depending on whether you are creating an embedded or external job:

      • Embedded job: configure the technology version, upload your code file, and enter your shell command.

      • External job: configure the technology version, select the connection from the Connection list (or create it on the fly if you don’t already have one), and select your job from the Job list.

    6. Click Continue.

    7. Configure your job settings. For more information, see Job Settings.

  5. Click Create job to confirm the creation.
    The Overview page of your job opens, and a message appears saying that your job has been created.

To delete a job, you can either:

  • Click the kebab menu, then Delete, from the Overview page of the corresponding job.

  • Click Delete job in the secondary navigation menu from any other page of your job.

A confirmation message appears. Click Delete again to confirm the deletion. Be careful: there is no way to cancel the deletion once it is confirmed.

You cannot delete a job that is part of a pipeline.

Running and Stopping Jobs

You can run or stop your jobs manually, even when the jobs are scheduled to run.

  1. Click Projects from the primary navigation menu.
    Your project library opens, listing the existing projects.

  2. Click a project in the list.
    The project opens on its job library.

  3. From the list, click a job to access its details.
    The job Overview page opens.

  4. Click either Run or Stop depending on the current status of your job.

    You can also access this command at the bottom of the secondary navigation menu from the Instances and Versions pages.

    The job status changes depending on the outcome.

Modifying Job Settings

After its creation, you can modify your job settings at any time. You can access the settings from the Overview page of the job.

  1. Click Projects from the primary navigation menu.
    Your project library opens, listing the existing projects.

  2. Click a project in the list.
    The project opens on its job library.

  3. From the list, click a job to access its details.
    The job Overview page opens.

  4. From the Overview page, click the desired setting to edit it:

    Job settings from the "Overview" page.
    • 1 - Name

    • 2 - Alias

    • 3 - Description

    • 4 - Scheduled Run

    • 5 - Email Alerts

    • 6 - Resources

    Names are mandatory, limited to 255 characters, and must be unique within a project.

    The job alias is optional and must be unique for each job within a project. It allows you to refer to a job within another job, and can be used to transfer information between jobs during a pipeline execution when the Variables setting is enabled.

    Descriptions are optional and have no restrictions, but it is a good practice to keep them short and informative.

    There are two run types:

    • The manual run, which requires you to click Run to start the job.

    • The scheduled run, which launches the job according to the schedule you choose.

      Scheduled jobs can also be started manually.

      The Scheduled run type has three schedule modes: Simple, Shortcut, and Expert.

      • In Simple mode, you can easily specify the schedule through the user interface. Many combinations are possible.

        Screenshot of the settings for the scheduled run type in simple mode

      • In Shortcut mode, you can choose the recurrence of your run on an hourly, daily, weekly, monthly, or annual basis. All other settings are automatic.

        Screenshot of the settings for the scheduled run type in shortcut mode

      • In Expert mode, you can specify the schedule using the Cron format. The Cron time string consists of five values separated by spaces: [minute] [hour] [day of the month] [month] [day of the week]. The acceptable values are as follows:

        Table 1. Cron format
        Descriptor        Acceptable values
        Minute            0 to 59, or *
        Hour              0 to 23, or *
        Day of the month  1 to 31, or *
        Month             1 to 12, or *
        Day of the week   0 to 7 (0 and 7 both represent Sunday), or *

        The Cron time string must contain an entry for each attribute. If you only want to set a value for the minutes, you must use an asterisk for each of the other four attributes that you are not configuring; see the example below.

        Screenshot of the settings for the scheduled run type in expert mode

        Once you have finished scheduling your run, a summary of your choice and the time of the next run are displayed below.
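        For example, assuming you want the job to run every Monday at 06:30, the Cron time string in Expert mode could be:

        30 6 * * 1

        Here, 30 is the minute, 6 is the hour, the two asterisks leave the day of the month and the month unrestricted, and 1 is Monday.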

    Alerts are optional and can be set to receive an email when the status of your job changes. They can be sent to multiple email addresses to notify you of the following status changes:

    • Requested: the job’s run has been requested and is being processed.

    • Queued: the job is waiting for the necessary resources to be executed.

    • Running: the job is up and running.

    • Failed: the job has crashed. A failed job can go into an Out Of Memory (OOM) state, which is an extension of the Failed state. The OOM state can be due to a lack of memory (RAM).

    • Stopping: the job is stopping.

    • Stopped: the job has stopped running.

    • Succeeded: the job has been successfully executed.

    • Unknown: the job no longer runs because an error has occurred.

    Resource settings are available for embedded jobs only.

    CPU/GPU and RAM resources are optional and can be specified for optimal execution.

    The consumption of your job can be managed with guaranteed resources, that is, the minimum amount of resources requested, and limited resources, that is, the maximum amount of resources that can be consumed.

    This works the same way for CPU/GPU and RAM resources, except that for RAM you can choose between GB and MB units of measure.

    By default, job resource management is disabled because decisions about resource requests and limits are difficult to make without historical data about resource usage patterns of jobs.

    Unless you have specific requirements, you can leave this feature disabled and let Saagie automatically assign the appropriate resource requests and limits for your job.

    Automatic adjustments can be made to avoid inconsistent configurations. If you try to set a guaranteed value greater than the limit value or, similarly, a limit value smaller than the guaranteed value, a note appears to inform you that, depending on the situation, the guaranteed value or the limit value has been adjusted (a).

    For GPU resources, the guaranteed and limit values must be equal.

    RAM limit adjustment message.

    In addition, when the guaranteed value and the limit value are not optimal, a recommendation notification appears with the appropriate values for an optimal configuration (b).

    CPU recommendation message.

    • Switching from CPU to GPU resources results in the loss of the current configuration.

    • Modifying CPU/GPU and RAM resources automatically restarts your job.

  5. Saving is automatic. Press Enter to validate the job name change, click anywhere nearby to confirm the description and job alias changes, and close the side panel to validate the scheduled run, email alert, and resource changes.

Upgrading Jobs

You can upgrade your jobs to always get the most out of them. By upgrading your job, you create a new version of it.

Before you begin:

You must update your code file or create a new one.

  1. Open your file in your preferred text editor.

  2. Copy and paste the following code into your file:

    print("Hi there, Scranton Branch!")
  3. Save the file as hi-scranton.py.

  1. Click Projects from the primary navigation menu.
    Your project library opens, listing the existing projects.

  2. Click a project in the list.
    The project opens on its job library.

  3. From the list, click a job to access its details.
    The job Overview page opens.

    On this page you can see information about the last update, such as when it took place and by whom, the technology and runtime context used, and the package and logs of the job.
  4. From the Instances or Versions page, click Upgrade job.

    You can also click Edit from the Overview page.

    The Upgrade job page opens.

  5. Enter the information to change. You can:

    1. Depending on whether you are upgrading an embedded or external job:

      • Embedded job: configure the technology version, upload your code file, and enter your shell command.

      • External job: configure the technology version, select the connection from the Connection list (or create it on the fly if you don’t already have one), and select your job from the Job list.

    2. Click Continue.

    3. Add a release note to briefly explain your changes.

    4. Click Save job to save your changes and exit the job upgrade settings.

    Your job has been upgraded, and you should see that a new version of it has been created, along with a new instance.

Moving Jobs

You can move or migrate jobs from their current project to another project on the same platform.

If the job was part of a pipeline at some point, the job can only be moved if that pipeline has been deleted.
  1. Click Projects from the primary navigation menu.
    Your project library opens, listing the existing projects.

  2. Click a project in the list.
    The project opens on its job library.

  3. From the list, select a job by checking the box next to the job.

    • You can move one or more jobs at the same time, as long as they are jobs of the same category and technology and are moved to the same project.

    • You cannot select a job if it is already part of a pipeline.

    A pop-in window appears with the number of selected jobs and options for moving them.

    Pop-in window with the number of selected jobs and option for moving them.
  4. From the pop-in, select the new project and category for the selected job(s).

    You can move your job(s) to any project that you have access to and in which the job technology is selected in the project settings.
  5. Click Move.
    A warning message appears indicating that moving the job(s) may impact their functionality.

  6. Select Start Migration to confirm the move or Cancel to cancel it.
    Depending on the complexity of the job, this may take a few minutes to complete.

Creating and Modifying Variables in a Job Output in a Pipeline

In a pipeline execution context, you can create and modify variables at the output of a job to transfer and use them in the next job.

You must have the Variables setting enabled in your pipeline to allow the creation and modification of variables.
  1. Enable the Variables setting in your pipeline:

    You can access it from the pipeline Overview page or from its Edit mode.
    1. Click Variables.
      A panel opens with the existing variables in a code block.

    2. Click the switch to enable the modification of variables in pipeline execution.

  2. To use variables in the code of your jobs, write the variables you want to use in other jobs, in text form, to the execution variables output file /workdir/output-vars.properties, located in your job’s local file system.

    You can either write:

    • One variable per line with the following pattern: VARIABLE_NAME=variable_value.

    • Several variables on one line, separated by the special character \n: VARIABLE_NAME=variable_value\nVARIABLE_NAME=variable_value\nVARIABLE_NAME=variable_value.

    Variable names are mandatory. They must start with a letter and can be up to 128 characters long, including alphanumeric characters (a-z, A-Z, 0-9) and underscores (_).

    Example 1. Defining variables for a Bash job

    To define the following variables in a Bash job:

    VARIABLE_NAME     variable_value
    myVariable_1      Hello everybody
    myVariable_2      2023
    Other_variable    444

    Write the following code line in the command line of your job:

    echo -ne 'myVariable_1=Hello everybody\nmyVariable_2=2023\nOther_variable=444' > /workdir/output-vars.properties
    Refer to the documentation of the technology used in your job to know how to write to a file.
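    For instance, here is a minimal sketch of how the same variables could be written from a Python job (the technology used earlier in this tutorial); the output file path comes from this documentation, while the rest is only one possible way to write it:

    # Output variables to pass to the next jobs in the pipeline.
    # The names and values are the ones used in Example 1.
    variables = {
        "myVariable_1": "Hello everybody",
        "myVariable_2": "2023",
        "Other_variable": "444",
    }

    # /workdir/output-vars.properties is the execution variables output file.
    with open("/workdir/output-vars.properties", "w") as output_file:
        for name, value in variables.items():
            output_file.write(f"{name}={value}\n")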
    Example 2. How to read variables?

    To read variables in your job’s code, you can use:

    • VARIABLE_NAME: Corresponds to the last value written by the previous job, which is read when the current job starts.

    • INIT_VARIABLE_NAME: Corresponds to the pipeline initialization value, that is, the value of the corresponding environment variable readable by the pipeline.

    • jobAlias_VARIABLE_NAME: Corresponds to the output value written by the job referenced by the alias.

    For example, to read the variables of the previous example, write the following code line in the command line of your job:

    echo $myVariable_1
    echo $INIT_myVariable_1
    echo $jobAlias_myVariable_1
    echo $myVariable_2
    echo $INIT_myVariable_2
    echo $jobAlias_myVariable_2
    echo $Other_variable
    echo $INIT_Other_variable
    echo $jobAlias_Other_variable

    The output variables will be retrieved in the next jobs during pipeline execution and used as environment variables.
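    Similarly, here is a minimal sketch of how these environment variables could be read from a Python job, assuming the previous job's alias is jobAlias as in the example above:

    import os

    # Last value written by the previous job.
    print(os.environ.get("myVariable_1"))

    # Pipeline initialization value.
    print(os.environ.get("INIT_myVariable_1"))

    # Output value written by the job referenced by the alias "jobAlias".
    print(os.environ.get("jobAlias_myVariable_1"))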

You can view a summary table of the variables used and modified for each job during the pipeline execution. On the Overview and Instances pages of the corresponding pipeline, click See Variables from the contextual menu of the job instance.