Using Amazon Elastic Kubernetes Service (EKS)

Use this tutorial to create an Amazon EKS cluster compatible with Saagie.

Before you begin:

Before creating a new cluster, set up your computer as follows:

  1. Create an Amazon Web Services (AWS) account if you do not already have one.

  2. Enable the appropriate access keys and secret access keys.

  3. Install the Kubernetes command-line tool, kubectl.

  4. Install the Amazon Web Services CLI.

  5. Configure the Amazon Web Services CLI.

  6. Install the command-line tool, eksctl, to work with Amazon EKS clusters.
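
The prerequisite tools can be sanity-checked from a terminal before going further. The helper below is a minimal sketch assuming the standard binary names (kubectl, aws, eksctl); check_tools is a hypothetical name and it only reports which tools are missing from your PATH.

```shell
#!/bin/bash
# Report which of the given command-line tools are missing from PATH.
check_tools() {
  local missing=""
  for tool in "$@"; do
    if ! command -v "$tool" > /dev/null 2>&1; then
      missing="$missing $tool"
    fi
  done
  echo "$missing" | sed 's/^ //'
}

check_tools kubectl aws eksctl
```

The last line prints an empty result when all three tools are installed, and the names of the missing ones otherwise.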

Creating or Configuring Your Cluster

  • Creating a New Cluster

  • Configuring an Existing Cluster

  1. To create your Amazon EKS cluster, refer to the Amazon EKS User Guide.

    For more information on managing access to AWS resources, see the Amazon Web Services documentation.
  2. Choose the eksctl method.

  3. Confirm that the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are defined.

  4. Create a cluster.yml file with the following content:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    
    metadata:
     name: <cluster name> (1)
     region: <region> (2)
     version: "<version>" (3)
    
    nodeGroups:
     - name: ng-1
       instanceType: m5.2xlarge
       desiredCapacity: 3

    Where:

    1 <cluster name> must be replaced with the name of your cluster. Your cluster name can contain only letters (a to z, case-insensitive) and numbers.
    2 <region> must be replaced with the region in which the cluster will be used.
    3 <version> must be replaced with a Kubernetes version that is compatible with Saagie. Current compatible Kubernetes versions are 1.20.x, 1.21.x, and 1.23.x.
    Use quotes around the version number because eksctl expects a string, not a float, in the YAML file.
  5. Run the following command line:

    eksctl create cluster -f cluster.yml
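
Cluster creation can take a while, so it helps to fail fast if the credentials from step 3 are missing. The helper below is a sketch; require_env is a hypothetical name, not part of eksctl or the AWS CLI.

```shell
#!/bin/bash
# Fail fast if any of the named environment variables is unset or empty.
# require_env is a hypothetical helper, not part of eksctl or the AWS CLI.
require_env() {
  local name
  for name in "$@"; do
    if [ -z "${!name}" ]; then
      echo "Missing required environment variable: $name" >&2
      return 1
    fi
  done
}

if require_env AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; then
  echo "Credentials present; run: eksctl create cluster -f cluster.yml"
else
  echo "Set your AWS credentials before running eksctl." >&2
fi
```
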
  1. If you are using an existing Amazon EKS cluster, create your configuration file by running the following aws command line:

    aws eks --region <aws region> update-kubeconfig --name <cluster name> (1)

    Where:

    1 <aws region> and <cluster name> must be replaced with your region and cluster name.
  2. Once your configuration file is created, check the connectivity.

Verifying Your Kubernetes Cluster

  1. Run the following command line to verify that you have access to your Kubernetes cluster:

    kubectl get nodes

    The output of the command should look like the following:

    NAME                                           STATUS   ROLES    AGE    VERSION
    ip-192-168-15-134.eu-west-1.compute.internal   Ready    <none>   9m8s   v1.13.8-eks-cd3eb0
    ip-192-168-35-150.eu-west-1.compute.internal   Ready    <none>   9m3s   v1.13.8-eks-cd3eb0
    ip-192-168-88-76.eu-west-1.compute.internal    Ready    <none>   9m7s   v1.13.8-eks-cd3eb0
    All nodes must have the status Ready.

Installing Calico

Calico is a network policy engine for Kubernetes used to implement network segmentation and tenant isolation.

Amazon EKS does not install Calico automatically, but it is required for your Kubernetes cluster.

  1. To install Calico, refer to the Amazon EKS User Guide.

  2. If you did not install Calico when you created your cluster, run the following command line:

    kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.3/calico.yaml (1)
    1 Make sure the version of Calico is compatible with your cluster.

Setting Up a Role for Saagie Jobs

The Kubernetes pods responsible for running Saagie jobs use a service account associated with an AWS role, which configures access rights.
If you skip this setup, note that jobs launched on Saagie can get admin rights on the AWS API.

  1. Choose the AWS policy that meets your needs.

    Example policy for jobs that do not require access to AWS resources:

    ARN: arn:aws:iam::aws:policy/AWSDenyAll

    Example policy for jobs requiring access to S3:

    ARN: arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

    To create your own policy, see the AWS user guide, Creating IAM policies.
  2. Create the file create-job-role.sh as follows, defining the variables indicated:

    #!/bin/bash
    set -e
    
    # Define your variables here. Variables are explained below the code block.
    CLUSTER_NAME=<cluster-name> (1)
    SAAGIE_PREFIX=<installationId> (2)
    ROLE_NAME=<role-name> (3)
    AWS_POLICY_ARN=<policy-arn> (4)
    
    ISSUER_URL=$(aws eks describe-cluster \
        --name $CLUSTER_NAME \
        --query cluster.identity.oidc.issuer \
        --output text)
    ISSUER_HOSTPATH=$(echo $ISSUER_URL | cut -f 3- -d'/')
    ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    PROVIDER_ARN="arn:aws:iam::$ACCOUNT_ID:oidc-provider/$ISSUER_HOSTPATH"
    cat > saagie-job-trust-policy.json << EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "$PROVIDER_ARN"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "${ISSUER_HOSTPATH}:aud": "sts.amazonaws.com"
            },
            "StringLike": {
              "${ISSUER_HOSTPATH}:sub": "system:serviceaccount:${SAAGIE_PREFIX}-project*:*"
            }
          }
        }
      ]
    }
    EOF
    aws iam create-role \
      --role-name $ROLE_NAME \
      --assume-role-policy-document file://saagie-job-trust-policy.json
    aws iam update-assume-role-policy \
      --role-name $ROLE_NAME \
      --policy-document file://saagie-job-trust-policy.json
    aws iam attach-role-policy \
      --role-name $ROLE_NAME \
      --policy-arn $AWS_POLICY_ARN
    aws iam get-role \
      --role-name $ROLE_NAME \
      --query Role.Arn --output text

    Where:

    1 <cluster-name> must be replaced with the name of your EKS cluster. Your cluster name can contain only letters (a to z, case-insensitive) and numbers.
    2 <installationId> must be replaced with the same value determined for your DNS entry at the beginning of the installation process.
    3 <role-name> must be replaced with the name of the role that will be created. For example, saagie_job_role.
    4 <policy-arn> must be replaced with the ARN of the chosen policy for Saagie jobs.
    Take note of the <installationId> value; you will need it in several steps to come.
  3. Make the file executable with the following command line:

    chmod +x create-job-role.sh
  4. Start the role creation by running the script file with the following command line:

    ./create-job-role.sh

    The ARN of the role you created is printed in the output.

    Take note of the ARN; you will need it when configuring your instance.

Creating Storage Classes for Your Saagie Platform

Use this tutorial to create storage classes to store data in a non-volatile device during and after the execution of your platform.

Storage classes are stored in a file named storage.yml which contains the configuration for your storageClass resources:

  • common-storageclass: Used to store Saagie data, such as databases.

  • <installationId>-storageclass: Used to store job data, such as uploaded artifacts.

  • <installationId>-app-storageclass: Optional storageClass used to store app data and job data on a different provisioner.

    The <installationId> value is the same value you chose when you determined your DNS entry at the beginning of the installation process. It must be a string of up to 12 lowercase alphanumeric characters with no special characters.
  1. Create the storage.yml file for your Amazon EKS Kubernetes cluster.

    The following sample storage.yml file for Amazon EKS can be customized according to your needs.
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: common-storageclass
    parameters:
      type: gp2
      fsType: ext4
    provisioner: kubernetes.io/aws-ebs
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <installationId>-storageclass (1)
    parameters:
      type: gp2
      fsType: ext4
    provisioner: kubernetes.io/aws-ebs

    Where:

    1 <installationId> must be replaced with the same value determined for your DNS entry at the beginning of the installation process.
  2. To store app data and job data on different provisioners, include the following lines in the same storage.yml file:

    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <installationId>-app-storageclass (1)
    parameters: (2)
    
    provisioner: (3)

    Where:

    1 <installationId> must be replaced with the same value determined for your DNS entry at the beginning of the installation process.
    2 parameters must contain the parameters for app data.
    3 provisioner must indicate your second provisioner used to store app data.
  3. Apply the storage.yml file by running the following command line:

    kubectl apply -f storage.yml
  4. Confirm that the storage classes are available by running the following command line:

    kubectl get sc
    Example 1. Output of the command for Amazon EKS
    NAME                           PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    common-storageclass            kubernetes.io/aws-ebs   Delete          Immediate              false                  2m43s
    gp2 (default)                  kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  5h25m
    <installationId>-storageclass  kubernetes.io/aws-ebs   Delete          Immediate              false                  30s
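
To confirm that a storage class can actually provision volumes, you can bind a throwaway PersistentVolumeClaim against it. This is a minimal sketch; test-pvc is an arbitrary name and the claim is not part of the Saagie installation.

```yaml
# test-pvc.yml -- throwaway claim to smoke-test common-storageclass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: common-storageclass
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f test-pvc.yml and check it with kubectl get pvc test-pvc; with the Immediate binding mode shown above, the claim should reach the Bound status. Remove it afterwards with kubectl delete pvc test-pvc.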

Creating the requirements.yml File

All Saagie deployments need the same requirements.yml file regardless of your cloud provider. The requirements.yml file will create two service accounts on the saagie-common namespace:

  • sa-saagie-deploy with the cluster-admin role

  • traefik-ingress-controller with its related ClusterRole and ClusterRoleBinding

  1. Create your requirements.yml file with the code as follows:

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: <installationId>
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sa-saagie-deploy
      namespace: <installationId>
    automountServiceAccountToken: true
    imagePullSecrets:
      - name: saagie-docker-config
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: sa-saagie-deploy-crbinding
      namespace: <installationId>
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: ServiceAccount
      name: sa-saagie-deploy
      namespace: <installationId>
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: traefik-ingress-controller
      namespace: <installationId>
    imagePullSecrets:
      - name: saagie-docker-config
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: traefik-ingress-cluster-binding
    subjects:
    - kind: ServiceAccount
      name: traefik-ingress-controller
      namespace: <installationId>
    roleRef:
      kind: ClusterRole
      name: traefik-ingress-cluster
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: traefik-ingress-cluster
    rules:
      - apiGroups:
          - ""
        resources:
          - services
          - endpoints
          - secrets
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - extensions
          - networking.k8s.io
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
        - extensions
        - networking.k8s.io
        resources:
        - ingresses/status
        verbs:
        - update
      - apiGroups:
        - traefik.containo.us
        resources:
        - middlewares
        - ingressroutes
        - traefikservices
        - ingressroutetcps
        - ingressrouteudps
        - tlsoptions
        - tlsstores
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - apiextensions.k8s.io
        resources:
        - customresourcedefinitions
        verbs:
        - create
      - apiGroups:
        - apiextensions.k8s.io
        resourceNames:
        - middlewares.traefik.containo.us
        - ingressroutes.traefik.containo.us
        - traefikservices.traefik.containo.us
        - ingressroutetcps.traefik.containo.us
        - ingressrouteudps.traefik.containo.us
        - tlsoptions.traefik.containo.us
        - tlsstores.traefik.containo.us
        resources:
        - customresourcedefinitions
        verbs:
        - get
    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/cluster-service: "true"
      name: 00-saagie-common-psp
    spec:
      allowPrivilegeEscalation: false
      allowedHostPaths:
        - pathPrefix: /etc/machine-id
          readOnly: true
        - pathPrefix: /etc/fluent-bit
          readOnly: false
        - pathPrefix: /var/log
          readOnly: true
        - pathPrefix: /var/lib/docker/containers
          readOnly: true
        - pathPrefix: /data/docker/containers
          readOnly: true
      fsGroup:
        rule: RunAsAny
      runAsUser:
        rule: RunAsAny
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      volumes:
        - configMap
        - emptyDir
        - secret
        - persistentVolumeClaim
        - hostPath
        - projected
        - downwardAPI
    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/cluster-service: "true"
      name: 00-saagie-project-psp
    spec:
      allowPrivilegeEscalation: true
      fsGroup:
        rule: RunAsAny
      runAsUser:
        rule: RunAsAny
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      volumes:
        - configMap
        - emptyDir
        - secret
        - persistentVolumeClaim
        - projected
        - downwardAPI
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/cluster-service: "true"
      name: psp:saagie-common:saagie-common-cluster-psp
    rules:
      - apiGroups:
          - policy
        resourceNames:
          - 00-saagie-common-psp
        resources:
          - podsecuritypolicies
        verbs:
          - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/cluster-service: "true"
      name: psp:saagie-common:saagie-project-cluster-psp
    rules:
      - apiGroups:
          - policy
        resourceNames:
          - 00-saagie-common-psp
        resources:
          - podsecuritypolicies
        verbs:
          - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: psp:saagie-common:saagie-deploy-psp-crbinding
      namespace: <installationId>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: psp:saagie-common:saagie-common-cluster-psp
    subjects:
      - kind: Group
        name: system:serviceaccounts:saagie-common

    Where:

    • <installationId> must be replaced with your installation ID.

  2. Apply your requirements.yml file by running the following command line:

    kubectl apply -f requirements.yml

    The output of the command should look like the following:

    namespace/<installationId> created
    serviceaccount/sa-saagie-deploy created
    ...
    rolebinding.rbac.authorization.k8s.io/psp:saagie-common:saagie-deploy-psp-crbinding created

    Where:

    • <installationId> must be replaced with your installation ID.

Applying or Installing Secret saagie-docker-config

Saagie Docker images are pulled from a private registry that requires credentials. The credentials should have been provided to you.

  1. Apply or install the secret:

    • Apply: If you receive the credentials in a Kubernetes secret file, apply the secret to your cluster by running the following kubectl command line:

      kubectl apply -n <installationId> -f saagie-docker-config.yaml (1)

      Where:

      1 <installationId> must be replaced with your installation ID.
    • Install: If you receive a username and password, install the secret on your cluster by running the following kubectl command line:

      kubectl create secret docker-registry -n <installationId> saagie-docker-config \ (1)
        --docker-server=<registry server> \ (2)
        --docker-username=<username> \ (3)
        --docker-password=<password> (4)

      Where:

      1 <installationId> must be replaced with your installation ID.
      2 <registry server> must be replaced with the Docker repository hosting Saagie images.
      3 <username> must be replaced with the username provided to you.
      4 <password> must be replaced with the password provided to you.
  2. Edit the default service account to reference the saagie-docker-config secret by running the following kubectl command line:

    kubectl patch serviceaccount -n <installationId> default -p '{"imagePullSecrets":[{"name" : "saagie-docker-config"}]}'

    Where:

    • <installationId> must be replaced with your installation ID.

  3. Confirm that the secret is properly installed by running the following command line:

    kubectl get secret -n <installationId>

    Where:

    • <installationId> must be replaced with your installation ID.

    The output of the command should look like the following:

    NAME                   TYPE                             DATA   AGE
    saagie-docker-config   kubernetes.io/dockerconfigjson   1      2m43s
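
To double-check which registry host the installed secret points at, you can decode its payload locally. The helper below is a sketch that assumes the standard .dockerconfigjson layout and uses only base64, grep, and sed; with a live cluster, the payload would come from the kubectl command shown in the comment.

```shell
#!/bin/bash
# Extract the registry host from a base64-encoded .dockerconfigjson payload.
registry_from_dockerconfig() {
  # Decode, then pull the first key of the "auths" object.
  echo "$1" | base64 -d \
    | grep -o '"auths"[[:space:]]*:[[:space:]]*{[[:space:]]*"[^"]*"' \
    | sed 's/.*"\([^"]*\)"$/\1/'
}

# With a live cluster, obtain the payload with:
#   kubectl get secret saagie-docker-config -n <installationId> \
#     -o jsonpath='{.data.\.dockerconfigjson}'
```
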

Installing Saagie in Offline Mode

You can install Saagie in offline mode if your Kubernetes cluster is not connected to the Internet.

To install Saagie in offline mode, you need to manage your own Docker registry containing images of the Saagie product as well as Saagie technologies.

This section guides you through uploading the resources to your registry and installing the repository in your cluster.

Saagie provides you with the archives of Docker images needed to run your platform, as well as the technologies.

Uploading Docker Images

Before you begin:

To upload the Docker images to your registry, make sure you meet all the following prerequisites:

  • A machine with access to your Docker registry.

  • The tar archives provided by Saagie, which include the Saagie product and technologies.

  • The Skopeo command line tool installed on your machine. For more information, you can refer to the Git repository dedicated to Skopeo.

  • The credentials to push the images into the registry (if any).

  • Uploading Saagie Product Archive

  • Uploading Saagie Technologies Archive

  1. Run the following command line to decompress the archive:

    tar xvf <product-tar-archive> (1)

    Where:

    1 <product-tar-archive> must be replaced with the file name of the Saagie product archive provided by Saagie.
  2. OPTIONAL: If your registry requires authentication, configure the user and password to connect to it using skopeo login. For more information, you can refer to the Git repository dedicated to Skopeo.

  3. Run the following command line in the decompressed archive to start the image upload:

    ./pushall.sh <registry> (1)

    Where:

    1 <registry> is the hostname of your Docker registry.
The process is the same as for uploading Saagie product archives.
  1. Run the following command line to decompress the archive:

    tar xvf <technologies-tar-archive> (1)

    Where:

    1 <technologies-tar-archive> must be replaced with the file name of the Saagie technologies archive provided by Saagie.
  2. OPTIONAL: If your registry requires authentication, configure the user and password to connect to it using skopeo login. For more information, you can refer to the Git repository dedicated to Skopeo.

    If you configured authentication when you uploaded the first tar archive file, you will not need to configure it again.
  3. Run the following command line in the decompressed archive to start the image upload:

    ./pushall.sh <registry> (1)

    Where:

    1 <registry> is the hostname of your Docker registry.
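
The pushall.sh script is provided inside the archive; the sketch below only illustrates the kind of mapping it performs when re-tagging each image onto your registry host. dest_ref is a hypothetical helper, not part of the archive, and real archives may lay out images differently.

```shell
#!/bin/bash
# Map a source image reference onto a destination registry host.
# dest_ref is a hypothetical helper illustrating what pushall.sh does.
dest_ref() {
  local registry="$1" image="$2"
  echo "docker://${registry}/${image}"
}

# For each image archive, the upload would then be a skopeo copy, e.g.:
#   skopeo copy "docker-archive:${archive}" "$(dest_ref "$REGISTRY" "$image")"
```
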

Installing Technology Repository

The repository containing your technologies must be installed manually in your cluster.

For more information on adding technologies, see our SDK documentation.
  1. Copy the path to the technologies.zip file that contains your technologies.

  2. Run the following saagiectl command line to install the repository in your cluster:

    ./bin/saagiectl upload technologies --file <technologies-file> (1)

    Where:

    1 <technologies-file> must be replaced with the path to your technologies.zip file.

Setting Up SMTP (Simple Mail Transfer Protocol) Requirements

An SMTP server is mandatory to send, receive, and relay outgoing mail between your Saagie platform and users' email addresses. Saagie must therefore have access to your SMTP server, which must be compatible with the following configurations:

  • SMTP authentication can be anonymous or credential-based.

  • SMTP transport can be SMTP or SMTPS.

  • You must have a valid SSL certificate.

Once configured, you will be able to use your user email address to receive status alerts or change/reset the password associated with your Saagie account.

Deploying and Updating Your SSL Certificate

Use this tutorial to deploy and update your SSL certificate to your Kubernetes cluster.

Before you begin:

Make sure that your SSL certificate is valid by checking the following constraints:

  • The certificate must be within its validity period.

  • The certificate must include at least the Saagie product URL.

  • The KeyUsage attribute must include the digitalSignature and keyEncipherment elements.
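
These constraints can be checked locally with openssl before deploying. The sketch below assumes a PEM-format certificate; check_cert is a hypothetical helper, and the -ext flag requires OpenSSL 1.1.1 or later.

```shell
#!/bin/bash
# Inspect a PEM certificate: fail if already expired, then print its
# validity dates, subject alternative names, and keyUsage extension
# (look for digitalSignature and keyEncipherment in the output).
check_cert() {
  openssl x509 -in "$1" -noout -checkend 0 || return 1
  openssl x509 -in "$1" -noout -dates -ext subjectAltName,keyUsage
}

# Example: check_cert cert.pem
```
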

  1. Open your preferred terminal.

  2. To deploy (or update) your SSL certificate, run the following command line:

    kubectl create secret tls saagie-common-tls --cert=cert.pem --key=cert.key -n <installationId> --dry-run=client -o yaml | kubectl apply -f -

    Where:

    • <installationId> must be replaced with your installation ID.