Using Another Service Platform

Use this tutorial to create a Kubernetes cluster compatible with Saagie without using a Kubernetes as a Service (KaaS) platform.

Creating or Configuring Your Cluster

If you are using an on-premises cloud or are not using a KaaS platform, you will need a Vanilla Kubernetes cluster.

Before you begin:

Before creating a Vanilla Kubernetes cluster, make sure you meet the following requirements:

Table 1. Kubernetes Vanilla Cluster Requirements for Saagie
Requirement Details

Network add-on (CNI – Container Network Interface)

Choose Calico in the section Installing a Pod network add-on.

Saagie is only tested and certified with Calico. Other CNIs may work, but they have not been tested.

Volume Management System

A storage provider is required to create persistent storage. For more information, see Creating and Configuring Storage Classes for Your Saagie Platform.

LoadBalancer

We recommend using a LoadBalancer tool to provide access to the Saagie platform.

Other methods, such as using NodePort and a reverse proxy, are possible but not recommended as they have not been tested extensively. Parts of your Saagie platform may not work as expected without additional configuration and testing.
  1. To create your Vanilla Kubernetes cluster, follow the official Kubernetes tutorial, Creating a cluster with kubeadm.
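On a bare-metal Vanilla cluster there is no cloud load balancer, so Services of type LoadBalancer stay Pending unless you install one yourself. As a sketch only (MetalLB is one commonly used option, not a Saagie requirement, and the address range below is a placeholder for a free range on your network), a Layer 2 MetalLB setup could look like:

```shell
# Install MetalLB (check the MetalLB project for the current version).
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

# Declare a pool of addresses MetalLB may assign to LoadBalancer Services.
# 192.168.50.240-192.168.50.250 is a placeholder range.
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: saagie-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.50.240-192.168.50.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: saagie-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - saagie-pool
EOF
```

Any other LoadBalancer implementation that fits your network is equally valid; this only illustrates the kind of configuration involved.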

Labeling Nodes for Isolation Mode

Isolation mode allows you to separate the workload between your platforms on dedicated nodes, preventing the executions of one platform from intruding on another platform’s resources.

To isolate your workload, you must add the correct label to each node to dedicate them to a platform. There are two types of node:

Common node

Common node(s) allow you to isolate the Saagie installation from the rest of your workload. A common node must be labeled as follows:

kubectl label nodes <your-node-name> io.saagie/type=common
kubectl label nodes <your-node-name> io.saagie/installationId=<installationId> (1)

Where:

1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
If you do not have a common node labeled as such, Saagie will not start.
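Because Saagie will not start without a labeled common node, it is worth verifying the labels right away. A quick check (the node name is an example):

```shell
# List nodes carrying the common label; at least one must appear.
kubectl get nodes -l io.saagie/type=common

# Show all labels on a given node (node-1 is a placeholder name).
kubectl get node node-1 --show-labels
```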

Platform node

Platform node(s) allow you to separate the workload between your platforms. You can have as many labeled nodes as required. A platform node must be labeled as follows:

kubectl label nodes <your-node-name> io.saagie/type=platform
kubectl label nodes <your-node-name> io.saagie/installationId=<installationId> (1)
kubectl label nodes <your-node-name> io.saagie/platform-assignable=<platformId> (2)

Where:

1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
2 <platformId> must be replaced with the ID of the platform. It is determined during the configuration of your platform. Its value is defined according to the number of platforms and their order, starting from one. You can therefore predict it.
The order in which the platforms are declared during configuration must match the order of the platform IDs you enter here in the node pool. Remember this order for later.
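As an illustration, assuming an installation ID of `mysaagie` and two platforms (IDs 1 and 2, assigned in declaration order), the labeling could look like:

```shell
# Dedicate node-2 to platform 1 and node-3 to platform 2 (hypothetical node names).
kubectl label nodes node-2 io.saagie/type=platform io.saagie/installationId=mysaagie io.saagie/platform-assignable=1
kubectl label nodes node-3 io.saagie/type=platform io.saagie/installationId=mysaagie io.saagie/platform-assignable=2

# Verify which nodes are assignable to platform 1.
kubectl get nodes -l io.saagie/platform-assignable=1
```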

Verifying Your Kubernetes Cluster

  1. Run the following command line to verify that you have access to your Kubernetes cluster:

    kubectl get nodes

    All nodes must have the Ready status.
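Instead of reading the STATUS column manually, you can let kubectl block until every node reports Ready:

```shell
# Fails with a timeout error if any node is still NotReady after two minutes.
kubectl wait --for=condition=Ready nodes --all --timeout=120s
```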

Creating and Configuring Storage Classes for Your Saagie Platform

Create storage classes to store data in a non-volatile device during and after the execution of your platform. Storage classes are stored in a file named storage.yml which contains the configuration for your storageClass resources:

  • common-storageclass: Used to store Saagie data, such as databases.

  • <installationId>-storageclass: Used to store job data, such as uploaded artifacts.

  • <installationId>-app-storageclass: Optional storageClass used to store app and job data on a different provisioner.

  1. Create the storage.yml file for your Service platform cluster. Here are some examples:

    • The following examples cover advanced use of Kubernetes clusters and assume a strong understanding of them.

    • The following sample storage.yml file can be customized according to your needs. For more information, see the Kubernetes documentation.

    • Ceph

    • NFS

    • OVH

    Saagie’s SRE team uses Ceph with the provisioner ceph.com/rbd. Here is an example of their configuration. Yours will depend on your storage technology and provisioner.

    To use Ceph, add the following configuration to your storage.yml file:

    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: common-storageclass
    provisioner: ceph.com/rbd
    allowVolumeExpansion: true
    parameters:
      monitors: <array of ips> (1)
      adminId: admin
      adminSecretName: ceph-secret-admin
      adminSecretNamespace: kube-system
      pool: common
      userId: common
      userSecretName: ceph-secret-common
      fsType: ext4
      imageFormat: "2"
      imageFeatures: "layering"
    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: <customer name>-storageclass (2)
    provisioner: ceph.com/rbd
    allowVolumeExpansion: true
    parameters:
      monitors: <array of ips>
      adminId: admin
      adminSecretName: ceph-secret-admin
      adminSecretNamespace: kube-system
      pool: <pool id> (3)
      userId: <user id> (4)
      userSecretName: ceph-secret-<customer name>
      fsType: ext4
      imageFormat: "2"
      imageFeatures: "layering"
    ---
    apiVersion: v1
    data:
      key: <key>
    kind: Secret
    metadata:
      name: ceph-secret-<customer name>
      namespace: <installationId> (5)
    type: ceph.com/rbd
    ---
    apiVersion: v1
    data:
      key: <key> (6)
    kind: Secret
    metadata:
      name: ceph-secret-common
      namespace: <installationId> (5)
    type: ceph.com/rbd

    Where:

    1 <array of ips> for monitors is provided by Saagie’s SRE team. For example, 192.168.50.100:6789,192.168.50.101:6789,192.168.50.102:6789,192.168.50.110:6789,192.168.50.111:6789,192.168.50.112:6789
    2 <customer name> must be replaced with your installation ID, given during installation and used as the prefix of your platform URL.
    3 <pool id> is provided by Saagie’s SRE team.
    4 <user id> is provided by Saagie’s SRE team.
    5 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
    6 <key> is provided by Saagie’s SRE team.

    To use NFS, add the following configuration to your storage.yml file:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: common-storageclass
    provisioner: nfsprovisioner/ifs
    allowVolumeExpansion: true
    parameters:
      archiveOnDelete: "false"
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <installationId>-storageclass (1)
    provisioner: nfsprovisioner/ifs
    allowVolumeExpansion: true
    parameters:
      archiveOnDelete: "false"

    Where:

    1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.

    To use OVH, add the following configuration to your storage.yml file:

    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: common-storageclass
    parameters:
      availability: nova
      type: classic
    provisioner: cinder.csi.openstack.org
    allowVolumeExpansion: true
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <installationId>-storageclass (1)
    parameters:
      availability: nova
      type: classic
    provisioner: cinder.csi.openstack.org
    allowVolumeExpansion: true

    Where:

    1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
  2. To store app data and job data on different provisioners, include the following lines in the same storage.yml file:

    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <installationId>-app-storageclass (1)
    parameters: (2)
    provisioner: (3)
    allowVolumeExpansion: true

    Where:

    1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
    2 The parameters value must contain the parameters for your app data.
    3 The provisioner value must indicate your second provisioner used to store app data.
  3. Apply the storage.yml file by running the following command line:

    kubectl apply -f storage.yml
  4. Confirm that the storage classes are available by running the following command line:

    kubectl get sc
Operations teams may need to adjust the configuration of a Vanilla Kubernetes cluster after the installation process is complete. You can customize parts of your configuration at any time by adding information to the storage.yml file.
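A simple way to confirm that a storage class actually provisions volumes is to create a throwaway PersistentVolumeClaim against it (the claim name and namespace below are examples):

```shell
# Create a 1 Gi test claim against common-storageclass.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storageclass-smoke-test
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: common-storageclass
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should reach the Bound status. Note that with a
# WaitForFirstConsumer volumeBindingMode it stays Pending until a pod uses it.
kubectl get pvc storageclass-smoke-test -n default

# Clean up the test claim.
kubectl delete pvc storageclass-smoke-test -n default
```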

Creating the requirements.yml File

All Saagie deployments need the same requirements.yml file regardless of your cloud provider. The requirements.yml file will create two service accounts in the <installationId> namespace:

  • sa-saagie-deploy with the cluster-admin role

  • traefik-ingress-controller with its related ClusterRole and ClusterRoleBinding

  1. Create your requirements.yml file as follows:

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: <installationId>
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sa-saagie-deploy
      namespace: <installationId>
    automountServiceAccountToken: true
    imagePullSecrets:
      - name: saagie-docker-config
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: sa-saagie-deploy-crbinding
      namespace: <installationId>
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: ServiceAccount
      name: sa-saagie-deploy
      namespace: <installationId>
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: traefik-ingress-controller
      namespace: <installationId>
    imagePullSecrets:
      - name: saagie-docker-config
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: traefik-ingress-cluster-binding
    subjects:
    - kind: ServiceAccount
      name: traefik-ingress-controller
      namespace: <installationId>
    roleRef:
      kind: ClusterRole
      name: traefik-ingress-cluster
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: traefik-ingress-cluster
    rules:
      - apiGroups:
          - ""
        resources:
          - services
          - endpoints
          - secrets
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - networking.k8s.io
        resources:
          - ingresses
          - ingressclasses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
        - networking.k8s.io
        resources:
        - ingresses/status
        verbs:
        - update
      - apiGroups:
        - traefik.containo.us
        resources:
        - middlewares
        - middlewaretcps
        - ingressroutes
        - traefikservices
        - ingressroutetcps
        - ingressrouteudps
        - tlsoptions
        - tlsstores
        - serverstransports
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - apiextensions.k8s.io
        resources:
        - customresourcedefinitions
        verbs:
        - create
      - apiGroups:
        - apiextensions.k8s.io
        resourceNames:
        - middlewares.traefik.containo.us
        - middlewaretcps.traefik.containo.us
        - ingressroutes.traefik.containo.us
        - traefikservices.traefik.containo.us
        - ingressroutetcps.traefik.containo.us
        - ingressrouteudps.traefik.containo.us
        - tlsoptions.traefik.containo.us
        - tlsstores.traefik.containo.us
        - serverstransports.traefik.containo.us
        resources:
        - customresourcedefinitions
        verbs:
        - get

    Where <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.

  2. Apply your requirements.yml file by running the following command line:

    kubectl apply -f requirements.yml

    The output of the command should look like the following:

    namespace/<installationId> created
    serviceaccount/sa-saagie-deploy created
    ...

    Where <installationId> is your installation ID, matching the prefix you have determined for your DNS entry.
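You can then confirm that the service accounts exist and that sa-saagie-deploy really holds cluster-admin rights (replace <installationId> as above):

```shell
# Both service accounts should be listed.
kubectl get serviceaccounts -n <installationId>

# Should print "yes": the account can perform any action cluster-wide.
kubectl auth can-i '*' '*' --as=system:serviceaccount:<installationId>:sa-saagie-deploy
```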

Applying or Installing Secret saagie-docker-config

Saagie Docker images are pulled from a private registry that requires credentials. The credentials should have been provided to you.

  1. Apply or install the secret:

    • Apply: If you receive the credentials in a Kubernetes secret file, apply the secret to your cluster by running the following kubectl command line:

      kubectl apply -n <installationId> -f saagie-docker-config.yaml (1)

      Where:

      1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
    • Install: If you receive a username and password, install the secret on your cluster by running the following kubectl command line:

      kubectl create secret docker-registry -n <installationId> saagie-docker-config \ (1)
        --docker-server=<registry server> \ (2)
        --docker-username=<username> \ (3)
        --docker-password=<password> (4)

      Where:

      1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
      2 <registry server> must be replaced with the Docker repository hosting Saagie images.
      3 <username> must be replaced with the username provided to you.
      4 <password> must be replaced with the password provided to you.
  2. Edit the default service account to reference the saagie-docker-config secret by running the following kubectl command line:

    kubectl patch serviceaccount -n <installationId> default -p '{"imagePullSecrets":[{"name" : "saagie-docker-config"}]}' (1)

    Where:

    1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
  3. Confirm that the secret is properly installed by running the following command line:

    kubectl get secret -n <installationId> (1)

    Where:

    1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.

    The output of the command should look like the following:

    NAME                   TYPE                             DATA   AGE
    saagie-docker-config   kubernetes.io/dockerconfigjson   1      2m43s
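If a later image pull fails, you can double-check the registry and username stored in the secret by decoding its payload:

```shell
# Decode the Docker config JSON held in the secret.
# Note: the credentials are printed in clear text.
kubectl get secret saagie-docker-config -n <installationId> \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```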

Installing Saagie in Offline Mode

If your Kubernetes cluster is not connected to the internet, you can install Saagie in offline mode. To do this, you must manage your own Docker registry, which will contain the images of the Saagie product and Saagie technologies.

This section guides you through uploading the resources to your registry and installing the repository in your cluster.

Saagie gives you the archives of Docker images needed to run your platform, as well as the technologies.

Uploading Docker Images

Before you begin:

To upload the Docker images to your registry, make sure you meet all the following requirements. You must have:

  • A machine with access to your Docker registry.

  • The tar archives that are provided by Saagie and that contain the Saagie product and technologies.

  • The Skopeo command line tool installed on your machine. For more information, see the Git repository dedicated to Skopeo.

  • The credentials to push the images into the registry, if any.

  1. Run the following command line to decompress the archive:

    • Uploading Saagie Product Archive

    • Uploading Saagie Technologies Archive

    tar xvf <product-tar-archive> (1)

    Where:

    1 <product-tar-archive> must be replaced with the file name of the Saagie product archive provided by Saagie.
    tar xvf <technologies-tar-archive> (1)

    Where:

    1 <technologies-tar-archive> must be replaced with the file name of the Saagie technologies archive provided by Saagie.
  2. OPTIONAL: If your registry requires authentication, configure the user and password used to connect to it with skopeo login. For more information, see the Git repository dedicated to Skopeo.

  3. Run the following command line in the decompressed archive to start the image upload:

    ./pushall.sh <registry> (1)

    Where:

    1 <registry> is the hostname of your Docker registry.
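If a single image fails to upload, you can retry it manually with skopeo rather than rerunning the whole script (the image name and tag below are placeholders):

```shell
# Copy one image from its tar archive straight into your registry.
skopeo copy docker-archive:saagie-example-image.tar \
  docker://<registry>/saagie/example-image:1.0

# Confirm the tag is now present in the registry.
skopeo list-tags docker://<registry>/saagie/example-image
```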

Installing the Technology Repository

The repository containing your technologies must be installed manually in your cluster.

For more information on adding technologies, see our SDK documentation.
  1. Copy the path to the technologies.zip file that contains your technologies.

  2. Run the following saagiectl command line to install the repository in your cluster:

    ./bin/saagiectl upload technologies --file <technologies-file> (1)

    Where:

    1 <technologies-file> must be replaced with the path to your technologies.zip file.

Setting Up SMTP (Simple Mail Transfer Protocol) Requirements

An SMTP server is required to send, receive, and relay outgoing mail between your Saagie platform and users' email addresses. For this reason, Saagie must have access to your SMTP server and be compatible with the following configurations:

  • SMTP authentication can be anonymous or required.

  • SMTP transport can be SMTP or SMTPS.

  • You must have a valid SSL certificate.

Once configured, you will be able to use your user email address to receive status alerts or change and reset the password associated with your Saagie account.
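Before installation, you can verify from the cluster network that the SMTP server answers and presents its certificate. The host and port below are placeholders; for plain SMTP on port 25, drop the -starttls option:

```shell
# Open a TLS-upgraded SMTP session; the server banner and
# certificate chain are printed for inspection.
openssl s_client -connect smtp.example.com:587 -starttls smtp
```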

Deploying Your SSL Certificate

Use this tutorial to deploy your SSL certificate to your Kubernetes cluster.

Before you begin:

Make sure your SSL certificate is valid by checking the following constraints:

  • The certificate’s validity date must be correct.

  • The certificate must include at least the Saagie product URL.

  • The KeyUsage attribute must include the digitalSignature and keyEncipherment elements.
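Each of these constraints can be checked against the certificate file itself with openssl (the -ext option requires OpenSSL 1.1.1 or later):

```shell
# Validity period: notBefore/notAfter must bracket today's date.
openssl x509 -in cert.pem -noout -dates

# Covered names: the Saagie product URL must appear in the subject or SANs.
openssl x509 -in cert.pem -noout -subject -ext subjectAltName

# Key usage: the output must list Digital Signature and Key Encipherment.
openssl x509 -in cert.pem -noout -ext keyUsage
```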

  1. Open your preferred terminal.

  2. To deploy your SSL certificate, run the following command line:

    kubectl create secret tls saagie-common-tls --cert=cert.pem --key=cert.key -n <installationId> --dry-run=client -o yaml | kubectl apply -f - (1)

    Where:

    1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
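You can then confirm that the certificate landed in the cluster:

```shell
# The secret should have type kubernetes.io/tls with 2 data entries (tls.crt, tls.key).
kubectl get secret saagie-common-tls -n <installationId>
```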