Using Another Service Platform

Use this tutorial to create a Kubernetes cluster compatible with Saagie without using a Kubernetes as a Service (KaaS) platform.

Creating or Configuring Your Cluster

If you are using an on-premises cloud or are not using a KaaS platform, you will need a Vanilla Kubernetes cluster.

Before you begin:

Before creating a Vanilla Kubernetes cluster, make sure you meet the following requirements:

Table 1. Kubernetes Vanilla Cluster Requirements for Saagie

Network add-on (CNI, Container Network Interface)

Choose Calico in the section Installing a Pod network add-on.

Saagie is only tested and certified with Calico. Other CNIs may work, but they have not been tested.

Volume Management System

A storage provider is required to create persistent storage. For more information, see Creating and Configuring Storage Classes for Your Saagie Platform.

LoadBalancer

We recommend using a LoadBalancer tool to provide access to the Saagie platform.

Other methods, such as using NodePort and a reverse proxy, are possible but not recommended as they have not been tested extensively. Parts of your Saagie platform may not work as expected without additional configuration and testing.
  1. To create your Vanilla Kubernetes cluster, follow the official Kubernetes tutorial, Creating a cluster with kubeadm.

Verifying Your Kubernetes Cluster

  1. Run the following command line to verify that you have access to your Kubernetes cluster:

    kubectl get nodes

    All nodes must have the status Ready.
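If you want to script this verification, the node conditions returned by `kubectl get nodes -o json` can be parsed directly. The sketch below is illustrative, not part of the Saagie installer, and assumes only the standard Kubernetes node-condition schema:

```python
import json

def all_nodes_ready(nodes_json: str) -> bool:
    """Return True when every node reports a Ready condition with status "True"."""
    nodes = json.loads(nodes_json)["items"]
    for node in nodes:
        conditions = {c["type"]: c["status"] for c in node["status"]["conditions"]}
        if conditions.get("Ready") != "True":
            return False
    return bool(nodes)  # an empty cluster is not "ready"

# Sample shaped like `kubectl get nodes -o json` output, trimmed to the fields used.
sample = json.dumps({"items": [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
]})
print(all_nodes_ready(sample))  # True
```

In practice you would pipe the live output in, for example `kubectl get nodes -o json | python check_ready.py` with the script reading stdin.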

Creating and Configuring Storage Classes for Your Saagie Platform

Create storage classes to store data in a non-volatile device during and after the execution of your platform.

Storage classes are stored in a file named storage.yml which contains the configuration for your storageClass resources:

  • common-storageclass: Used to store Saagie data, such as databases.

  • <installationId>-storageclass: Used to store job data, such as uploaded artifacts.

  • <installationId>-app-storageclass: Optional storageClass used to store app data and job data on a different provisioner.

    The <installationId> value is the same value you chose when you determined your DNS entry at the beginning of the installation process. It must be a string of up to 12 lowercase alphanumeric characters with no special characters.
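The constraint above (at most 12 lowercase alphanumeric characters, no special characters) can be checked programmatically before you commit to a value. This is a convenience sketch, not part of the installer:

```python
import re

def is_valid_installation_id(value: str) -> bool:
    """Accept 1 to 12 lowercase letters or digits, nothing else."""
    return re.fullmatch(r"[a-z0-9]{1,12}", value) is not None

print(is_valid_installation_id("saagie01"))     # True
print(is_valid_installation_id("Saagie-Prod"))  # False: uppercase and hyphen
```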
  1. Create the storage.yml file for your Service platform cluster. Here are some examples:

    • The following examples cover advanced use of Kubernetes clusters and assume a strong understanding of them.

    • The following sample storage.yml file can be customized according to your needs. For more information, see the Kubernetes documentation.

    • Ceph

    • NFS

    • OVH

    Saagie’s SRE team uses Ceph with the provisioner ceph.com/rbd. Here is an example of their configuration. Yours will depend on your storage technology and provisioner.

    To use Ceph, add the following configuration to your storage.yml file:

    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: common-storageclass
    provisioner: ceph.com/rbd
    allowVolumeExpansion: true
    parameters:
      monitors: <array of ips> (1)
      adminId: admin
      adminSecretName: ceph-secret-admin
      adminSecretNamespace: kube-system
      pool: common
      userId: common
      userSecretName: ceph-secret-common
      fsType: ext4
      imageFormat: "2"
      imageFeatures: "layering"
    ---
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: <customer name>-storageclass (2)
    provisioner: ceph.com/rbd
    allowVolumeExpansion: true
    parameters:
      monitors: <array of ips>
      adminId: admin
      adminSecretName: ceph-secret-admin
      adminSecretNamespace: kube-system
      pool: <pool id> (3)
      userId: <user id> (4)
      userSecretName: ceph-secret-<customer name>
      fsType: ext4
      imageFormat: "2"
      imageFeatures: "layering"
    ---
    apiVersion: v1
    data:
      key: <key>
    kind: Secret
    metadata:
      name: ceph-secret-<customer name>
      namespace: <installationId> (5)
    type: ceph.com/rbd
    ---
    apiVersion: v1
    data:
      key: <key> (6)
    kind: Secret
    metadata:
      name: ceph-secret-common
      namespace: <installationId> (5)
    type: ceph.com/rbd

    Where:

    1 <array of ips> for monitors is provided by Saagie’s SRE team. For example, 192.168.50.100:6789,192.168.50.101:6789,192.168.50.102:6789,192.168.50.110:6789,192.168.50.111:6789,192.168.50.112:6789
    2 <customer name> is the <installationId> chosen for your platform URL during installation.
    3 <pool id> is provided by Saagie’s SRE team.
    4 <user id> is provided by Saagie’s SRE team.
    5 <installationId> must be replaced with your installation ID.
    6 <key> is provided by Saagie’s SRE team.
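Note that, as with any Kubernetes Secret, the values under data (including key) must be base64 encoded. If the key you receive from the SRE team is not already encoded, a minimal sketch for producing the encoded form:

```python
import base64

def encode_secret_value(raw: str) -> str:
    """Kubernetes Secret `data` values must be base64 encoded."""
    return base64.b64encode(raw.encode()).decode()

# Paste the result into the `key:` field of the Secret manifest.
print(encode_secret_value("my-raw-ceph-key"))
```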

    To use NFS, add the following configuration to your storage.yml file:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage
    provisioner: nfsprovisioner/ifs
    parameters:
      archiveOnDelete: "false"

    To use OVH, add the following configuration to your storage.yml file:

    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: common-storageclass
    parameters:
      availability: nova
      type: classic
    provisioner: cinder.csi.openstack.org
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <installationId>-storageclass (1)
    parameters:
      availability: nova
      type: classic
    provisioner: cinder.csi.openstack.org

    Where:

    1 <installationId> must be replaced with the same value determined for your DNS entry at the beginning of the installation process.
  2. To store app data and job data on different provisioners, include the following lines in the same storage.yml file:

    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: <installationId>-app-storageclass (1)
    parameters: (2)
    
    provisioner: (3)

    Where:

    1 <installationId> must be replaced with the same value determined for your DNS entry at the beginning of the installation process.
    2 The parameters value must contain the parameters for app data.
    3 The provisioner value must indicate your second provisioner used to store app data.
  3. Apply the storage.yml file by running the following command line:

    kubectl apply -f storage.yml
  4. Confirm that the storage classes are available by running the following command line:

    kubectl get sc
Operations teams may need to adjust the configuration of a Vanilla Kubernetes cluster after the installation process is complete. You can customize parts of your configuration at any time by adding information to the storage.yml file you created.

Creating the requirements.yml File

All Saagie deployments need the same requirements.yml file regardless of your cloud provider. The requirements.yml file creates two service accounts in the <installationId> namespace:

  • sa-saagie-deploy with the cluster-admin role

  • traefik-ingress-controller with its related ClusterRole and ClusterRoleBinding

  1. Create your requirements.yml file with the code as follows:

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: <installationId>
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sa-saagie-deploy
      namespace: <installationId>
    automountServiceAccountToken: true
    imagePullSecrets:
      - name: saagie-docker-config
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: sa-saagie-deploy-crbinding
      namespace: <installationId>
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: ServiceAccount
      name: sa-saagie-deploy
      namespace: <installationId>
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: traefik-ingress-controller
      namespace: <installationId>
    imagePullSecrets:
      - name: saagie-docker-config
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: traefik-ingress-cluster-binding
    subjects:
    - kind: ServiceAccount
      name: traefik-ingress-controller
      namespace: <installationId>
    roleRef:
      kind: ClusterRole
      name: traefik-ingress-cluster
      apiGroup: rbac.authorization.k8s.io
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: traefik-ingress-cluster
    rules:
      - apiGroups:
          - ""
        resources:
          - services
          - endpoints
          - secrets
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - extensions
          - networking.k8s.io
        resources:
          - ingresses
        verbs:
          - get
          - list
          - watch
      - apiGroups:
        - extensions
        - networking.k8s.io
        resources:
        - ingresses/status
        verbs:
        - update
      - apiGroups:
        - traefik.containo.us
        resources:
        - middlewares
        - ingressroutes
        - traefikservices
        - ingressroutetcps
        - ingressrouteudps
        - tlsoptions
        - tlsstores
        verbs:
        - get
        - list
        - watch
      - apiGroups:
        - apiextensions.k8s.io
        resources:
        - customresourcedefinitions
        verbs:
        - create
      - apiGroups:
        - apiextensions.k8s.io
        resourceNames:
        - middlewares.traefik.containo.us
        - ingressroutes.traefik.containo.us
        - traefikservices.traefik.containo.us
        - ingressroutetcps.traefik.containo.us
        - ingressrouteudps.traefik.containo.us
        - tlsoptions.traefik.containo.us
        - tlsstores.traefik.containo.us
        resources:
        - customresourcedefinitions
        verbs:
        - get
    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/cluster-service: "true"
      name: 00-saagie-common-psp
    spec:
      allowPrivilegeEscalation: false
      allowedHostPaths:
        - pathPrefix: /etc/machine-id
          readOnly: true
        - pathPrefix: /etc/fluent-bit
          readOnly: false
        - pathPrefix: /var/log
          readOnly: true
        - pathPrefix: /var/lib/docker/containers
          readOnly: true
        - pathPrefix: /data/docker/containers
          readOnly: true
      fsGroup:
        rule: RunAsAny
      runAsUser:
        rule: RunAsAny
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      volumes:
        - configMap
        - emptyDir
        - secret
        - persistentVolumeClaim
        - hostPath
        - projected
        - downwardAPI
    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/cluster-service: "true"
      name: 00-saagie-project-psp
    spec:
      allowPrivilegeEscalation: true
      fsGroup:
        rule: RunAsAny
      runAsUser:
        rule: RunAsAny
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      volumes:
        - configMap
        - emptyDir
        - secret
        - persistentVolumeClaim
        - projected
        - downwardAPI
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/cluster-service: "true"
      name: psp:saagie-common:saagie-common-cluster-psp
    rules:
      - apiGroups:
          - policy
        resourceNames:
          - 00-saagie-common-psp
        resources:
          - podsecuritypolicies
        verbs:
          - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/cluster-service: "true"
      name: psp:saagie-common:saagie-project-cluster-psp
    rules:
      - apiGroups:
          - policy
        resourceNames:
          - 00-saagie-common-psp
        resources:
          - podsecuritypolicies
        verbs:
          - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: psp:saagie-common:saagie-deploy-psp-crbinding
      namespace: <installationId>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: psp:saagie-common:saagie-common-cluster-psp
    subjects:
      - kind: Group
        name: system:serviceaccounts:saagie-common

    Where:

    • <installationId> must be replaced with your installation ID.
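Because <installationId> appears many times in the manifest, you may prefer to substitute it programmatically rather than by hand. A minimal sketch, with an illustrative template fragment (the file names and helper are not part of the official procedure):

```python
def render_manifest(template: str, installation_id: str) -> str:
    """Replace every <installationId> placeholder with the real value."""
    return template.replace("<installationId>", installation_id)

template = (
    "metadata:\n"
    "  name: sa-saagie-deploy\n"
    "  namespace: <installationId>\n"
)
print(render_manifest(template, "saagie01"))
```

You would apply the same substitution to the full requirements.yml before running kubectl apply.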

  2. Apply your requirements.yml file by running the following command line:

    kubectl apply -f requirements.yml

    The output of the command should look like the following:

    namespace/<installationId> created
    serviceaccount/sa-saagie-deploy created
    ...
    rolebinding.rbac.authorization.k8s.io/psp:saagie-common:saagie-deploy-psp-crbinding created

    Where:

    • <installationId> must be replaced with your installation ID.

Applying or Installing Secret saagie-docker-config

Saagie Docker images are pulled from a private registry that requires credentials. The credentials should have been provided to you.

  1. Apply or install the secret:

    • Apply: If you receive the credentials in a Kubernetes secret file, apply the secret to your cluster by running the following kubectl command line:

      kubectl apply -n <installationId> -f saagie-docker-config.yaml (1)

      Where:

      1 <installationId> must be replaced with your installation ID.
    • Install: If you receive a username and password, install the secret on your cluster by running the following kubectl command line:

      kubectl create secret docker-registry -n <installationId> saagie-docker-config \ (1)
        --docker-server=<registry server> \ (2)
        --docker-username=<username> \ (3)
        --docker-password=<password> (4)

      Where:

      1 <installationId> must be replaced with your installation ID.
      2 <registry server> must be replaced with the Docker repository hosting Saagie images.
      3 <username> must be replaced with the username provided to you.
      4 <password> must be replaced with the password provided to you.
  2. Edit the default service account to reference the saagie-docker-config secret by running the following kubectl command line:

    kubectl patch serviceaccount -n <installationId> default -p '{"imagePullSecrets":[{"name" : "saagie-docker-config"}]}'

    Where:

    • <installationId> must be replaced with your installation ID.
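The -p argument to kubectl patch is a JSON payload; if kubectl rejects it, broken shell quoting is a common cause. As a quick sanity check, you can confirm the payload parses as JSON before running the command (an illustrative sketch, not part of the procedure):

```python
import json

patch = '{"imagePullSecrets":[{"name" : "saagie-docker-config"}]}'

# json.loads raises ValueError on malformed payloads, so a clean parse
# means the quoting survived your shell.
parsed = json.loads(patch)
print(parsed["imagePullSecrets"][0]["name"])  # saagie-docker-config
```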

  3. Confirm that the secret is properly installed by running the following command line:

    kubectl get secret -n <installationId>

    Where:

    • <installationId> must be replaced with your installation ID.

    The output of the command should look like the following:

    NAME                   TYPE                             DATA   AGE
    saagie-docker-config   kubernetes.io/dockerconfigjson   1      2m43s

Installing Saagie in Offline Mode

You can install Saagie in offline mode if your Kubernetes cluster is not connected to the Internet.

To install Saagie in offline mode, you need to manage your own Docker registry containing images of the Saagie product as well as Saagie technologies.

This section guides you through uploading the resources to your registry and installing the repository in your cluster.

Saagie provides you with the archives of Docker images needed to run your platform, as well as the technologies.

Uploading Docker Images

Before you begin:

To upload the Docker images to your registry, make sure you meet all the following prerequisites:

  • A machine with access to your Docker registry.

  • The tar archives provided by Saagie, which include the Saagie product and technologies.

  • The Skopeo command line tool installed on your machine. For more information, you can refer to the Git repository dedicated to Skopeo.

  • The credentials to push the images into the registry (if any).

Uploading Saagie Product Archive

  1. Run the following command line to decompress the archive:

    tar xvf <product-tar-archive> (1)

    Where:

    1 <product-tar-archive> is the file name of the Saagie product archive provided by Saagie.
  2. OPTIONAL: If your registry requires authentication, configure the user and password used to connect to it with skopeo login. For more information, you can refer to the Git repository dedicated to Skopeo.

  3. Run the following command line in the decompressed archive to start the image upload:

    ./pushall.sh <registry> (1)

    Where:

    1 <registry> is the hostname of your Docker registry.
Uploading Saagie Technologies Archive

The process is the same as for uploading the Saagie product archive.
  1. Run the following command line to decompress the archive:

    tar xvf <technologies-tar-archive> (1)

    Where:

    1 <technologies-tar-archive> is the file name of the Saagie technologies archive provided by Saagie.
  2. OPTIONAL: If your registry requires authentication, configure the user and password used to connect to it with skopeo login. For more information, you can refer to the Git repository dedicated to Skopeo.

    If you configured authentication when you uploaded the first tar archive file, you will not need to configure it again.
  3. Run the following command line in the decompressed archive to start the image upload:

    ./pushall.sh <registry> (1)

    Where:

    1 <registry> is the hostname of your Docker registry.

Installing Technology Repository

The repository containing your technologies must be installed manually in your cluster.

For more information on adding technologies, see our SDK documentation.
  1. Copy the path to the technologies.zip file that contains your technologies.

  2. Run the following saagiectl command line to install the repository in your cluster:

    ./bin/saagiectl upload technologies --file <technologies-file> (1)

    Where:

    1 <technologies-file> must be replaced with the path to your technologies.zip file.

Setting Up SMTP (Simple Mail Transfer Protocol) Requirements

An SMTP server is mandatory to send, receive, and relay outgoing mail between your Saagie platform and users' email addresses. Saagie must therefore have access to your SMTP server, and the server must be compatible with the following configurations:

  • SMTP authentication can be anonymous or required.

  • SMTP transport can be SMTP or SMTPS.

  • You must have a valid SSL certificate.

Once configured, you will be able to use your user email address to receive status alerts or change/reset the password associated with your Saagie account.

Deploying and Updating Your SSL Certificate

Use this tutorial to deploy and update your SSL certificate to your Kubernetes cluster.

Before you begin:

Make sure that your SSL certificate is valid by checking the following constraints:

  • The certificate’s validity date must be correct.

  • The certificate must include at least the Saagie product URL.

  • The KeyUsage attribute must include the digitalSignature and keyEncipherment elements.

  1. Open your preferred terminal.

  2. To deploy (or update) your SSL certificate, run the following command line:

    kubectl create secret tls saagie-common-tls --cert=cert.pem --key=cert.key -n <installationId> --dry-run=client -o yaml | kubectl apply -f -

    Where:

    • <installationId> must be replaced with your installation ID.