Using Another Service Platform
Creating or Configuring Your Cluster
Before creating a Vanilla Kubernetes cluster, make sure you meet the following requirements:
Requirement | Details
---|---
Network add-on (CNI, Container Network Interface) | Choose Calico in the section Installing a Pod network add-on.
Volume Management System | A storage provider is required to create persistent storage. For more information, see Creating and Configuring Storage Classes for Your Saagie Platform.
LoadBalancer | We recommend using a LoadBalancer tool to provide access to the Saagie platform.
- To create your Vanilla Kubernetes cluster, follow the official Kubernetes tutorial, Creating a cluster with kubeadm.
Labeling Nodes for Isolation Mode
To isolate your workload, you must add the correct label to each node to dedicate it to a platform. There are two types of nodes:
Common node
Common node(s) allow you to isolate the Saagie installation from the rest of your workload. A common node must be labeled as follows:
kubectl label nodes <your-node-name> io.saagie/type=common
kubectl label nodes <your-node-name> io.saagie/installationId=<installationId> (1)
Where:
1 | <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry. |
If you do not have a common node labeled as such, Saagie will not start.
Platform node
Platform node(s) allow you to separate the workload between your platforms. You can have as many labeled nodes as required. A platform node must be labeled as follows:
kubectl label nodes <your-node-name> io.saagie/type=platform
kubectl label nodes <your-node-name> io.saagie/installationId=<installationId> (1)
kubectl label nodes <your-node-name> io.saagie/platform-assignable=<platformId> (2)
Where:
1 | <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry. |
2 | <platformId> must be replaced with the ID of the platform, which is determined during the configuration of your platform. Its value is defined according to the number of platforms and their order, starting from one. You can therefore predict it.
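The labeling commands above can be wrapped in a small helper script when you have several platform nodes to prepare. This is a sketch, not part of the Saagie tooling: the node names (`worker-1`, `worker-2`) and installation ID are hypothetical placeholders, and the `KUBECTL` variable only exists so the commands can be dry-run with `KUBECTL=echo` before touching the cluster.

```shell
#!/bin/sh
# Label one or more nodes as Saagie platform nodes.
# Usage: label_platform_nodes <installationId> <platformId> <node>...
# Set KUBECTL=echo to print the commands instead of running them.
KUBECTL="${KUBECTL:-kubectl}"

label_platform_nodes() {
  installation_id="$1"
  platform_id="$2"
  shift 2
  for node in "$@"; do
    "$KUBECTL" label nodes "$node" io.saagie/type=platform
    "$KUBECTL" label nodes "$node" io.saagie/installationId="$installation_id"
    "$KUBECTL" label nodes "$node" io.saagie/platform-assignable="$platform_id"
  done
}

# Example with hypothetical node names:
# label_platform_nodes customer1 1 worker-1 worker-2
```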
Verifying Your Kubernetes Cluster
- Run the following command line to verify that you have access to your Kubernetes cluster:

kubectl get nodes

All nodes must have the status Ready.
Creating and Configuring Storage Classes for Your Saagie Platform
- Create the storage.yml file for your Service platform cluster. Here are some examples:

The following examples cover advanced use of Kubernetes clusters and assume a strong understanding of them.

The following sample storage.yml file can be customized according to your needs. For more information, see the Kubernetes documentation.
Saagie’s SRE team uses Ceph with the provisioner ceph.com/rbd. Here is an example of their configuration. Yours will depend on your storage technology and provisioner.

To use Ceph, add the following configuration to your storage.yml file:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: common-storageclass
provisioner: ceph.com/rbd
allowVolumeExpansion: true
parameters:
  monitors: <array of ips> (1)
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: kube-system
  pool: common
  userId: common
  userSecretName: ceph-secret-common
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <customer name>-storageclass (2)
provisioner: ceph.com/rbd
allowVolumeExpansion: true
parameters:
  monitors: <array of ips>
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: kube-system
  pool: <pool id> (3)
  userId: <user id> (4)
  userSecretName: ceph-secret-<customer name>
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
---
apiVersion: v1
data:
  key: <key>
kind: Secret
metadata:
  name: ceph-secret-<customer name>
  namespace: <installationId> (5)
type: ceph.com/rbd
---
apiVersion: v1
data:
  key: <key> (6)
kind: Secret
metadata:
  name: ceph-secret-common
  namespace: <installationId> (5)
type: ceph.com/rbd
Where:
1 | <array of ips> for monitors is provided by Saagie’s SRE team. For example, 192.168.50.100:6789,192.168.50.101:6789,192.168.50.102:6789,192.168.50.110:6789,192.168.50.111:6789,192.168.50.112:6789
2 | <customer name> is the platform URL installationId given during installation.
3 | <pool id> is provided by Saagie’s SRE team.
4 | <user id> is provided by Saagie’s SRE team.
5 | <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry.
6 | <key> is provided by Saagie’s SRE team.

To use NFS, add the following configuration to your storage.yml file:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: common-storageclass
provisioner: nfsprovisioner/ifs
parameters:
  archiveOnDelete: "false"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <installationId>-storageclass (1)
provisioner: nfsprovisioner/ifs
parameters:
  archiveOnDelete: "false"
Where:
1 | <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.

To use OVH, add the following configuration to your storage.yml file:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: common-storageclass
parameters:
  availability: nova
  type: classic
provisioner: cinder.csi.openstack.org
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <installationId>-storageclass (1)
parameters:
  availability: nova
  type: classic
provisioner: cinder.csi.openstack.org
Where:
1 | <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
- To store app data and job data on different provisioners, include the following lines in the same storage.yml file:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <installationId>-app-storageclass (1)
parameters: (2)
provisioner: (3)
Where:
1 | <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
2 | The parameters value must contain the parameters for app data.
3 | The provisioner value must indicate your second provisioner used to store app data.
- Apply the storage.yml file by running the following command line:

kubectl apply -f storage.yml
- Confirm that the storage classes are available by running the following command line:

kubectl get sc
Because operations teams may need to adjust the configuration of a Vanilla Kubernetes cluster after installation, you can still customize parts of your configuration later by adding information to the storage.yml file you created.
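As an illustration of such a post-install adjustment, a storage class entry in storage.yml could later be extended to allow resizing of existing volumes via the standard allowVolumeExpansion field. This is a hypothetical sketch, not a Saagie-prescribed change; <provisioner> is a placeholder for whichever provisioner your cluster uses:

# Hypothetical later adjustment in storage.yml: permit volume expansion
# on the common storage class. <provisioner> is a placeholder.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: common-storageclass
provisioner: <provisioner>
allowVolumeExpansion: true

After editing, re-apply the file with kubectl apply -f storage.yml. Note that most StorageClass fields (provisioner, parameters) are immutable once created; allowVolumeExpansion is one of the few that can be changed in place.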
Creating the requirements.yml File
- Create your requirements.yml file with the code as follows:

---
apiVersion: v1
kind: Namespace
metadata:
  name: <installationId>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-saagie-deploy
  namespace: <installationId>
automountServiceAccountToken: true
imagePullSecrets:
  - name: saagie-docker-config
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sa-saagie-deploy-crbinding
  namespace: <installationId>
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: sa-saagie-deploy
    namespace: <installationId>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: <installationId>
imagePullSecrets:
  - name: saagie-docker-config
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-cluster-binding
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: <installationId>
roleRef:
  kind: ClusterRole
  name: traefik-ingress-cluster
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-cluster
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - middlewaretcps
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - create
  - apiGroups:
      - apiextensions.k8s.io
    resourceNames:
      - middlewares.traefik.containo.us
      - middlewaretcps.traefik.containo.us
      - ingressroutes.traefik.containo.us
      - traefikservices.traefik.containo.us
      - ingressroutetcps.traefik.containo.us
      - ingressrouteudps.traefik.containo.us
      - tlsoptions.traefik.containo.us
      - tlsstores.traefik.containo.us
      - serverstransports.traefik.containo.us
    resources:
      - customresourcedefinitions
    verbs:
      - get
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: 00-saagie-common-psp
spec:
  allowPrivilegeEscalation: false
  allowedHostPaths:
    - pathPrefix: /etc/machine-id
      readOnly: true
    - pathPrefix: /etc/fluent-bit
      readOnly: false
    - pathPrefix: /var/log
      readOnly: true
    - pathPrefix: /var/lib/docker/containers
      readOnly: true
    - pathPrefix: /data/docker/containers
      readOnly: true
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - configMap
    - emptyDir
    - secret
    - persistentVolumeClaim
    - hostPath
    - projected
    - downwardAPI
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: 00-saagie-project-psp
spec:
  allowPrivilegeEscalation: true
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - configMap
    - emptyDir
    - secret
    - persistentVolumeClaim
    - projected
    - downwardAPI
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: psp:saagie-common:saagie-common-cluster-psp
rules:
  - apiGroups:
      - policy
    resourceNames:
      - 00-saagie-common-psp
    resources:
      - podsecuritypolicies
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: psp:saagie-common:saagie-project-cluster-psp
rules:
  - apiGroups:
      - policy
    resourceNames:
      - 00-saagie-common-psp
    resources:
      - podsecuritypolicies
    verbs:
      - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp:saagie-common:saagie-deploy-psp-crbinding
  namespace: <installationId>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp:saagie-common:saagie-common-cluster-psp
subjects:
  - kind: Group
    name: system:serviceaccounts:saagie-common
Where:
- <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry.
- Apply your requirements.yml file by running the following command line:

kubectl apply -f requirements.yml

The output of the command should look like the following:

namespace/<installationId> created
serviceaccount/sa-saagie-deploy created
...
rolebinding.rbac.authorization.k8s.io/psp:saagie-common:saagie-deploy-psp-crbinding created
Where:
- <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry.
Applying or Installing Secret saagie-docker-config
-
Apply or install the secret:
- Apply: If you receive the credentials in a Kubernetes secret file, apply the secret to your cluster by running the following kubectl command line:

kubectl apply -n <installationId> -f saagie-docker-config.yaml (1)
Where:
1 | <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry.

- Install: If you receive a username and password, install the secret on your cluster by running the following kubectl command line:

kubectl create secret docker-registry -n <installationId> saagie-docker-config \ (1)
  --docker-server=<registry server> \ (2)
  --docker-username=<username> \ (3)
  --docker-password=<password> (4)
Where:
1 | <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry.
2 | <registry server> must be replaced with the Docker repository hosting Saagie images.
3 | <username> must be replaced with the username provided to you.
4 | <password> must be replaced with the password provided to you.
- Edit the default service account to reference the saagie-docker-config secret by running the following kubectl command line:

kubectl patch serviceaccount -n <installationId> default -p '{"imagePullSecrets":[{"name" : "saagie-docker-config"}]}'
Where:
- <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry.
- Confirm that the secret is properly installed by running the following command line:

kubectl get secret -n <installationId>
Where:
- <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry.
The output of the command should look like the following:
NAME                   TYPE                             DATA   AGE
saagie-docker-config   kubernetes.io/dockerconfigjson   1      2m43s
-
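If you also want to confirm that the default service account now references the secret, the check can be sketched with a jsonpath query. This helper is illustrative, not part of the Saagie tooling; the KUBECTL variable only exists so the command can be dry-run with KUBECTL=echo.

```shell
#!/bin/sh
# Print the image pull secrets attached to the default service account
# in the given namespace. Usage: pull_secrets <installationId>
# Set KUBECTL=echo to print the command instead of running it.
KUBECTL="${KUBECTL:-kubectl}"

pull_secrets() {
  "$KUBECTL" get serviceaccount default -n "$1" \
    -o jsonpath='{.imagePullSecrets[*].name}'
}

# Expected to print: saagie-docker-config
# pull_secrets <installationId>
```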
Installing Saagie in Offline Mode
Uploading Docker Images
To upload the Docker images to your registry, make sure you meet all the following prerequisites:
- A machine with access to your Docker registry.
- The tar archives provided by Saagie, which include the Saagie product and technologies.
- The Skopeo command line tool installed on your machine. For more information, you can refer to the Git repository dedicated to Skopeo.
- The credentials to push the images into the registry (if any).
- Run the following command line to decompress the archive:

tar xvf <product-tar-archive> (1)

Where:

1 | <product-tar-archive> must be replaced with the file name of the Saagie product archive provided by Saagie.

- OPTIONAL: If your registry requires authentication, configure the user and password to connect to it using skopeo login. For more information, you can refer to the Git repository dedicated to Skopeo.
Run the following command line in the decompressed archive to start the image upload:
./pushall.sh <registry> (1)
Where:
1 <registry>
is the hostname of your Docker registry.
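The pushall.sh script itself ships in the archive, but its core idea is easy to picture: Skopeo can copy a saved image archive straight into a remote registry. Below is a simplified, hypothetical equivalent, not the actual Saagie script; the image naming convention is an assumption, and the SKOPEO variable only exists so the loop can be dry-run with SKOPEO=echo.

```shell
#!/bin/sh
# Hypothetical sketch of a pushall.sh-style upload: copy every
# docker-archive tar in the current directory into <registry>.
# Usage: push_all <registry>
# Set SKOPEO=echo to print the commands instead of running them.
SKOPEO="${SKOPEO:-skopeo}"

push_all() {
  registry="$1"
  for archive in *.tar; do
    # Assumption: the file name encodes the image name.
    image="${archive%.tar}"
    "$SKOPEO" copy "docker-archive:${archive}" \
      "docker://${registry}/${image}:latest"
  done
}

# push_all registry.example.com
```

The docker-archive: and docker:// transports are standard Skopeo transports, which is what lets the upload run from any machine with registry access, without a Docker daemon.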
The process is the same as for uploading Saagie product archives.
- Run the following command line to decompress the archive:

tar xvf <technologies-tar-archive> (1)

Where:

1 | <technologies-tar-archive> must be replaced with the file name of the Saagie technologies archive provided by Saagie.

- OPTIONAL: If your registry requires authentication, configure the user and password to connect to it using skopeo login. For more information, you can refer to the Git repository dedicated to Skopeo. If you configured authentication when you uploaded the first tar archive, you will not need to configure it again.

- Run the following command line in the decompressed archive to start the image upload:

./pushall.sh <registry> (1)

Where:

1 | <registry> is the hostname of your Docker registry.
Installing Technology Repository
For more information on adding technologies, see our SDK documentation.
- Copy the path to the technologies.zip file that contains your technologies.

- Run the following saagiectl command line to install the repository in your cluster:

./bin/saagiectl upload technologies --file <technologies-file> (1)

Where:

1 | <technologies-file> must be replaced with the path to your technologies.zip file.
Setting Up SMTP (Simple Mail Transfer Protocol) Requirements
An SMTP server is mandatory to send, receive, and relay outgoing mail between your Saagie platform and users' email addresses. Saagie must therefore have access to your SMTP server, which must be compatible with the following configurations:
- SMTP authentication can be anonymous or required.
- SMTP transport can be SMTP or SMTPS.
- You must have a valid SSL certificate.
Once configured, you will be able to use your user email address to receive status alerts or change/reset the password associated with your Saagie account.
Deploying and Updating Your SSL Certificate
Make sure that your SSL certificate is valid by checking the following constraints:
- The certificate’s validity date must be correct.
- The certificate must include at least the Saagie product URL.
- The KeyUsage attribute must include the digitalSignature and keyEncipherment elements.
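The constraints above can be checked locally with OpenSSL before deploying. This is a sketch under the assumption that your certificate file is named cert.pem and that OpenSSL 1.1.1 or later is installed (the -ext option requires it).

```shell
#!/bin/sh
# Check an SSL certificate against the constraints above.
# Usage: check_cert <cert.pem>
check_cert() {
  # Validity date: fails if the certificate expires within the next day.
  openssl x509 -in "$1" -noout -checkend 86400 || return 1
  # Covered names: the Saagie product URL should appear here.
  openssl x509 -in "$1" -noout -ext subjectAltName
  # Key usage: must list Digital Signature and Key Encipherment.
  openssl x509 -in "$1" -noout -ext keyUsage
}

# check_cert cert.pem
```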
- Open your preferred terminal.

- To deploy or update your SSL certificate, run the following command line:

kubectl create secret tls saagie-common-tls --cert=cert.pem --key=cert.key -n <installationId> --dry-run=client -o yaml | kubectl apply -f -
Where:
- <installationId> must be replaced with your installation ID, which must match the prefix you have determined for your DNS entry.