Customizing a Vanilla Kubernetes Cluster

Because operations teams may need to adjust a Vanilla Kubernetes cluster after the installation process is complete, you can customize parts of its configuration.

These examples cover advanced use of Kubernetes clusters. They assume a strong understanding of Kubernetes.

Customizing Persistent Storage

To customize your persistent storage, you must add information to the existing storage.yml file that was created when the storage classes were set up.

Examples

  • Ceph

  • NFS

Saagie's SRE team uses Ceph with the provisioner ceph.com/rbd. Here is an example of their configuration. Yours will depend on your storage technology and provisioner.

To use Ceph, add the following configuration to your existing storage.yml file:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: common-storageclass
provisioner: ceph.com/rbd
allowVolumeExpansion: true
parameters:
  monitors: <array of ips> (1)
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: kube-system
  pool: common
  userId: common
  userSecretName: ceph-secret-common
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: <customer name>-storageclass (2)
provisioner: ceph.com/rbd
allowVolumeExpansion: true
parameters:
  monitors: <array of ips>
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: kube-system
  pool: <pool id> (3)
  userId: <user id> (4)
  userSecretName: ceph-secret-<customer name>
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
---
apiVersion: v1
data:
  key: <key>
kind: Secret
metadata:
  name: ceph-secret-<customer name>
  namespace: saagie-common
type: ceph.com/rbd
---
apiVersion: v1
data:
  key: <key> (5)
kind: Secret
metadata:
  name: ceph-secret-common
  namespace: saagie-common
type: ceph.com/rbd

Where:

1 <array of ips> is the comma-separated list of Ceph monitor addresses provided by Saagie's SRE team. For example: 192.168.50.100:6789,192.168.50.101:6789,192.168.50.102:6789,192.168.50.110:6789,192.168.50.111:6789,192.168.50.112:6789
2 <customer name> is the platform URL prefix given during installation.
3 <pool id> is provided by Saagie's SRE team.
4 <user id> is provided by Saagie's SRE team.
5 <key> is provided by Saagie's SRE team. As with any Kubernetes Secret, the value under data must be base64-encoded.
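
After updating storage.yml, apply it with kubectl apply -f storage.yml. To check that the new storage class provisions volumes correctly, you can create a test PersistentVolumeClaim against it. The following is a minimal sketch, assuming a hypothetical customer name of acme; the claim name, namespace, and requested size are illustrative only:

# Test claim bound to the customer-specific Ceph storage class.
# "acme" stands in for the <customer name> chosen during installation.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: acme-test-claim
  namespace: saagie-common
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: acme-storageclass
  resources:
    requests:
      storage: 10Gi

If the claim reaches the Bound state, the storage class and its Ceph secrets are configured correctly; the test claim can then be deleted.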

To use NFS, add the following configuration to your existing storage.yml file:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfsprovisioner/ifs
parameters:
  archiveOnDelete: "false"
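
As with Ceph, apply the updated storage.yml with kubectl apply -f storage.yml. A PersistentVolumeClaim can then request the managed-nfs-storage class. The following is a minimal sketch; the claim name, access mode, and requested size are illustrative only:

# Test claim bound to the NFS-backed storage class defined above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi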