How to configure a multi-cloud Kubernetes cluster with Scaleway Kosmos?

Scaleway's Kosmos multi-cloud solution lets Scaleway manage the control plane of your Kubernetes cluster for €100/month. You can rent a server from any cloud provider and add it to the cluster as a worker node. This is handy if you cannot find a server that suits your needs in Scaleway's instance catalogue.

In this post we will configure an external server to act as a worker in the multi-cloud cluster. We will also install OpenEBS to handle the creation of ReadWriteMany volumes on the node itself.

This setup comes with a few limitations:

  • Deleting a node also deletes the data stored on it, because volumes live on the nodes themselves and are not replicated
  • Pods must run on the same node as the volumes they are attached to (see the sketch after this list)
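
For example, to keep an application on the node that holds its data, you can pin the pod with a nodeSelector. This is only a minimal sketch: the node name my-external-node and the nginx image are assumptions to replace with your own values.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  # Pin the pod to the node that holds its volume (node name is an assumption)
  nodeSelector:
    kubernetes.io/hostname: my-external-node
  containers:
    - name: my-app
      image: nginx:1.25
EOF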

Create a multi-cloud cluster

Create a new multi-cloud cluster on the Scaleway console.
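
If you prefer the CLI over the console, the Scaleway CLI can also create the cluster. This is a sketch only: the name, Kubernetes version and region are placeholders, and the argument names may differ depending on your scw version (Kosmos clusters use the Kilo CNI).

# Create a Kosmos (multi-cloud) cluster; adjust name, version and region
scw k8s cluster create name=my-kosmos-cluster type=multicloud cni=kilo version=1.24.7 region=fr-par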

Create a multi-cloud pool in the cluster

Create a new multi-cloud pool on the Scaleway console.
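
With the CLI, the pool creation would look roughly like the following; the pool name, size and region are assumptions, and external nodes are attached manually in the next step.

# Multi-cloud pools use the external node type; size=0 since nodes are added by hand
scw k8s pool create cluster-id=<your-cluster-id> name=multicloud-pool node-type=external size=0 region=fr-par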

Add a node to the pool

To add a node (server) to the pool, you need:

  • A server running Ubuntu 20.04 with a public IP
  • A Scaleway secret key generated from your Scaleway account

Connect to your server as root

The server may not allow you to connect as root. If so, connect as the ubuntu user instead; in that case you will need to execute the multicloud-init script with sudo.

Retrieve the multicloud-init script

wget https://scwcontainermulticloud.s3.fr-par.scw.cloud/multicloud-init.sh
chmod +x multicloud-init.sh

Export the required environment variables. Replace the values where needed.

export POOL_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx REGION=xx-xxx SCW_SECRET_KEY=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Execute the script to attach the node to the multi-cloud pool

./multicloud-init.sh -p $POOL_ID -r $REGION -t $SCW_SECRET_KEY
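
If you are connected as the ubuntu user instead of root, run the script through sudo and keep the exported variables with -E:

sudo -E ./multicloud-init.sh -p $POOL_ID -r $REGION -t $SCW_SECRET_KEY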

The node should appear on the Scaleway console. Wait for the blinking dot to become steady green.

When the node is ready, you can download the kubeconfig file from the Scaleway console and configure your local kubectl with it.
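
For example, assuming you saved the file as kubeconfig-my-cluster.yaml in your home directory:

# Point kubectl at the downloaded kubeconfig and check that the external node has joined
export KUBECONFIG=$HOME/kubeconfig-my-cluster.yaml
kubectl get nodes -o wide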

Configure OpenEBS volumes

Install OpenEBS Helm chart

Set the nfs-provisioner.enabled flag to enable the NFS provisioner.

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs openebs openebs/openebs --create-namespace --set nfs-provisioner.enabled=true
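
Before going further, you can check that the OpenEBS components, including the NFS provisioner, are running and that the chart created its storage classes:

kubectl get pods -n openebs
kubectl get storageclass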

Update default storage class

Remove the default flag from Scaleway's scw-bssd storage class.

kubectl patch storageclass scw-bssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

Set openebs-kernel-nfs as the default storage class.

kubectl patch storageclass openebs-kernel-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Well done. Your cluster is now ready to create persistent volumes.
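
As a quick test, you can create a ReadWriteMany claim and mount it in a pod. This is a minimal sketch: the names, size and nginx image are arbitrary, and the claim relies on the default storage class set above.

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: rwx-test
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-data
EOF

# The claim should reach the Bound state once the NFS volume is provisioned
kubectl get pvc shared-data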
