In this article I will show how to enable SSH connections into a Kubernetes pod. Password authentication will be disabled; only users with an authorized SSH key will be able to open a connection.
You may want to SSH directly into a pod for several reasons:
- you want your team to be able to debug a specific container
- you want to execute remote scripts in specific containers
- you don't want to give full access to the cluster, like creating pods or reading secrets
Exposing the pod's port 22 through an Ingress is not possible: an Ingress routes HTTP traffic, and SSH is not HTTP, so the ingress controller will simply answer any incoming SSH connection request with a 400 Bad Request error. SSH also has no notion of the HTTP Host header, so routing by hostname is not an option. We will have to expose the SSH server through a dedicated port number instead.
Install and enable SSH daemon
First you need to install an SSH server in your container. You can do it directly in the Dockerfile:

```dockerfile
RUN apt-get update && apt-get install -y openssh-server
```
Then you need to start the SSH server when the container starts.

You can do it with a dedicated script that you call at the end of the Dockerfile, or anytime after:

```bash
#!/bin/bash
exec service ssh start
```

Or with a process manager such as supervisord:

```ini
[program:sshd]
command=/usr/sbin/sshd -D
```
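Putting it together, a minimal Dockerfile could look like the sketch below. The base image tag is an assumption; `/run/sshd` is the runtime directory recent OpenSSH versions expect to exist before `sshd` starts.

```dockerfile
# Base image is an example; use whatever your application already builds on
FROM ubuntu:22.04

# Install the OpenSSH server and create the runtime directory sshd expects
RUN apt-get update && apt-get install -y openssh-server \
    && mkdir -p /run/sshd

EXPOSE 22

# Run sshd in the foreground so the container keeps running
CMD ["/usr/sbin/sshd", "-D"]
```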
Create SSH config map
You need to create a `ConfigMap` containing two files.

The first one is `sshd_config`; it disables password authentication.

The second one is `authorized_keys`; it lists the public keys authorized to connect to the pod. You should put your own SSH public key there, along with those of your team.
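If you don't have a key pair yet, you can generate one and copy the public half into `authorized_keys`. The key type and file name below are examples:

```shell
# Generate an ed25519 key pair with no passphrase (file name is an example)
ssh-keygen -t ed25519 -f ./pod_ssh_key -N "" -q

# The public half is what goes into authorized_keys
cat ./pod_ssh_key.pub
```

Keep the private half (`./pod_ssh_key`) on your machine; only the `.pub` file ever goes into the cluster.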
The resulting Kubernetes `ConfigMap` looks like this,

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ssh-config
data:
  sshd_config: |
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    UsePAM no
  authorized_keys: |
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCopt1BadYCtaBkKUZUp8RwCU6rVlltF1PoN3+uI/K5iGIxfYqzIcz3tEko0GHE6/0kRypMIhIiaP0nyL3o1VLeASsnzeEDpxwydACb7R6BEbxZz2pDyXsx0yuBfU941hXmXqrGDSx6oYQBcExoCcpXUT88x2u71Ql6O9qvcPe495xaYfUbVfavHJ8WoDPtsTghrpDA9q4NSsiiNpAj+whcw33yc5k9FjcF9GH7LXp0AQkgzV5LbBFKqxCNdBHMFMqO3EZ8lHaNXUUiKZcdXRzAKJ+3ZuYyEe6dGFHJssheZv8tdsCKP6JF+BkfNMkN2O9JVaBSdIfWNxEMBUThLoPd
```
Mount the config map into the container
Create a volume referencing the ConfigMap you have created above.
`sshd_config` is mounted at `/etc/ssh/sshd_config`, and `authorized_keys` is mounted at `/root/.ssh/authorized_keys`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-deployment
spec:
  selector:
    matchLabels:
      app: application
  replicas: 1
  template:
    metadata:
      labels:
        app: application
    spec:
      containers:
        - name: application
          image: ubuntu
          ports:
            - containerPort: 22
          volumeMounts:
            - name: ssh-volume
              subPath: sshd_config
              mountPath: /etc/ssh/sshd_config
            - name: ssh-volume
              subPath: authorized_keys
              mountPath: /root/.ssh/authorized_keys
      volumes:
        - name: ssh-volume
          configMap:
            name: ssh-config
```
Create the service
Finally, create a `Service` of type `LoadBalancer` to expose the container's port 22 to the world. You can set the `port` to any value.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ssh-service
spec:
  type: LoadBalancer
  ports:
    - port: 22222
      targetPort: 22
  selector:
    app: application
```
Check that the service is up and has been assigned an external IP.
```
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)           AGE
ssh-service   LoadBalancer   XX.XX.XXX.XX   YY.YYY.YY.YY   22222:33333/TCP   4h19m
```
Then check your connection,

```shell
$ ssh -p 22222 root@YY.YYY.YY.YY
root@application-xxxxxxxxxx-xxxxx:~#
```
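To avoid typing the port and user every time, you can add an entry to your local SSH client configuration. The host alias and key path below are examples, not values from the cluster:

```
# Example entry in ~/.ssh/config; alias and key path are illustrative
Host my-pod
    HostName YY.YYY.YY.YY
    Port 22222
    User root
    IdentityFile ~/.ssh/pod_ssh_key
```

With this in place, `ssh my-pod` opens the same connection as the full command above.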