If you deploy apps using Kubernetes, knowing how to SSH into a Kubernetes pod is an important skill. Opening a shell on a pod via SSH is often the easiest way to perform administrative tasks on running applications.
If you’re wondering how to SSH into pods, you’ve come to the right place. This article walks through the details. We also explain the benefits of using SSH to access pods, as opposed to alternative methods like kubectl exec.
What Is SSH Access to Kubernetes Pods?
SSH in Kubernetes pods is the process of opening a command-line shell on a Kubernetes pod using the SSH protocol.
A pod is the type of resource that Kubernetes uses to run applications. Although there are multiple ways to deploy an application in Kubernetes, they all involve running pods, which host the individual container or containers necessary to run an application.
SSH is a network protocol that lets you connect to remote endpoints. Traditionally, SSH was most commonly used to connect from one server to another. But you can also use it to log into pods, as this article explains.
Note that while SSH opens up command-line shells by default, it’s also possible to connect to graphical applications or environments using SSH with help from techniques like X11 forwarding. However, since Kubernetes applications are usually accessible only through the command line, connecting to shell environments is almost always the goal when using SSH to log into pods.
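For reference, X11 forwarding is requested by passing the -X flag to the ssh client (the hostname here is a placeholder):

ssh -X user@example-host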
Why Use SSH for Kubernetes Pod Access?
There are two main benefits of using SSH as a way of logging into Kubernetes pods:
Speed: SSH is a fast access method. The protocol adds little overhead to the data it sends and receives, so remote sessions stay responsive and you can administer applications without noticeable lag.
Security: SSH offers several security benefits. It encrypts all network traffic by default, minimizing the risk of eavesdropping and man-in-the-middle attacks. It supports a range of authentication options, including hard-to-crack methods like key- or certificate-based authentication and even 2FA (a minimal key-based setup is sketched below). And it doesn’t require you to log in as root, which is sometimes the case for alternative access methods.
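As a minimal sketch of key-based authentication (the paths and username are OpenSSH defaults, not anything Kubernetes-specific), you would generate a key pair on your workstation and append the public key to the authorized_keys file of the account you log into inside the pod:

ssh-keygen -t ed25519 -f ~/.ssh/pod_key   # creates pod_key and pod_key.pub
# inside the pod (or baked into the container image), assuming you log in as root:
mkdir -p /root/.ssh
cat pod_key.pub >> /root/.ssh/authorized_keys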
SSH vs. Kubectl exec
Speaking of other ways of connecting to pods in Kubernetes, the main alternative to SSH is the kubectl exec command, which runs commands inside a container. If the command you run opens a shell (like the sh command), you get an interactive session from which you can run additional commands.
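For example, the following opens an interactive shell in a pod named ssh-pod (the same pod name used later in this article):

kubectl exec -it ssh-pod -- sh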
Unlike SSH access, however, kubectl exec is not an especially secure way to log into a pod. Not all connection data is encrypted when using kubectl exec. It also drops you into a shell as the container’s default user, which is often root. For these reasons, some Kubernetes admins use Roles and RoleBindings to withhold kubectl exec permissions, relying instead on SSH as the only approved way to connect to pods.
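Because Kubernetes RBAC denies anything that isn’t explicitly granted, “blocking” kubectl exec in practice means binding users to a Role that grants pod access without the pods/exec subresource. A minimal sketch (the Role name and namespace are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader-no-exec
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  # no rule covers "pods/exec", so subjects bound to this Role cannot run kubectl exec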
Kubectl exec can also be less ideal than SSH from a speed perspective. Although sessions usually open quickly when using kubectl exec, it typically takes longer to move data, so the remote shell environment may not be as smooth and responsive to input.
To be sure, kubectl exec has some advantages over SSH. One is that kubectl exec doesn’t require you to install any special software inside your pod other than a shell. SSH access requires an SSH server to be present inside the pod (we’ll explain how to install one in a moment).
Another benefit of kubectl exec is that when someone connects to a pod this way, the access event is automatically recorded via the Kubernetes auditing framework — so you’ll know whenever someone connects to a pod. SSH connections are not recorded by Kubernetes. (Most SSH servers keep a log of connections, so you can still find out when someone logged in via SSH, but you’d need to get that data from inside the pod rather than being able to pull it from a central Kubernetes audit log.)
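For instance, a cluster’s audit policy can single out exec requests for detailed logging. A minimal policy sketch (requests that match no rule are simply not logged by this policy):

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # record the full request and response for every exec session
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods/exec"]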
Steps for Connecting to a Pod Using SSH
Here’s the process for connecting to pods via SSH.
1. Install an SSH server in your pod
As mentioned, your pod needs to have an SSH server inside it to accept SSH connections. Whether an SSH server is installed by default depends on which containers you include in your pod.
If SSH isn’t available, you’ll need to redeploy the pod using a container image that includes an SSH server. The easiest way to do this is to build an image from a Dockerfile like the following:
FROM ubuntu:20.04
RUN apt-get update && \
    apt-get install -y openssh-server && \
    mkdir /var/run/sshd
# Set a root password (use a stronger secret in practice)
RUN echo 'root:rootpassword' | chpasswd
# Allow root login via SSH (the option ships commented out in Ubuntu's sshd_config)
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
This is an easy way to run an SSH server because it installs OpenSSH (a popular open source SSH server) using the apt package manager.
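Build the image and push it to a registry your cluster can pull from (registry.example.com is a placeholder):

docker build -t registry.example.com/ssh-enabled-image:latest .
docker push registry.example.com/ssh-enabled-image:latest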
Then, create a Kubernetes Pod manifest that uses the SSH-enabled container image. For example:
apiVersion: v1
kind: Pod
metadata:
  name: ssh-pod
spec:
  containers:
    - name: ssh-container
      image: registry.example.com/ssh-enabled-image:latest  # placeholder: use your own SSH-enabled image
      ports:
        - containerPort: 22
Note that both the Dockerfile and the Pod manifest expose port 22, which is important because this is the network port that SSH uses by default.
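With the manifest saved to a file (the filename here is arbitrary), apply it and confirm the pod is running:

kubectl apply -f ssh-pod.yaml
kubectl get pod ssh-pod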
2. Set up port forwarding
To make your pod accessible via SSH, you need to configure port forwarding. This tells kubectl to forward traffic from a network port on your local machine to a port on a specific pod.
You can set up port forwarding using the kubectl port-forward command. Here’s an example that forwards traffic from local port 2222 to port 22 on a pod named ssh-pod:
kubectl port-forward pod/ssh-pod 2222:22
3. SSH into the pod
With port forwarding set up and an SSH server present in your pod, you can now SSH into it using a command such as the following:
ssh user@localhost -p 2222
Although this command may look like it would cause you to SSH into your local machine (since it connects to localhost), it actually directs the SSH traffic to a Kubernetes pod. This is because of the port forwarding rule we created in step 2, which forwards traffic on local port 2222 to port 22 (the SSH port) on the pod named ssh-pod.
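Once the forwarded port is in place, other SSH-based tools work through it as well. For example, scp can copy a file into the pod over the same tunnel (the file paths are illustrative):

scp -P 2222 ./app-config.yaml user@localhost:/tmp/app-config.yaml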