Certified Kubernetes Security Specialist (CKS) Preparation Part 4— Cluster Hardening

Jonathan
Feb 25, 2021


If you have not yet checked the previous parts of this series, please go ahead and check Part1, Part2 and Part3.

In this article, I will focus on the preparation around cluster hardening for the CKS certification exam.

Role and Role Binding

  • Role = a set of permissions allowing actions on resources within a namespace
  • RoleBinding = the binding of a user/service account to a role
  • Roles are namespace-specific.

Let’s see an example to get a clearer idea. Our goal is to create a role that can get pods in namespace “test” and bind user “jon” to this role.

Create a Role called “get-pod” in namespace “test”

  • kubectl create role get-pod --verb=get --resource=pods -n test

Create a RoleBinding called “get-pod-jon” that connects the role “get-pod” with user “jon” in namespace “test”

  • kubectl create rolebinding get-pod-jon --role=get-pod --user=jon -n test

Test whether user “jon” could actually get pods in namespace “test”

  • kubectl auth can-i get pods --as jon -n test
  • kubectl auth can-i get pods --as jon -n default
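
For reference, the same permissions can also be declared as manifests. Below is a minimal YAML sketch equivalent to the two imperative commands above (all field values come from the example; it can be applied with kubectl apply -f):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: get-pod
      namespace: test
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: get-pod-jon
      namespace: test
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: get-pod
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: jon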

Cluster Role and Cluster Role Binding

  • ClusterRole = a set of permissions allowing actions on resources across the whole cluster
  • ClusterRoleBinding = the binding of a user/service account to a cluster role
  • ClusterRoles are NOT namespace-specific.

Let’s see another example to have a clearer idea. Our goal is to create a cluster role that can delete pods and bind user “jon” to this cluster role, so that “jon” can delete pods in any namespace.

Create a ClusterRole called “delete-pod”

  • kubectl create clusterrole delete-pod --verb=delete --resource=pods

Create a ClusterRoleBinding called “delete-pod-jon” that connects the ClusterRole “delete-pod” with user “jon”

  • kubectl create clusterrolebinding delete-pod-jon --clusterrole=delete-pod --user=jon

Test whether user “jon” could actually delete pods in all namespaces.

  • kubectl auth can-i delete pods --as jon -n test
  • kubectl auth can-i delete pods --as jon -n default
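
Again, a minimal declarative sketch equivalent to the ClusterRole and ClusterRoleBinding commands above:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: delete-pod
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: delete-pod-jon
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: delete-pod
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: jon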

Certificate Signing Requests

A certificate signing request (CSR) is essentially a user or service account asking kube-apiserver to issue it a client certificate, so that it can access and manage the K8s cluster under its own identity. This ensures that users and service accounts communicate with kube-apiserver on their own behalf. The basic process flow is:

User/service account generates a key

  • openssl genrsa -out jon.key 2048

Use the key to generate a CSR

  • openssl req -new -key jon.key -out jon.csr

Encode the output CSR with base64 and copy the content into a K8s CSR YAML manifest. Check here for the default K8s CSR YAML template; a minimal sketch is also included after the commands below.

  • nano k8s-jon-csr.yaml
  • cat jon.csr | base64 -w 0
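
The manifest roughly follows the upstream template; a minimal sketch, with the base64 output pasted into spec.request, looks like this:

    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: jon
    spec:
      request: <paste the base64-encoded content of jon.csr here>
      signerName: kubernetes.io/kube-apiserver-client
      usages:
      - client auth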

Create the K8s CSR

  • kubectl create -f k8s-jon-csr.yaml

Check CSR status and Approve the CSR

  • kubectl get csr
  • kubectl certificate approve jon

Get the client certificate from the K8s CSR object, decode it from base64 and save it to a new file

  • kubectl get csr jon -o yaml
  • echo <certificate content> | base64 -d > jon.crt
  • cat jon.crt

Set the new credential in kubeconfig for administrators to use

  • kubectl config set-credentials jon --client-key=jon.key --client-certificate=jon.crt --embed-certs
  • kubectl config view

Set new context in the cluster

  • kubectl config set-context jon --user=jon --cluster=kubernetes

Use the context

  • kubectl config use-context jon

Depending on what permissions have been granted to user “jon”, this context can now perform the corresponding actions with the credential provided.

Service Account in Pods

This topic is about ensuring administrators do not give service accounts within Pods more permissions than required. For demonstration, we will create a service account

  • kubectl create sa podsa

and we can see that a secret associated with the service account is also auto-generated

  • kubectl get secrets

Then, create a Pod that uses that service account and allows the Pod to mount its auto-generated service account token (if not explicitly disabled, the default is always to mount it); a sketch of such a Pod manifest is shown below.
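
A minimal sketch of the Pod, assuming an nginx image (the image choice is arbitrary here):

    apiVersion: v1
    kind: Pod
    metadata:
      name: usesa
    spec:
      serviceAccountName: podsa
      automountServiceAccountToken: true   # this is the default; shown explicitly for clarity
      containers:
      - name: usesa
        image: nginx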

When we exec into a shell inside the Pod, we can see the auto-generated service account token.

  • kubectl exec -it usesa -- bash
  • mount | grep sec
  • cd /run/secrets/kubernetes.io/serviceaccount
  • cat token

If we recreate the Pod and DISALLOW it from using the auto-generated service account token, we see no token mounted in the running container.
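
The only change compared to the previous sketch is the automount flag:

    spec:
      serviceAccountName: podsa
      automountServiceAccountToken: false   # the token is no longer mounted into the container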

Last but not least, we can apply best practices by limiting service account permissions with the right Roles or ClusterRoles.
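
As a hypothetical example, if the service account only needs to read pods in the default namespace, the commands could look like this (the role and binding names are made up for illustration):

  • kubectl create role sa-get-pod --verb=get --verb=list --resource=pods
  • kubectl create rolebinding sa-get-pod-podsa --role=sa-get-pod --serviceaccount=default:podsa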

Kube API Server Access Management

The kube-apiserver is considered the brain of K8s, so we need to take extra caution about what can access this core service. By default, kube-apiserver allows anonymous access, as we can see from the HTTP status 403 (forbidden) returned when executing an unauthenticated request against it.
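
A minimal example of such an unauthenticated request, assuming kube-apiserver listens on the default secure port 6443:

  • curl -k https://localhost:6443

The -k flag skips TLS verification; the JSON response shows that the request is treated as user “system:anonymous” and rejected with 403.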

If the request cannot even be authenticated, the console will most likely return HTTP status 401 (unauthorized), according to this section of the official documentation. The way to DISABLE anonymous access is to add an additional parameter to the kube-apiserver static Pod manifest; a snippet is shown after the command below.

  • sudo nano /etc/kubernetes/manifests/kube-apiserver.yaml
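
The relevant flag is --anonymous-auth; a sketch of where it goes:

    # /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
    spec:
      containers:
      - command:
        - kube-apiserver
        - --anonymous-auth=false
        # ...existing flags unchanged...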

Once this is done, wait for kube-apiserver to restart and try accessing kube-apiserver as the anonymous user again. As expected, the console now returns HTTP status 401, since the request is not even authenticated.

Let’s switch the setting back to allow anonymous access to kube-apiserver and see how we can expose the kube-apiserver service for external access.

First things first, change the service “kubernetes” from ClusterIP to NodePort, so we can use the nodes’ public/private IP addresses and the assigned port for access. Since I do not have another VM set up in the same network environment as the K8s cluster, I will use the node’s public IP address for the demonstration.
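
One way to do this is to edit the live service object (a sketch; a patch command would also work):

  • kubectl edit service kubernetes

Change spec.type from ClusterIP to NodePort, then note the assigned port shown by kubectl get svc kubernetes.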

Do a simple curl test to see whether we are getting HTTP status 403
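
A sketch of the test, with placeholders for the node address and the NodePort assigned above:

  • curl -k https://<node public IP>:<assigned node port>

Since anonymous access is allowed again, this should return HTTP status 403 rather than a connection error.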

Now, we ensure we are using an FQDN or IP address that the kube-apiserver serving certificate recognizes

  • openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text

Edit the hosts file so that one of the FQDNs listed in the certificate output above (for example “kubernetes”) resolves to the node’s public/private IP address; an example entry is shown after the command below.

  • sudo nano /etc/hosts
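
An example entry, with the IP address left as a placeholder:

    <node public IP>   kubernetes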

Look up “kubernetes” and see what it resolves to

  • nslookup kubernetes

Dump the raw kubectl config content into a file.

  • kubectl config view --raw > conf

The file “conf” now holds the raw CA certificate, client certificate and key information. Modify the server field to use the node’s public/private IP address, or in this case the FQDN configured in the hosts record (see the sketch below).
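
The only line that needs to change in “conf” is the server field of the cluster entry, roughly:

    clusters:
    - cluster:
        certificate-authority-data: <unchanged>
        server: https://kubernetes:<assigned node port>
      name: kubernetes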

Try contacting kube-apiserver with the modified conf file, in this case by getting namespaces.

  • kubectl --kubeconfig conf get ns

Node Restriction

Check on the K8s master node whether the NodeRestriction admission plug-in is already enabled; the relevant flag is shown after the command below.

  • sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml
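
The flag to look for in the manifest:

    # /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
    - --enable-admission-plugins=NodeRestriction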

Head over to a K8s worker node and check whether the worker node’s kubelet context can modify master node labels. First, we need to set the worker node’s kubelet kubeconfig as the default for executing kubectl commands

  • sudo su
  • export KUBECONFIG=/etc/kubernetes/kubelet.conf

and test whether commands work or not

  • kubectl get ns

As expected, worker nodes’ kubelet does not have permission to get namespaces.

With the worker node’s kubelet context, we are not authorized to label the master node, but we are able to label the worker node itself (the NodeRestriction plug-in limits a kubelet to modifying its own Node object).

  • kubectl label node cks-master cks/test=yes
  • kubectl label node cks-worker cks/test=yes

Upgrade Kubernetes Clusters with kubeadm

One of the most common tasks every IT administrator needs to do is to update or upgrade the machines they run. For K8s administrators, the K8s version also needs to be kept within the supported version range, and the following shows how to do it.

Master Nodes

Get all the apt updates

  • sudo apt-get update

Make sure no more pods are scheduled on the master node.

  • kubectl cordon cks-master

Drain all pods and deployments from the master node.

  • kubectl drain cks-master --ignore-daemonsets

Check kubeadm version

  • kubeadm version

Get the upgrade plan

  • kubeadm upgrade plan

Apply the upgrade plan shown before

  • kubeadm upgrade apply <K8s version>

Check kubectl and kubelet version

  • kubectl version
  • kubectl get nodes -o yaml | grep kubelet

Install all core components to the required version

  • apt-get install kubeadm=<K8s version> kubelet=<K8s version> kubectl=<K8s version>
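
As a hypothetical example, assuming the target version is 1.20.4 and the packages come from the official Kubernetes apt repository (check the exact package version string with apt-cache madison kubeadm), the command might look like:

  • sudo apt-get install -y kubeadm=1.20.4-00 kubelet=1.20.4-00 kubectl=1.20.4-00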

Make master nodes available for pod scheduling once again.

  • kubectl uncordon cks-master

Worker Nodes

Upgrading worker nodes is slightly different, but mostly the same concept.

Get all the apt updates

  • sudo apt-get update

Make sure no more pods are scheduled on the worker node.

  • kubectl cordon cks-worker

Drain all pods and deployments from the worker node.

  • kubectl drain cks-worker --ignore-daemonsets

Check kubeadm version

  • kubeadm version

Install the required kubeadm version

  • apt-get install kubeadm=<required version>

Upgrade worker nodes with kubeadm

  • kubeadm upgrade node

Install all core components to the required version

  • apt-get install kubelet=<K8s version> kubectl=<K8s version>

Check kubelet and kubectl version to ensure they are running in the required version.

  • kubelet --version
  • kubectl version --client

Make worker nodes available for pod scheduling once again.

  • kubectl uncordon cks-worker

For more details on how to use kubeadm to upgrade master nodes and worker nodes, please check this site.
