Certified Kubernetes Security Specialist (CKS) Preparation Part 5 — Microservice Vulnerabilities

Jonathan
7 min read · Mar 4, 2021


If you have not yet read the previous parts of this series, please go ahead and check out Part1, Part2, Part3 and Part4.

In this article, I will focus on preparing for the microservice vulnerabilities domain of the CKS certification exam.

Manage Secrets

To understand how Kubernetes stores secrets and how Pods consume them, we first create two secrets (secret1 and secret2).

  • kubectl create secret generic secret1 --from-literal=username=jonw
  • kubectl create secret generic secret2 --from-literal=password=12345678
  • kubectl get secrets

Next, we put them inside a Pod, one as mounted files and the other as an environment variable. Generate the YAML template of the Pod, edit it to reference the secrets, and create the Pod (a sketch of the edited spec follows the commands below).

  • kubectl run mountsecrets --image=nginx -o yaml --dry-run=client > pod-mountsecrets.yaml
  • nano pod-mountsecrets.yaml
  • kubectl create -f pod-mountsecrets.yaml
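
Below is a minimal sketch of what the edited pod-mountsecrets.yaml could look like. The mount path /etc/secret1 matches the commands further down; the environment variable name secret2_password is only an illustrative choice:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mountsecrets
    spec:
      containers:
      - name: mountsecrets
        image: nginx
        env:
        - name: secret2_password        # exposes secret2 as an environment variable
          valueFrom:
            secretKeyRef:
              name: secret2
              key: password
        volumeMounts:
        - name: secret1-volume          # mounts secret1 as files under /etc/secret1
          mountPath: /etc/secret1
          readOnly: true
      volumes:
      - name: secret1-volume
        secret:
          secretName: secret1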

After the Pod is created, we can open a shell inside it and try to read the secrets.

  • kubectl exec -it mountsecrets -- bash
  • cat /etc/secret1/username
  • env | grep secret2

Unless specified otherwise, each Pod runs with the default service account of its namespace. In this example, that is the service account "default" in the namespace "default". Each service account automatically gets a token secret whose name looks like "<service account name>-token-<random string>".

  • kubectl get sa
  • kubectl get secrets

If we open a shell inside one of the Pod's containers, the token shows up as a file mount.

  • mount | grep serviceaccount
  • cat /run/secrets/kubernetes.io/serviceaccount/token

If any API endpoint needs to be accessed with this token, it can easily be done with:

  • curl <target endpoint> -H "Authorization: Bearer <token content>" -k

**Ignore the fact that it returns HTTP status 403; that is only because the default service account does not have the required permissions.**

If we try to get the secrets through Docker, we need to log in to the node that hosts the Pod, in this case one of the worker nodes. From the worker node's terminal, we locate the container by the image the Pod uses and run roughly the same commands we ran inside the Pod.

  • docker ps | grep nginx
  • docker exec -it <container ID> sh
  • cat <secret file path>/<secret key>
  • cat /run/secrets/kubernetes.io/serviceaccount/token
  • curl <target endpoint> -H "Authorization: Bearer <token content>" -k

If we try to get the secrets from ETCD, we first look up the certificates and keys the kube-apiserver uses to communicate with ETCD.

  • cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd

Then, we run the etcdctl CLI with those certificates and keys to read the secret, which is stored in plain text.

  • ETCDCTL_API=3 etcdctl --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/apiserver-etcd-client.crt" --key="/etc/kubernetes/pki/apiserver-etcd-client.key" get /registry/secrets/<namespace>/<secret name>

By now, we understand it would be "safer" to have ETCD data encrypted at rest, and the way to do that is through the kube-apiserver configuration. First, we create an encryption configuration file somewhere under /etc/kubernetes. In this case, we create a file named "ec.yaml" in a folder named "etcd" under that path (a sketch follows the commands below). For the key secret, use the base64-encoded form of whatever string you provide. Please check here for more ETCD encryption information.

  • cd /etc/kubernetes
  • mkdir etcd
  • nano etcd/ec.yaml
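
A minimal sketch of what ec.yaml could look like, assuming an aescbc provider; the key material is a placeholder that you would generate yourself (for example with head -c 32 /dev/urandom | base64):

    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
          - secrets
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: <base64-encoded 32-byte key>
          - identity: {}     # fallback so previously stored, unencrypted data stays readable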

After creating the encryption configuration file, edit the kube-apiserver manifest to point to it. Also, remember to add a hostPath volume and a volumeMount in the bottom part of the manifest so the file is available inside the kube-apiserver container (a sketch of the relevant pieces follows the command below).

  • nano /etc/kubernetes/manifests/kube-apiserver.yaml
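
A sketch of the relevant pieces of the kube-apiserver manifest, assuming the configuration file sits at /etc/kubernetes/etcd/ec.yaml; the volume name is illustrative:

    spec:
      containers:
      - command:
        - kube-apiserver
        - --encryption-provider-config=/etc/kubernetes/etcd/ec.yaml   # point the API server at ec.yaml
        # ...existing flags...
        volumeMounts:
        - name: etcd-encryption          # expose the file inside the kube-apiserver container
          mountPath: /etc/kubernetes/etcd
          readOnly: true
      volumes:
      - name: etcd-encryption
        hostPath:
          path: /etc/kubernetes/etcd
          type: DirectoryOrCreate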

Wait for the kube-apiserver to restart, then verify that newly created secrets are encrypted at rest. Here, secret3 is created after enabling at-rest encryption and queried again from ETCD.

  • ETCDCTL_API=3 etcdctl --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/apiserver-etcd-client.crt" --key="/etc/kubernetes/pki/apiserver-etcd-client.key" get /registry/secrets/<namespace>/<secret name>

Container Runtime Interface (CRI)

Before preparing for CKS, I had little to no knowledge of this topic. After reading a couple of good articles, like this one, I got a sense of why CRI needs to be in place. Before the Container Runtime Interface (CRI) was introduced, Kubernetes (K8s) talked to container runtimes through runtime-specific integrations such as dockershim (for Docker) and rktnetes (for rkt). As containers and K8s became more and more sophisticated, the cost of maintaining those integrations kept growing. An interface open to the community and dedicated solely to container runtimes became the answer to this challenging situation.

For the CKS exam, we need to know how to create a RuntimeClass and how to create Pods that use a specific RuntimeClass (sketches follow the commands below). For more information, please check this site.

  • kubectl create -f <runtimeclass.yaml>
  • kubectl get runtimeclass
  • kubectl create -f <pod.yaml>
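
Minimal sketches along the lines of the kubernetes.io documentation; the class name myclass and the handler name myconfiguration are placeholders, and the handler must match one configured in your CRI implementation:

    # runtimeclass.yaml
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: myclass
    handler: myconfiguration     # handler configured in the CRI runtime (e.g. a gVisor or Kata profile)

    # pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      runtimeClassName: myclass  # run this Pod with the runtime selected above
      containers:
      - name: mypod
        image: nginx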

Container with Proper Permissions

Containers are essentially processes running on hosts, meaning that, if we wanted to, we could "intentionally" retrieve host information from inside a container. If containers are not set up with proper permissions, the hosting environment is exposed to potential threats. Luckily, there are many configurations that can be applied to containers to avoid running into that situation.

runAs

By adding a securityContext like the sketch below, whenever administrators open a shell in the Pod, the associated identity is user 1000 and group 3000. Please check here for more information.
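
A minimal sketch, using the UID/GID values from the text; the Pod name and the busybox image are illustrative (busybox is used because it starts fine as a non-root user):

    apiVersion: v1
    kind: Pod
    metadata:
      name: runas-pod
    spec:
      securityContext:
        runAsUser: 1000        # processes in all containers run as UID 1000
        runAsGroup: 3000       # primary GID of those processes is 3000
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "sleep 1h"]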

  • kubectl create -f <pod.yaml>
  • kubectl exec -it <pod name> -- sh
  • id

runAsNonRoot

This parameter can be set at the Pod level or the container level. However, adding this security context may prevent the container from starting at all if the image needs root to run (a sketch is shown below). Please check here for more information.
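
A minimal sketch, assuming the nginx image, which runs as root by default; with this securityContext the kubelet should refuse to start the container, and kubectl describe should show an error along the lines of "container has runAsNonRoot and image will run as root":

    apiVersion: v1
    kind: Pod
    metadata:
      name: nonroot-pod
    spec:
      containers:
      - name: main
        image: nginx
        securityContext:
          runAsNonRoot: true   # kubelet verifies the container does not run as UID 0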

  • kubectl create -f <pod.yaml>
  • kubectl describe pod <pod name>

privileged

There is no obvious way to tell from inside the container whether it is running privileged or not. However, it can be tested by attempting a privileged action, such as changing a kernel parameter (a sketch is shown below).
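
A minimal sketch of a privileged Pod, assuming a busybox container kept alive with sleep; the Pod name is illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: priv-pod
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "sleep 1h"]
        securityContext:
          privileged: true     # gives the container close to full access to the host's kernel

With privileged: true, the sysctl command below should succeed; without it, it fails with a permission error.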

  • kubectl create -f <pod.yaml>
  • kubectl exec -it <pod name> -- sh
  • sysctl kernel.hostname=whatever

allowPrivilegeEscalation

By checking the NoNewPrivs value in /proc/1/status, we can see whether processes in the Pod are allowed to gain more privileges than their parent process. If the Pod is allowed privilege escalation (allowPrivilegeEscalation: true, which is the default), NoNewPrivs shows 0; with allowPrivilegeEscalation: false, it shows 1 (a sketch is shown below).
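
A minimal sketch that disables privilege escalation, again assuming a busybox container kept alive with sleep:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-escalation-pod
    spec:
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "sleep 1h"]
        securityContext:
          allowPrivilegeEscalation: false   # sets no_new_privs, so NoNewPrivs reads 1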

  • kubectl create -f <pod.yaml>
  • kubectl exec -it <pod name> -- sh
  • cat /proc/1/status | grep NoNewPrivs

PodSecurityPolicy

First, we enable the PodSecurityPolicy admission plugin in the kube-apiserver manifest (by adding it to the --enable-admission-plugins flag). Then, we create a Pod security policy that determines what Pods may and may not do (a sketch of psp.yaml follows the commands below). Please check here for more information.

  • sudo nano /etc/kubernetes/manifests/kube-apiserver.yaml
  • kubectl create -f psp.yaml
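
A minimal sketch of what psp.yaml could look like, assuming a policy that forbids privileged containers and privilege escalation; the policy name and field choices are illustrative:

    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restrictive-psp
    spec:
      privileged: false                  # reject privileged containers
      allowPrivilegeEscalation: false    # reject Pods that allow escalation
      runAsUser:
        rule: RunAsAny
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      volumes:
      - "*"

Keep in mind that a Pod security policy is only applied to Pods whose creating user or service account is authorized via RBAC to "use" it, so a Role/ClusterRole and binding granting that verb are usually needed as well.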

When we then try to deploy another Pod that allows privilege escalation, the console returns an error message because the Pod contradicts the Pod security policy that was applied.

  • kubectl create -f <pod.yaml>

mTLS

mTLS stands for mutual TLS: the client authenticates the server and the server authenticates the client. This Medium article provides a pretty clear explanation of how it works. Basically, whenever we pass a client certificate and client key to a command like "curl" or some "xxxxctl" tool, there is a high chance the communication is using mTLS. If we follow all the steps in the Medium article above, we end up with an Ingress YAML set up somewhat like the sketch below.
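
A sketch of such an Ingress, assuming the ingress-nginx controller, the hostname meow.com used in the curl example further down, a TLS secret named meow-tls, and a secret meow-ca containing the CA certificate used to verify clients; all of these names are illustrative:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: mtls-ingress
      annotations:
        nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"         # require a client certificate
        nginx.ingress.kubernetes.io/auth-tls-secret: "default/meow-ca"   # CA bundle used to verify it
    spec:
      ingressClassName: nginx
      tls:
      - hosts:
        - meow.com
        secretName: meow-tls
      rules:
      - host: meow.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: meow-service
                port:
                  number: 80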

If we try to curl the HTTPS website without client credentials, the request fails because no client certificate is presented.

** The NGINX Ingress controller is set up to be exposed with a NodePort service

** The resolved public IP address can belong to either node

If we try to curl the HTTPS website with client credentials, we get results similar to the command below. We might still see HTTP status 403, but that is only because the backend forbids the client from visiting the site, not because the TLS handshake failed.

  • curl https://meow.com:30908 --resolve meow.com:30908:52.183.91.218 --cert client.crt --key client.key -kv

