Certified Kubernetes Security Specialist (CKS) Preparation Part 3 — Cluster Setup
If you have not yet read the previous parts of this series, please go ahead and check Part 1 and Part 2.
In this article, I will focus on the preparation around cluster setup in the CKS certification exam. Particular attention goes to the CIS Kubernetes Benchmark, kube-bench, and verifying platform binaries, as these three are new to me.
Network Policy
Let’s first look at Kubernetes network policy. The concept is quite similar to a firewall, except that it manages pod-to-pod, pod-to-service, and external-to-service communication. Administrators define which resources may access, and be accessed by, others using Pod labels, namespace selectors, or IP address blocks. Pretty straightforward. Taking the default example from the official documentation, it basically means:
- Pods with label “role: db” can be accessed over TCP on port 6379
- Pods with label “role: frontend”, Pods in namespaces labeled “project: myproject”, and resources residing in 172.17.0.0/16 are able to access Pods with label “role: db” (each of these selectors matches independently).
- Resources residing in 172.17.1.0/24 are excluded from that range and cannot access Pods with label “role: db”.
- The same concept applies to the egress rules.
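For reference, the policy being described is the default example reproduced below from the official documentation; the namespace and names come straight from the docs.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db           # the Pods this policy protects
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:                # any one of these three selectors allows access
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
```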
With proper network security rules, administrators can ensure that only allowed resources reach the resources made accessible, and only over the permitted protocols and ports. Pay extra attention to the default deny-all and allow-all rules.
Deny All Ingress
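A deny-all ingress policy, as given in the official documentation, selects every Pod in the namespace and allows no traffic in:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # empty selector: applies to all Pods in the namespace
  policyTypes:
  - Ingress
```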
Allow All Ingress
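The allow-all counterpart, also from the official documentation, adds a single empty ingress rule that matches all incoming traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:
  - {}              # empty rule: allows everything
  policyTypes:
  - Ingress
```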
Let’s try to see this in practice. We will:
- Allow ingress from Pods in namespaces labeled “cks: allow” to the Pod labeled “cks: denymost”
- Allow ingress from Pods labeled “cks: allow” to the Pod labeled “cks: denymost”
Network Policy
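A policy implementing both rules could look like the sketch below; the labels come from the bullets above, while the policy name is assumed for illustration.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-most          # name assumed
spec:
  podSelector:
    matchLabels:
      cks: denymost        # the target Pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:   # rule 1: any Pod in a namespace labeled cks=allow
        matchLabels:
          cks: allow
    - podSelector:         # rule 2: any Pod labeled cks=allow
        matchLabels:
          cks: allow
```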
Test accessing the Pod labeled “cks: denymost” from a Pod labeled “cks: allow”. Allowed.
What about trying to access the Pod labeled “cks: denymost” from elsewhere? Denied.
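Assuming the target Pod serves HTTP on port 80, wget is available in the source Pods, and the Pod names below are placeholders, the two tests could be run like this:
- kubectl exec allowed-pod -- wget -qO- -T 2 http://<denymost Pod IP> (succeeds)
- kubectl exec other-pod -- wget -qO- -T 2 http://<denymost Pod IP> (times out)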
Kubernetes Dashboard
Although most administrators prefer Kubernetes CLI commands to administer all resources within K8s clusters, some still prefer a visual display. Kubernetes Dashboard was created for this reason.
By executing the command below (more details on this website), anyone can easily set up all the resources Kubernetes Dashboard requires under the namespace “kubernetes-dashboard”.
- kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
We can see all the resources deployed by that one-liner.
- kubectl -n kubernetes-dashboard get pod,deploy,svc
As most of you will notice, the default Kubernetes Dashboard service is exposed as ClusterIP, and administrators cannot reach this IP address without a shell inside a Pod in the cluster. In most cases, administrators use “kubectl proxy” to proxy an endpoint on the working machine to the actual Kubernetes Dashboard service. For more information on how to proxy the service, check here.
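For reference, the proxy approach, per the dashboard documentation, looks like this:
- kubectl proxy
- Then browse to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/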
In some testing environments with fewer security concerns, we can expose the Kubernetes Dashboard deployment and service via NodePort, so administrators can use a node’s IP address, public or private, plus the assigned port to access the service. We edit the actual running deployment YAML.
- kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard
- Remove the --auto-generate-certificates argument
- Add the argument --insecure-port=9090
- Change the livenessProbe to listen on port 9090
- Change the livenessProbe scheme to HTTP
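After these edits, the relevant fragment of the deployment spec would look roughly like the sketch below (only the changed fields are shown; everything else stays as deployed):

```yaml
spec:
  template:
    spec:
      containers:
      - name: kubernetes-dashboard
        args:
        - --namespace=kubernetes-dashboard
        - --insecure-port=9090    # added; --auto-generate-certificates removed
        livenessProbe:
          httpGet:
            scheme: HTTP          # was HTTPS
            path: /
            port: 9090            # was 8443
```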
After that, we make changes to the Kubernetes Dashboard service.
- kubectl edit service kubernetes-dashboard -n kubernetes-dashboard
- Change port to 9090
- Change targetPort to 9090
- Change type to NodePort
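The edited service spec then looks roughly like:

```yaml
spec:
  type: NodePort      # was ClusterIP
  ports:
  - port: 9090        # was 443
    targetPort: 9090  # was 8443
```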
Now, try to access Kubernetes Dashboard with a node’s IP address and the assigned port.
Since Kubernetes Dashboard leverages the service account “default” in the namespace “kubernetes-dashboard” to access each resource, binding the right permissions to this service account allows the dashboard to show more information in the corresponding namespaces.
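For example, to grant it read access cluster-wide, one option is binding the built-in “view” ClusterRole (the binding name is assumed):
- kubectl create clusterrolebinding dashboard-view --clusterrole=view --serviceaccount=kubernetes-dashboard:default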
Secure Ingress
As most readers here already have some practical experience with K8s, you will know that one of the most popular ingress controllers used in K8s nowadays is NGINX Ingress, so we will use it as the example. First things first, we can install NGINX Ingress with a one-liner.
- kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/baremetal/deploy.yaml
The detailed instructions are here. Let’s take a look at what has been created. One thing to note is that NGINX Ingress is now
- Exposed as a NodePort service
- Assigned high port 30597 for HTTP
- Assigned high port 32529 for HTTPS.
After installation, our goal is to expose a couple of services to test the Ingress functionality. As you see below, two Pods (service1, service2) have been created, and both are exposed as ClusterIP services (service1, service2).
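If you want to reproduce this setup, something along these lines would work (the image choice is assumed):
- kubectl run service1 --image=nginx
- kubectl expose pod service1 --port=80 --name=service1
- kubectl run service2 --image=nginx
- kubectl expose pod service2 --port=80 --name=service2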
With that, we can head over to the official documentation and check the section on how to set up Ingress rules. Basically, the template means that the Ingress directs traffic hitting “http://<node IP>:<assigned node port>/testpath” to a service named “test”. So, let’s give it a try.
**Note: since we exposed the services under the names “service1” and “service2” earlier, we need to make sure the Ingress YAML actually points to services with those names. In other words, with this template, we need to replace “test” with either “service1” or “service2”.**
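With that replacement, the template from the official documentation becomes something like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: service1   # “test” in the original template
            port:
              number: 80
```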
We can now access “http://<node IP>:<assigned node port>/testpath” and view the default NGINX content.
We can also make the Ingress point to two different services with different paths. Since the Ingress now points to more than one service hosted inside the K8s cluster, it makes sense to use the path name to indicate which service we are trying to reach.
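The paths section for two services might then look like this sketch (the path names are assumed):

```yaml
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
```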
Now we know the service works over HTTP. The next step is to make the website accessible only over HTTPS, a secure way of accessing the content. Of course, we first need a certificate to bind to the website.
Create a self-signed certificate and key with a one-liner.
- openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
Create a K8s TLS secret with
- kubectl create secret tls secure-ingress --cert=cert.pem --key=key.pem
Bind the TLS secret to the existing Ingress. For the detailed template, please refer to this site.
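Following the TLS section of the official documentation, the binding is a “tls” block in the Ingress spec that references the secret; the host must match the name used with curl below:

```yaml
spec:
  tls:
  - hosts:
    - secure-ingress.com
    secretName: secure-ingress   # the TLS secret created above
  rules:
  - host: secure-ingress.com
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
```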
From the screenshot above, we learn the Ingress service is listening on 32529 for HTTPS. In a Linux environment, use a command like the one below to visit a website that uses a self-signed certificate.
- curl https://secure-ingress.com:32529 --resolve secure-ingress.com:32529:52.137.121.234 -k
The resolved public IP is needed since FQDN resolution would not work in this situation, and “-k” is needed for insecure access because the site uses a self-signed certificate.
Add “-v” to the end of the curl command to get verbose output of the whole process of the client accessing the HTTPS website, where we can see for certain that the connection uses the self-signed certificate.
- curl https://secure-ingress.com:32529 --resolve secure-ingress.com:32529:52.137.121.234 -kv
Node Metadata Protection
Every virtual machine (node) hosted in the cloud needs to access its node metadata endpoint for various reasons. However, allowing every resource that access is not ideal. One way to enhance the security of this access is to apply network policies, so only designated resources within K8s are able to contact the node metadata endpoint. As this is more or less the same concept as in the “Network Policy” section, we will just add a quick sketch below and move on.
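As a sketch, a policy blocking all Pods in a namespace from the common cloud metadata IP (169.254.169.254, used by AWS, Azure, and GCP) while still allowing other egress might look like this; the policy name is assumed:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata        # name assumed
spec:
  podSelector: {}            # applies to all Pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # the metadata endpoint
```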
CIS Benchmarks
CIS stands for Center for Internet Security, and the goal of the CIS Benchmarks is to make every resource running on the Internet as secure as possible. In the CKS certification exam, the target is to see whether administrators can bring a running K8s cluster up to that standard. With that in mind, we can head to this site and download the CIS Benchmark PDF for K8s.
The PDF file is essentially a reference that tells you what needs to be modified within K8s clusters, but how would we know what currently needs to be changed? That is where kube-bench comes into play. From this site, run
- docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t aquasec/kube-bench:latest [master|node] --version 1.13
**You might need to replace “[master|node]” and the version “1.13” with the node role and the K8s version you are running.**
Then, you will get a result similar to the one below.
We can ignore the passed checks and focus on the ones marked [FAIL]. Using [FAIL] 1.2.6 as an example, we first head to the CIS Benchmark PDF for K8s and search for 1.2.6. Then, we see something like the below.
The description matches, and it also shows us how to remediate. Let’s try the steps in the cluster. First, we view the kubelet configuration with
- sudo cat /etc/kubernetes/kubelet.conf
Copy the content of “certificate-authority-data”, exit the text editor, and decode the base64 encoding of this content. Then, save the decoded content to /etc/kubernetes/pki/apiserver-kubelet-ca.crt. The command should be similar to
- echo <copied base64-encoded content> | base64 -d > /etc/kubernetes/pki/apiserver-kubelet-ca.crt
Now, we head over to modify /etc/kubernetes/manifests/kube-apiserver.yaml, so the communication between kube-apiserver and the kubelets can leverage the newly added CA information.
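The remediation for 1.2.6 boils down to adding one flag to the kube-apiserver command; a fragment of the manifest after the change (all other flags unchanged):

```yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --kubelet-certificate-authority=/etc/kubernetes/pki/apiserver-kubelet-ca.crt
```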
Since kube-apiserver runs as a static Pod, it restarts automatically once the manifest changes; check by running
- kubectl get pods -n kube-system | grep kube-apiserver
We rerun the kube-bench test to confirm that [FAIL] 1.2.6 has now turned into [PASS] 1.2.6.
Verify Platform Binaries
This topic is pretty self-explanatory: administrators need to ensure that whatever is installed inside K8s clusters is intact and untampered. So we compare the downloaded file’s sha512sum with the string provided on the GitHub release page.
First, we head to this site and search for the version we want to install in the K8s clusters.
In my case, I need the v1.20.2 details, so I click on v1.20.2.
Then, click on CHANGELOG and find Server binaries.
Depending on the processor architecture the K8s cluster runs on, the server binaries have different sha512sums. The next step is to download the binaries onto the server and compare their checksum with the string shown on the site.
- echo "<copied sha512sum>" > compare
- wget <v1.20.2 server binaries>
- sha512sum <downloaded file> >> compare
- cat compare | uniq
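As a concrete example, for the v1.20.2 server binaries on amd64 Linux (the download URL follows the standard dl.k8s.io release location), this might look like:
- echo "<copied sha512sum>" > compare
- wget https://dl.k8s.io/v1.20.2/kubernetes-server-linux-amd64.tar.gz
- sha512sum kubernetes-server-linux-amd64.tar.gz | cut -d' ' -f1 >> compare
- cat compare | uniq
The cut keeps only the hash so the two lines are comparable; if the binary is intact, uniq collapses them into a single line, while two different lines mean a mismatch.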
This article briefly covers the main topics within the CKS certification exam domain of cluster setup. We will talk about the next domain, cluster hardening, in the next article.