Azure Kubernetes Service (AKS) Network Deep Dive Part 3—Azure Container Network Interface (CNI)

Jonathan
4 min read · May 20, 2021

If you have not yet checked out AKS Network Deep Dive Part 1 and Part 2, please use the links above to go through that content first.

In this article, we will focus on how Azure CNI operates inside AKS.

What is CNI?

Short for Container Network Interface. It is a specification where all the networking implementation is done by plugins. It was developed to provide a simple contract between the container runtime and the networking implementation for containers.

— Quoted from here

  • Kubernetes references the CNI plugin binaries stored in /opt/cni/bin.
  • Kubelet reads the network configuration file from the CNI directory /etc/cni/net.d. In the screenshot, the left side shows the kubenet configuration and the right side shows the Azure CNI configuration; you can check both on a node yourself, as shown below.
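
If you want to confirm this from a node shell (the node-shell plugin used later in this article works well for this), a quick sketch looks like the following; the exact file name under /etc/cni/net.d depends on which network plugin the cluster uses:

# list the CNI plugin binaries Kubernetes can call
ls /opt/cni/bin
# print the network configuration kubelet reads (e.g. 10-azure.conflist for Azure CNI)
cat /etc/cni/net.d/*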

The main characteristic of this network solution is that it gives each Pod an IP address at the Azure virtual network level. To put it in plain words, if you have a virtual network with a classless inter-domain routing (CIDR) range of 10.0.0.0/8, all the Pods and Nodes use IP addresses from this range (Services keep their own, separate service CIDR). The advantage of this setup is that a Pod can represent itself when communicating with other Azure services, such as Azure SQL, Azure Web App, Azure Cache for Redis and so on. It also takes a huge amount of pressure off the hosting Nodes, as they do not need to perform additional masquerading before traffic is source network address translated (SNAT) to the AKS-associated public IP addresses.
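
For context, this behavior comes from creating the cluster with the Azure CNI network plugin. A minimal sketch of such a deployment is below; the resource names, subnet ID and CIDR values are placeholders, not the exact values used for this cluster:

# create an AKS cluster that uses Azure CNI, so Pods get IPs from the given VNet subnet
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id <subnet resource ID> \
  --service-cidr 10.2.0.0/24 \
  --dns-service-ip 10.2.0.10 \
  --generate-ssh-keys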

The difference between kubenet AKS and Azure CNI AKS is that with kubenet, Pods and Services use CIDRs that live outside the virtual network and only the Nodes get virtual network IPs. Kubenet therefore needs a route table so Pod traffic can reach Pods on other Nodes; Azure CNI does not need that. You can inspect that route table with the commands below.
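
A hedged example of listing the kubenet route table (the managed node resource group and route table names are placeholders); an Azure CNI cluster has no such per-Node Pod routes:

# kubenet only: list the routes that send each Pod CIDR to its hosting Node
az network route-table list --resource-group MC_myrg_myaks_eastus -o table
az network route-table route list \
  --resource-group MC_myrg_myaks_eastus \
  --route-table-name <aks route table name> -o table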

How does Azure CNI achieve the goal of having each Pod with an Azure-level IP address?

In simple words, whenever a Node is created, multiple secondary IP configurations are created on its network interface along with the primary one, and these secondary addresses are handed out to the Pods scheduled on that Node.

From Azure portal, virtual network.
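
If you prefer the CLI over the portal, the same IP configurations can be listed from the Node's network interface in the managed node resource group; the resource group and NIC names below are placeholders:

# Node NICs live in the managed node resource group (usually MC_<rg>_<cluster>_<region>)
az network nic list --resource-group MC_myrg_myaks_eastus --query "[].name" -o tsv
# list every IP configuration (the primary plus the ones reserved for Pods) on one NIC
az network nic ip-config list --resource-group MC_myrg_myaks_eastus --nic-name <node NIC name> -o table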

We can check the same information by getting into the Node's shell.

# get the Node's name
kubectl get nodes
# get into the Node's shell (kubectl node-shell is a plugin; see the reference for installation)
kubectl node-shell <node name>
# list IP addresses
ip addr list

It is not hard to see that the Node's network interface index is actually linked with the Pod's network interface index; each end of the pair references the other's index, just under a different number.

# get all Pods and their hosting Nodes
kubectl get pods -n kube-system -o wide
# look for a Pod on the target Node with an IP close to the beginning of the CIDR, then exec into it
kubectl exec -it <Pod name> -n kube-system -- /bin/bash
# check IP addresses associated with the Pod
ip addr list

Since the Pod's interface and the Node's interface are the two linked ends of the same pair in the same Azure virtual network layer, they share the interface index numbering. Hence, the Pod takes index 4 and the Node takes index 5.
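
One way to confirm the pairing is sketched below; eth0 and the index numbers assume the example above and will differ on your cluster:

# inside the Pod: print the interface index of the peer interface on the Node
cat /sys/class/net/eth0/iflink    # prints 5 in this example
# back on the Node: find the host-side interface that carries that index
ip -o link | grep '^5:'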

Image Source: Concepts — Networking in Azure Kubernetes Services (AKS) — Azure Kubernetes Service | Microsoft Docs

Also, because Pods are already in the Azure virtual network layer, their traffic does not need to be SNAT'd to the Node's primary IP address inside the virtual network. For destinations outside the virtual network, a Pod reaches the external environment through SNAT to the associated public IP address.

Unlike kubenet, traffic to endpoints in the same virtual network isn’t NAT’d to the node’s primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that’s external to the virtual network still NATs to the node’s primary IP.

In practice, since every resource in the cluster uses an “actual” IP address from Azure, the importance of IP planning cannot be stressed enough. In many real-world cases, IP addresses simply run out once the business ramps up. However, this situation should be eased a little by a new feature, dynamic IP address allocation and enhanced subnet support, coming in the near future. Instead of pre-provisioning all the IP configurations on the hosting Node, IP configurations can be prepared in the backend and created on the fly.
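
To get a feel for how quickly addresses are consumed, here is a rough sizing sketch based on the subnet formula in the AKS documentation, assuming the Azure CNI default of 30 Pods per Node; the node count is just an example value:

# required IPs ≈ (nodes + 1 spare for upgrade) + (nodes + 1) * max Pods per Node
NODES=50
MAX_PODS=30
echo $(( (NODES + 1) + (NODES + 1) * MAX_PODS ))    # 1581 addresses, so at least a /21 subnet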

I would write more about inbound/outbound connections if there were a big difference between kubenet and Azure CNI, but there is not when you look into the iptables rule chains. If you are not familiar with the theory yet, please go to AKS Network Deep Dive Part 2 — Inbound Communication for more information. That is it! This covers the bare minimum of how Azure CNI works under the hood. Happy learning!

