Azure Kubernetes Service (AKS) Networking Deep Dive Part 1 — Pod Intra-Node/Inter-Node Communication

  • Container Communication
  • Pod Intra-Node/Inter-Node Communication
  • Pod Service Communication
  • Pod Ingress Communication
  • Azure Container Network Interface (CNI)
  • CoreDNS

Container Communication

Containers need a way to communicate with the external environment. The docker0 bridge on each hosting Node is created for this purpose and serves as the container gateway.

# get the node's name
- kubectl get nodes
# get into the node's shell (requires the kubectl node-shell plugin to be installed)
- kubectl node-shell <node name>
# go through the iptables NAT rule chains that Docker sets up
- iptables -t nat -L -n | column -t | grep DOCKER
- iptables -t nat -L POSTROUTING -n | column -t
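The POSTROUTING chain is where container traffic leaving docker0 gets source-NATed (MASQUERADE) to the Node's IP. As a rough sketch, the masqueraded source CIDR can be picked out of that output like this (the sample lines below are illustrative, not captured from a real AKS node):

```shell
# Illustrative POSTROUTING output; real CIDRs differ per node
postrouting='MASQUERADE  all  --  172.17.0.0/16  0.0.0.0/0
RETURN      all  --  10.244.0.0/16  10.244.0.0/16'

# Print the source CIDR that gets masqueraded (source-NATed) on egress
echo "$postrouting" | awk '$1 == "MASQUERADE" { print $4 }'   # → 172.17.0.0/16
```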

Pod Intra-Node Communication

Let’s go through the background information first. On the Azure portal, if we create the cluster and keep everything as default, the networking configuration will look similar to the below.

  • Kubenet is the default network plugin for an AKS cluster. Its characteristic is that the Docker, Pod, and Service address spaces are kept separate to ensure nothing overlaps.
  • In AKS with the kubenet plugin, the Docker, Pod, and Service address spaces are NOT on the same layer as the Azure virtual network. We can almost think of the cluster as a nested virtual environment, where each resource uses its own nested private IP address to communicate.
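That non-overlap requirement can be sanity-checked offline. Below is a minimal bash sketch; the two CIDRs are kubenet's common defaults (Pod CIDR 10.244.0.0/16, Service CIDR 10.0.0.0/16) and are used here only as assumptions:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "overlap" if two CIDRs share any address, else "disjoint"
cidr_overlap() {
  local n1 n2 p1 p2 min mask
  n1=$(ip2int "${1%/*}"); p1=${1#*/}
  n2=$(ip2int "${2%/*}"); p2=${2#*/}
  min=$(( p1 < p2 ? p1 : p2 ))
  mask=$(( (0xFFFFFFFF << (32 - min)) & 0xFFFFFFFF ))
  [ $(( n1 & mask )) -eq $(( n2 & mask )) ] && echo overlap || echo disjoint
}

# Kubenet's common defaults (assumed): Pod CIDR vs Service CIDR
cidr_overlap 10.244.0.0/16 10.0.0.0/16   # → disjoint
```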
# get nodes' name
- kubectl get nodes
# get into one of the node's shell
- kubectl node-shell <node's name>
# if you exit the shell and check the Pods, you will find that node-shell is actually a Pod with privileged permissions to execute commands on the hosting Node
- kubectl get pods
# list containers
- docker ps
# get container ID by approximate name
- docker ps | grep k8s_<pod name>_<pod name>
# get process ID of the container
- docker inspect --format '{{.State.Pid}}' <container ID>
# enter the network namespace of the process
- nsenter -t <process ID> -n ip addr
# list the ip links on the hosting Node
- ip link list
# in the hosting Node
- ifconfig -a
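Inside the Pod's network namespace, nsenter shows eth0 with an `@ifN` suffix, and `ip link list` on the Node shows the matching veth interface; the numeric interface indices pair the two ends together. A small sketch of extracting those pairs, using made-up `ip link` lines as an assumption of the format:

```shell
# Illustrative `ip link` output from a node: each veth line carries its own
# interface index and, after '@if', the index of its peer inside the pod netns
links='4: vethwe1a2b3c@if3: <BROADCAST,MULTICAST,UP> mtu 1500
6: vethdd4e5f6a@if5: <BROADCAST,MULTICAST,UP> mtu 1500'

# Print "host-ifindex pod-ifindex" pairs
echo "$links" | awk -F'[:@]' '/veth/ { gsub(/if/, "", $3); print $1, $3 }'
# → "4 3" and "6 5": host veth 4 pairs with pod-side eth0 index 3, etc.
```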

Pod Inter-Node Communication

All the concepts of Pod communication are the same as within a single Node. The only difference is that when Pod1 (Node1) tries to contact Pod3 (Node2), the ARP request fails on Node1’s container bridge (cbr0), because Pod3 is not on that bridge. Node1’s cbr0 therefore forwards the request according to the configured user-defined routes, and Node1’s eth0 delivers it to the right Node (Node2), where Pod3 is located. Node2’s cbr0 then issues its own ARP request to locate Pod3, and the connection is established.

Source: Understanding Kubernetes Networking — Part 2 | by Sumeet Kumar | Microsoft Azure | Medium
  • Pod1 netns’ eth0 ← → Root netns’ veth0
  • Pod2 netns’ eth0 ← → Root netns’ veth1
  • Node1 and Node2 both have cbr0 as their container bridge. The container bridges are in the same Pod address space.
# get IP forwarding setting
- sysctl -a | grep -i net.ipv4.ip_forward
# get routing table information
- route -n
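With kubenet, AKS installs one user-defined route per node, typically a /24 slice of the Pod CIDR pointing at that node's VNet IP. The lookup Node1 performs can be sketched as follows; the route table and next-hop addresses below are illustrative assumptions, not real cluster output:

```shell
# Illustrative kubenet route table: <pod /24> <next-hop node IP> <netmask>
routes='10.244.0.0  10.240.0.4  255.255.255.0
10.244.1.0  10.240.0.5  255.255.255.0'

pod_ip=10.244.1.7   # Pod3 on Node2 (assumed address)

# Match the destination Pod IP against each /24 route and print the next hop
echo "$routes" | awk -v ip="$pod_ip" '
  { split($1, d, "."); split(ip, p, ".")
    if (d[1] == p[1] && d[2] == p[2] && d[3] == p[3]) print $2 }'
# → 10.240.0.5, i.e. Node1 forwards the packet to Node2
```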





Learning new things about Kubernetes every day. Hopefully, these learning notes can help people on the same journey!