CNI is a spec and a set of binaries that Kubernetes calls. It does not define how networking works, only the interface for creating and deleting network interfaces for Pods. The CNI plugin you install (Calico, Cilium, Flannel, etc.) implements that contract.
When a Pod is created, kubelet (through the container runtime) invokes the CNI plugin with a JSON config on stdin and a set of environment variables. The plugin creates a veth pair, moves one end into the Pod's network namespace, assigns an IP, and prints a result (IP, routes, DNS) back on stdout.
```mermaid
sequenceDiagram
    participant K as Kubelet
    participant C as CNI plugin
    participant N as Pod netns
    K->>C: CNI_COMMAND=ADD + config
    C->>N: create veth, set IP, routes
    C-->>K: result (IP, routes, DNS)
    K->>C: CNI_COMMAND=DEL (on Pod delete)
    C->>N: teardown veth + release IP
```
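The result on that last ADD arrow has a defined JSON shape. Here is a minimal Go sketch of what a plugin prints to stdout on a successful ADD; the field names follow the CNI result schema, but the structs are simplified stand-ins for the official types in github.com/containernetworking/cni, and the addresses are made up.

```go
// Minimal sketch of the success result a CNI plugin prints on ADD.
package main

import (
	"encoding/json"
	"os"
)

type result struct {
	CNIVersion string     `json:"cniVersion"`
	Interfaces []iface    `json:"interfaces"`
	IPs        []ipConfig `json:"ips"`
	Routes     []route    `json:"routes"`
	DNS        dns        `json:"dns"`
}

type iface struct {
	Name    string `json:"name"`
	Sandbox string `json:"sandbox,omitempty"`
}

type ipConfig struct {
	Address   string `json:"address"`               // CIDR assigned to the Pod interface
	Gateway   string `json:"gateway,omitempty"`
	Interface *int   `json:"interface,omitempty"`   // index into "interfaces"
}

type route struct {
	Dst string `json:"dst"`
	GW  string `json:"gw,omitempty"`
}

type dns struct {
	Nameservers []string `json:"nameservers,omitempty"`
}

func main() {
	idx := 0
	r := result{
		CNIVersion: "1.0.0",
		Interfaces: []iface{{Name: "eth0", Sandbox: "/var/run/netns/cni-1234"}},
		IPs:        []ipConfig{{Address: "10.244.1.5/24", Gateway: "10.244.1.1", Interface: &idx}},
		Routes:     []route{{Dst: "0.0.0.0/0"}},
		DNS:        dns{Nameservers: []string{"10.96.0.10"}},
	}
	// The runtime reads this JSON from the plugin's stdout and hands it back to kubelet.
	json.NewEncoder(os.Stdout).Encode(r)
}
```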
What kubelet passes (common env vars; a manual invocation using them is sketched after the config locations below):
CNI_COMMAND = ADD / DEL / CHECK
CNI_CONTAINERID = container ID
CNI_NETNS = path to Pod netns
CNI_IFNAME = interface name (usually eth0)
CNI_PATH = where plugin binaries live
Config locations:
Binaries: /opt/cni/bin
Configs: /etc/cni/net.d/*.conf or *.conflist
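Putting the two together, you can invoke a plugin by hand the same way the runtime does: config on stdin, parameters in CNI_* environment variables, result on stdout. The sketch below assumes a bridge plugin and a hypothetical config file named 10-mynet.conf; adjust the paths and netns to your node.

```go
// Hand-invoking a CNI plugin the way the container runtime does.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical config file; real clusters usually ship a .conflist here.
	conf, err := os.ReadFile("/etc/cni/net.d/10-mynet.conf")
	if err != nil {
		log.Fatal(err)
	}

	cmd := exec.Command("/opt/cni/bin/bridge") // main plugin named in the config
	cmd.Stdin = bytes.NewReader(conf)
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=example-container-id",
		"CNI_NETNS=/var/run/netns/example",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)

	out, err := cmd.Output() // plugin prints the result JSON (or an error object) on stdout
	if err != nil {
		log.Fatalf("ADD failed: %v", err)
	}
	fmt.Printf("%s\n", out)
}
```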
Plugin chain: main + IPAM + optional meta plugins
Most real deployments use a chain (a sample .conflist follows this list):
Main plugin: creates the interface and sets up connectivity (bridge, macvlan, vxlan, eBPF).
IPAM plugin: hands out Pod IPs and routes (host-local, dhcp, or the CNI's own IPAM).
Meta plugins: optional extras chained after the main plugin, such as portmap (hostPort), bandwidth (traffic shaping), or tuning (sysctls).
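A minimal sketch of such a chained .conflist, embedded in Go so the chain order can be parsed and printed. The plugin names, bridge name, and subnet are illustrative assumptions, not a recommendation.

```go
// Parse a chained CNI .conflist and print the plugins in execution order.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

const confList = `{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.244.1.0/24" }
    },
    { "type": "portmap",   "capabilities": { "portMappings": true } },
    { "type": "bandwidth", "capabilities": { "bandwidth": true } }
  ]
}`

func main() {
	var parsed struct {
		Name    string                   `json:"name"`
		Plugins []map[string]interface{} `json:"plugins"`
	}
	if err := json.Unmarshal([]byte(confList), &parsed); err != nil {
		log.Fatal(err)
	}
	for i, p := range parsed.Plugins {
		// Plugins run in this order on ADD and in reverse order on DEL.
		fmt.Printf("%d: %v\n", i, p["type"])
	}
}
```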
Most CNIs create a veth pair between the Pod namespace and the host. The host side is attached to a bridge or virtual switch, and routes are installed so Pods can reach each other.
```mermaid
flowchart LR
    Pod[Pod netns] -- veth --> Host[Host veth] --> Bridge[cni0 or vswitch]
    Bridge --> Route[Routing table]
```
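To get a feel for what the main plugin actually does on the host side, here is a rough Go sketch using the vishvananda/netlink and netns libraries: create the veth pair, move the peer into the Pod netns, attach the host end to the bridge. It needs root, the names, MTU, and netns path are assumptions, and the Pod-side steps (rename to eth0, assign the IPAM address, install the default route) are intentionally left out.

```go
// Host-side plumbing of a bridge-style CNI, heavily simplified.
package main

import (
	"log"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

func main() {
	// 1. Create a veth pair on the host.
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "vethhost0", MTU: 1450},
		PeerName:  "vethpod0", // becomes eth0 once renamed inside the Pod netns
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}

	// 2. Move the peer end into the Pod's network namespace (the CNI_NETNS path).
	podNS, err := netns.GetFromPath("/var/run/netns/example")
	if err != nil {
		log.Fatal(err)
	}
	peer, err := netlink.LinkByName("vethpod0")
	if err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetNsFd(peer, int(podNS)); err != nil {
		log.Fatal(err)
	}

	// 3. Attach the host end to the bridge and bring it up.
	bridge, err := netlink.LinkByName("cni0")
	if err != nil {
		log.Fatal(err)
	}
	hostEnd, err := netlink.LinkByName("vethhost0")
	if err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetMaster(hostEnd, bridge); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(hostEnd); err != nil {
		log.Fatal(err)
	}
}
```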
If your CNI overlays traffic (VXLAN/Geneve), the encapsulation headers reduce the usable MTU inside the tunnel. If the Pod MTU is left too large, oversized packets get dropped or fragmented, which shows up as TCP stalls and intermittent failures (small requests work, large transfers hang).
Practical rule of thumb:
Underlay MTU 1500, VXLAN overhead ~50 bytes -> set Pod MTU to ~1450
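The ~50 bytes in that rule of thumb break down as outer IPv4 + outer UDP + VXLAN header + inner Ethernet frame; the short Go snippet below just makes the arithmetic explicit (Geneve is similar, but its header grows if options are attached).

```go
// MTU arithmetic for a VXLAN overlay on a 1500-byte underlay.
package main

import "fmt"

func main() {
	underlayMTU := 1500

	outerIP := 20  // outer IPv4 header
	outerUDP := 8  // outer UDP header
	vxlan := 8     // VXLAN header
	innerEth := 14 // inner Ethernet frame carried inside the tunnel

	overhead := outerIP + outerUDP + vxlan + innerEth // 50 bytes
	podMTU := underlayMTU - overhead

	fmt.Printf("overlay overhead: %d bytes, Pod MTU: %d\n", overhead, podMTU) // 50, 1450
}
```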
CNI is responsible for Pod interfaces and routing, but Service load balancing is often done by kube-proxy or eBPF dataplanes:
iptables/IPVS (kube-proxy)
eBPF (e.g. Cilium as a kube-proxy replacement)
Think of it as:
CNI: connect Pods to the network
Service dataplane: translate ClusterIP to Pods
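A conceptual model of that translation step, independent of whether it is implemented with iptables, IPVS, or eBPF: a ClusterIP:port maps to a set of ready Pod endpoints, and each new connection is steered to one of them. This is a mental model only, not kube-proxy's actual code, and the addresses are invented.

```go
// Toy model of ClusterIP -> Pod endpoint translation.
package main

import (
	"fmt"
	"math/rand"
)

// clusterIP:port -> ready Pod endpoints (in a real cluster, kept in sync from EndpointSlices)
var serviceTable = map[string][]string{
	"10.96.0.50:80": {"10.244.1.5:8080", "10.244.2.7:8080", "10.244.3.3:8080"},
}

// pickBackend resolves a Service address to one concrete Pod address, roughly
// what an iptables DNAT rule or an eBPF map lookup does per connection.
func pickBackend(serviceAddr string) (string, bool) {
	backends, ok := serviceTable[serviceAddr]
	if !ok || len(backends) == 0 {
		return "", false
	}
	return backends[rand.Intn(len(backends))], true
}

func main() {
	for i := 0; i < 3; i++ {
		backend, _ := pickBackend("10.96.0.50:80")
		fmt.Println("10.96.0.50:80 ->", backend)
	}
}
```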
CNI deep dive: design philosophies and architectures
Each popular CNI reflects a strong opinion about what “good cluster networking” looks like. The choice is often less about features and more about architecture and operational fit.
CNI is the glue between Kubernetes and Linux networking. Once you see it as a contract and a lifecycle (ADD/DEL), debugging becomes much more straightforward.