Homelab Kubernetes with Kubespray, Part 2: Cilium, Istio, and ExternalDNS Baseline#
Configure Cilium L2 load balancing, Istio ingress routing, and ExternalDNS so bare-metal services get stable IPs and domains.
Goals#
Enable Cilium L2 LB IPAM and announcements for bare-metal LoadBalancer services.
Standardize ingress routing with Istio Gateway/VirtualService and mTLS.
Automate DNS records with ExternalDNS + Cloudflare for Istio resources.
Series outline#
Part 1 — Architecture and bootstrap
Part 2 — Cilium, Istio, and ExternalDNS baseline
Part 3 — Storage with Rook (Ceph) and Local Path Provisioner
Part 4 — Tooling: Velero, Hubble UI, Jaeger, Kiali, Tailscale, K9s
Part 5 — ECK Stack (Elastic Cloud on Kubernetes)
Planned next (optional):
Monitoring Stack (Prometheus/Grafana)
GitOps with Argo CD
Security, backup, and upgrades
Prerequisites#
Part 1 is completed (cluster bootstrapped with Kubespray).
You have kubectl access to the cluster.
A free LAN IP range reserved for LoadBalancer IPs.
1) Cilium: CNI + kube-proxy replacement + L2 LoadBalancer#
Note: Kubespray was installed with kube_network_plugin: none, so I install CNI (Cilium) separately after bootstrap. Cilium's L2 LB is already in place and is sufficient for my needs, so I do not install MetalLB.
The cluster CNI is Cilium and it runs in kube-proxy replacement mode. For bare-metal LoadBalancer, I pair Cilium LB IPAM with L2 announcements.
LoadBalancer IP pool: 192.168.2.2-192.168.2.254
L2 announce interface: enp1s0
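As an illustrative sketch (values-file keys can vary between Cilium chart versions, so check your chart's reference), the Helm values that enable kube-proxy replacement and L2 announcements at install time might look like:

```yaml
# cilium-values.yaml — minimal sketch for a Helm-based Cilium install
kubeProxyReplacement: true        # run Cilium as the kube-proxy replacement
k8sServiceHost: 192.168.2.10      # hypothetical API server address; adjust for your cluster
k8sServicePort: 6443
l2announcements:
  enabled: true                   # required for CiliumL2AnnouncementPolicy to take effect
```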
Apply L2 announcements to all LoadBalancer services (empty selector).

Config: platform/cilium/cilium-l2lb-pool.yaml

Note: the Cilium CRD apiVersion may vary by version. Confirm with:

```shell
kubectl api-resources | rg CiliumLoadBalancerIPPool
```
```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: homelab-l2lb-pool
spec:
  blocks:
    - start: 192.168.2.2
      stop: 192.168.2.254
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: homelab-l2-announce
spec:
  interfaces:
    - ^enp1s0$
  loadBalancerIPs: true
  serviceSelector: {}
```

This lets me assign static IPs to LoadBalancer services without MetalLB; ARP announcements are handled at L2.
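With the pool in place, a LoadBalancer Service can request a specific address from it. A minimal sketch (the Service name and ports are placeholders; on recent Cilium releases the annotation key is `lbipam.cilium.io/ips`, while older releases used `io.cilium/lb-ipam-ips` — check your version's docs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo                        # hypothetical service
  annotations:
    lbipam.cilium.io/ips: "192.168.2.10"   # pin an IP from the pool; omit to auto-assign
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```

Because the announcement policy uses an empty serviceSelector, this Service's external IP is announced via ARP on enp1s0 as soon as LB IPAM assigns it.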
2) Istio: public Gateway + VirtualService routing + mTLS#

Istio is the routing hub for everything exposed outside the cluster. I run a shared public gateway and route services with VirtualService.

2-1. Public gateway#

Config: gitops/argocd-apps/istio-routing/istio-virtualservices.yaml
Open 80/443
Route *.homelabird.com, homelabird.com (replace with your domain)
Use a wildcard TLS secret
Keycloak uses a separate TLS secret
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts: ["*.homelabird.com", "homelabird.com"]
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: wildcard-homelabird-tls
      hosts: ["*.homelabird.com", "homelabird.com"]
    - port:
        number: 443
        name: https-keycloak
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: keycloak-homelabird-tls
      hosts: ["keycloak.homelabird.com"]
```

2-2. VirtualService routing#

Examples of public routes:
Grafana: grafana.homelabird.com
ArgoCD: argocd.homelabird.com
Kiali: kiali.homelabird.com
Ceph RGW: object.homelabird.com
Knative Broker: events.homelabird.com
Harbor, Jaeger, Kibana, etc.
Each service has a DestinationRule to control TLS mode (ISTIO_MUTUAL/disable).
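As a sketch of that pattern (the Grafana host, namespace, and port below are assumptions for illustration), a route plus its DestinationRule might look like:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - grafana.homelabird.com
  gateways:
    - istio-system/public-gateway      # the shared public gateway from 2-1
  http:
    - route:
        - destination:
            host: grafana.monitoring.svc.cluster.local
            port:
              number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: grafana
  namespace: istio-system
spec:
  host: grafana.monitoring.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL               # or DISABLE for services without a sidecar
```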
2-3. PeerAuthentication (mTLS)#

Namespaces like logging, monitoring, and argocd can be STRICT by default. Be careful with kube-system and add PERMISSIVE exceptions for components that do not support sidecar injection (CoreDNS, CNI, etc.).

Config: gitops/argocd-apps/istio-routing/istio-peerauth.yaml
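A minimal sketch of that pair of policies — a namespace-wide STRICT default plus a PERMISSIVE carve-out for CoreDNS (the `k8s-app: kube-dns` label is the common CoreDNS label; verify it in your cluster):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: monitoring          # repeat per namespace (logging, argocd, ...)
spec:
  mtls:
    mode: STRICT
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: coredns-permissive
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns          # CoreDNS pods have no sidecar, so keep them PERMISSIVE
  mtls:
    mode: PERMISSIVE
```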
2-4. Events JWT protection#

events.homelabird.com is blocked unless a JWT header is present. I enforce this on the Istio IngressGateway with RequestAuthentication + AuthorizationPolicy.
Config: platform/istio/istio-events-auth.yaml
JWT header: X-Events-JWT
Keycloak OIDC issuer
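A sketch of that gateway-level enforcement, following Istio's documented deny-without-JWT pattern (the Keycloak issuer and JWKS URLs below are placeholders — substitute your realm's actual OIDC discovery values):

```yaml
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: events-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
    - issuer: https://keycloak.homelabird.com/realms/homelab            # placeholder issuer
      jwksUri: https://keycloak.homelabird.com/realms/homelab/protocol/openid-connect/certs
      fromHeaders:
        - name: X-Events-JWT          # read the token from the custom header
---
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: events-require-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY                        # deny requests to this host that carry no valid JWT
  rules:
    - from:
        - source:
            notRequestPrincipals: ["*"]
      to:
        - operation:
            hosts: ["events.homelabird.com", "events.homelabird.com:*"]
```

Using a DENY policy scoped to the events host keeps the other hosts on the shared gateway unaffected.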
3) ExternalDNS: Cloudflare + Istio Gateway/VirtualService#

ExternalDNS is integrated with Cloudflare and also watches Istio resources for record creation.
Config: platform/external-dns/values.yaml
provider: cloudflare
domain filter: homelabird.com
sources: service, ingress, istio-gateway, istio-virtualservice
registry: TXT (ownerId: cluster-local-ext, prefix: extdns2-)
sync policy
ServiceMonitor enabled (monitoring namespace)
Istio sources require an ExternalDNS build/chart that includes those sources; verify your chart flags and version.
```yaml
sources:
  - service
  - ingress
  - istio-gateway
  - istio-virtualservice
domainFilters:
  - homelabird.com
policy: sync
registry: txt
txtOwnerId: cluster-local-ext
txtPrefix: extdns2-
```

Cloudflare API tokens are injected as Secret values (cloudflare-api-token, key: api-token).
Cloudflare API token (how to issue)#
Cloudflare dashboard → My Profile → API Tokens → Create Token
Use the Edit zone DNS template (or create a custom token)
Permissions: Zone → DNS → Edit
Zone Resources: Include → Specific zone → your domain
Create token and store it as cloudflare-api-token Secret
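The resulting Secret can be sketched like this (the external-dns namespace is an assumption — use whatever namespace your ExternalDNS deployment runs in):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token
  namespace: external-dns            # adjust to your ExternalDNS namespace
type: Opaque
stringData:
  api-token: <your-cloudflare-api-token>   # paste the token issued above
```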
Cluster snapshot (kubectl top nodes)#

```text
NAME       CPU(cores)  CPU(%)  MEMORY(bytes)  MEMORY(%)
master-1   510m        15%     9224Mi         62%
master-2   282m        8%      7637Mi         52%
master-3   1081m       7%      20071Mi        72%
worker-1   1920m       12%     19776Mi        65%
worker-2   751m        4%      17960Mi        59%
```

Istio vs Linkerd (quick comparison)#

| Item | Istio | Linkerd |
| --- | --- | --- |
| Traffic management | Rich Gateway/VirtualService routing, advanced policies. | Simpler routing, fewer advanced controls. |
| Security | Fine-grained authN/authZ, JWT, policy layering. | Strong mTLS by default, simpler policy surface. |
| Overhead | Heavier (Envoy sidecars / ambient data plane). | Lighter (more minimal data plane). |
| Observability | Broad integrations and telemetry depth. | Solid basics, simpler stack. |
| Portability | Strong multi-cloud and hybrid support. | Good, but fewer enterprise-grade edge cases. |
| Ops complexity | Higher learning curve and tuning. | Easier day-to-day operations. |
| Fit for this lab | Matches future expansion and advanced features. | Great for simplicity, but less headroom. |
Why Istio (not Linkerd)#

I chose Istio mainly for advanced features and multi-cloud portability. Concretely:
Traffic management depth: richer Gateway/VirtualService routing, multi-protocol support, and fine-grained policy controls.
Security surface: stronger built-in controls for mTLS, authN/authZ, JWT, and policy layering across namespaces.
Ecosystem breadth: broader integrations with observability tools and GitOps flows I already use.
Portability: consistent behavior across on-prem and multiple cloud environments, which matters for later expansion.
Feature headroom: supports more complex ingress, mesh, and policy needs as the lab grows.
Linkerd is simpler and lighter, but for my roadmap the extra capabilities and portability in Istio are worth the overhead.
mTLS modes: sidecar vs ambient#

| Mode | How it works | Pros | Cons |
| --- | --- | --- | --- |
| Sidecar | Envoy sidecar per pod terminates and originates mTLS. | Strong isolation per workload; mature and widely documented; rich per-pod telemetry. | Extra CPU/memory per pod; rollout complexity; more injection/upgrade overhead. |
| Ambient | mTLS handled by ztunnel per node; optional waypoint for L7. | Lower per-pod overhead; simpler onboarding; faster rollouts. | L7 policy needs a waypoint; newer model; different debugging path. |
```mermaid
flowchart LR
  subgraph Sidecar_mTLS
    A1[Pod A] --> S1[Sidecar]
    S1 --> S2[Sidecar]
    S2 --> B1[Pod B]
  end
  subgraph Ambient_mTLS
    A2[Pod A] --> Z1[ztunnel]
    Z1 --> Z2[ztunnel]
    Z2 --> B2[Pod B]
    A2 -. L7 policy .-> W1[Waypoint Proxy]
  end
```
Ambient generally reduces per-pod overhead, but you trade off some per-pod visibility and must add a waypoint for L7 policy. If you want hard numbers, check the current Istio performance docs for your target version.
4) Wrap-up#

This baseline focuses on three things:
Cilium + L2 LB for stable bare-metal LoadBalancer IPs
Istio Gateway/VirtualService for consistent routing with mTLS
ExternalDNS + Cloudflare for automatic DNS registration of Istio resources
With this in place, the flow becomes “deploy service → route via Istio → auto DNS” and the stack can be operated almost entirely through GitOps.