One-line summary: I built a Proxmox VM-based cluster and learned that resource overhead, storage bottlenecks, and DaemonSet baselines make small hardware far less efficient than it looks on paper.
Context
About a year ago, while I was in the military, I tried to build a Kubernetes lab on mini PCs using Proxmox (PVE). The goal was a clean VM-based cluster using Terraform, Ansible, and cloud-init. I did get it running, but the efficiency was much worse than I expected.
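The actual provisioning was driven by Terraform against cloud-init templates. Purely as an illustration of the underlying Proxmox API flow (clone a template, push cloud-init settings, boot), here is a minimal sketch using the proxmoxer Python client; the host name, node name, VM IDs, and credentials are hypothetical and not from the original lab.

```python
from proxmoxer import ProxmoxAPI

# Hypothetical host, node, VMIDs, and credentials -- adjust for your own lab.
# The real cluster was built with Terraform; this only sketches the same
# clone-from-cloud-init-template flow against the Proxmox REST API.
proxmox = ProxmoxAPI("pve1.lab.local", user="root@pam",
                     password="change-me", verify_ssl=False)

NODE = "pve1"          # Proxmox node that holds the cloud-init template
TEMPLATE_ID = 9000     # VMID of the cloud-init-enabled template
NEW_ID = 101           # VMID for the new Kubernetes worker

# Full clone of the template into a new VM.
proxmox.nodes(NODE).qemu(TEMPLATE_ID).clone.post(
    newid=NEW_ID, name="k8s-worker-1", full=1)

# Push cloud-init settings (user, static IP) and size the VM.
proxmox.nodes(NODE).qemu(NEW_ID).config.post(
    cores=4,
    memory=8192,
    ciuser="ubuntu",
    ipconfig0="ip=192.168.1.101/24,gw=192.168.1.1")

# Boot the VM; cloud-init applies the settings on first start.
proxmox.nodes(NODE).qemu(NEW_ID).status.start.post()
```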
Hardware I used
Two Ryzen 5700U mini PCs, each with:
- Ryzen 5700U (8C/16T)
- 32 GB DDR4 (16 GB x2)
- Samsung 1 TB NVMe SSD x2
Plus three N100 mini PCs:
- Intel N100 (4 cores)
- 16 GB RAM each
Even with this, the cluster was not efficient under real workloads.
Cluster topology at the time
```mermaid
flowchart TB
    Router[Router] --> Node1[Ryzen 5700U Mini PC #1]
    Router --> Node2[Ryzen 5700U Mini PC #2]
    Router --> Node3[N100 Mini PC #1]
    Router --> Node4[N100 Mini PC #2]
    Router --> Node5[N100 Mini PC #3]
```
What went wrong: The Efficiency Analysis
While the setup "worked," the ratio of resources consumed by the infrastructure itself to resources left for actual workloads was abysmal. Here is the technical breakdown of why small VM-based clusters deliver far less usable capacity than the spec sheets suggest.
1) Invisible I/O Amplification
I relied on consumer-grade NVMe SSDs, assuming they would be fast enough. What I underestimated was the write amplification added by the virtualization layer and by Kubernetes' control loops, etcd's constant stream of small fsync'd writes in particular.
On bare metal, a write goes from the kernel's block layer straight to the NVMe driver. In a Proxmox VM, the same request first passes through the guest filesystem and virtio block driver, then QEMU on the host, then the host's storage layer (ZFS or LVM-thin in a typical Proxmox install), and only then reaches the physical NVMe. Every extra layer adds latency, and copy-on-write storage underneath adds extra writes on top.
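To see what that path costs, a small probe like the one below (my own illustration, not part of the original lab) can be run once on the Proxmox host and once inside a guest backed by the same NVMe. The gap in fsync latency is the price of the extra layers, and it hits exactly the kind of small synchronous write that etcd issues constantly.

```python
import os
import statistics
import time

# Measure per-write latency of small synchronous writes.
# Run once on the Proxmox host and once inside a guest VM on the same NVMe
# to compare the I/O paths. Path and iteration count are illustrative.
PATH = "latency_probe.bin"
BLOCK = b"\0" * 4096          # 4 KiB, roughly an etcd WAL-sized write
ITERATIONS = 1000

def measure() -> list[float]:
    fd = os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o644)
    samples = []
    try:
        for _ in range(ITERATIONS):
            start = time.perf_counter()
            os.write(fd, BLOCK)
            os.fsync(fd)       # force the write through every layer below us
            samples.append((time.perf_counter() - start) * 1e6)  # microseconds
    finally:
        os.close(fd)
        os.unlink(PATH)
    return samples

if __name__ == "__main__":
    s = measure()
    print(f"p50={statistics.median(s):.1f}us  "
          f"p99={statistics.quantiles(s, n=100)[98]:.1f}us")
```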



