Tip: If you have limited RAM (16GB), skip the full SIEM for now and just run the UTM and Kali, or use a lighter SIEM alternative like a simple ELK container, though Security Onion is the gold standard for learning.
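If you go the lightweight route, a single-node Elasticsearch plus Kibana in Docker is enough for basic log practice. This is a minimal sketch, assuming Docker is installed; the image tags, ports, heap size, and disabled security are illustrative choices for an isolated lab, not a hardened setup.

```bash
# Minimal single-node ELK for light practice only -- security is disabled, keep it lab-isolated.
docker network create elk

docker run -d --name es --net elk -p 9200:9200 \
  -e discovery.type=single-node \
  -e xpack.security.enabled=false \
  -e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
  docker.elastic.co/elasticsearch/elasticsearch:8.13.4   # cap heap on RAM-limited hosts

docker run -d --name kibana --net elk -p 5601:5601 \
  -e ELASTICSEARCH_HOSTS=http://es:9200 \
  docker.elastic.co/kibana/kibana:8.13.4                 # Kibana UI on http://localhost:5601
```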
T-Pot is built around honeypots and attack telemetry. It is excellent for observing real-world behavior but should be isolated on a dedicated segment to reduce risk.
Caption: T-Pot dashboard.
Practical SIEM lab workflow (Security Onion or T-Pot)
ISO-based installs make these platforms feel like full operating systems, but the biggest advantage is the pre-integrated pipeline: Suricata and Zeek feed structured telemetry into Elasticsearch, and Kibana provides immediate visibility. This lets you spend lab time on detection logic and analysis instead of plumbing.
```mermaid
flowchart LR
    A[Test VMs] -->|Traffic| VSwitch
    VSwitch -->|SPAN/Mirror| B[Sensor VM - Security Onion / T-Pot]
    B --> C[Ingest + Enrich]
    C --> D[Elasticsearch]
    D --> E[Kibana / Dashboards]
```
Deployment guidance:
- Install Security Onion or T-Pot on a dedicated VM with sufficient disk.
- Configure a SPAN/port mirror on your virtual switch to copy traffic into the sensor VM.
- Verify the sensor NIC is in promiscuous mode and that packet counters increase (see the check below).
- Generate traffic from attacker/target VMs and confirm the logs land in Kibana.
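A quick way to confirm the mirror is actually delivering packets is sketched below. It assumes the sensor VM's capture interface is eth1; substitute whatever name Security Onion or T-Pot assigned during setup.

```bash
# Assumption: eth1 is the dedicated capture NIC on the sensor VM (not the management interface).
ip link set eth1 promisc on    # mirrored frames for other hosts are dropped without promiscuous mode
ip -s link show eth1           # RX packet/byte counters should climb while lab traffic is flowing
tcpdump -ni eth1 -c 20         # you should see traffic between other VMs, not just the sensor's own
```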
Validation checks:
- Suricata alerts and Zeek logs appear in the SIEM UI within minutes.
- Dashboards show src/dst IPs, ports, and protocol breakdowns.
- A simple scan (for example, a TCP SYN scan, sketched below) triggers an alert or notable.
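For that last check, a plain nmap SYN scan is usually enough to light up the default Emerging Threats scan rules, though whether it fires depends on which rules are enabled. The target address below is a placeholder for one of your own lab VMs.

```bash
# SYN scan of the low ports on a lab target (placeholder IP); run only against machines you own.
sudo nmap -sS -T4 -p 1-1024 192.168.56.20
```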
Scenario mini-recipes (quick)
Brute-force simulation against a test SSH service (generates authentication noise for detection):

```bash
hydra -l test -P /usr/share/wordlists/rockyou.txt ssh://target
```

DNS tunneling indicators (high-entropy subdomains):

```bash
for i in {1..50}; do dig $(openssl rand -hex 8).lab.local @target; done
```
KQL quick queries (starter)
event.dataset : "zeek.dns" and dns.question.name : "*.lab.local"
event.dataset : "suricata.eve" and suricata.alert.severity <= 2
Common pitfalls:
- No data in dashboards usually means the SPAN configuration or vSwitch binding is wrong.
- High CPU or disk usage often indicates excessive capture volume or overly long retention.
- Time drift across VMs causes confusing timelines; keep NTP consistent (quick check below).
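A quick drift check, assuming systemd-based VMs; chrony is an assumption, so substitute your NTP client if it differs.

```bash
# Run on each VM and compare: the clocks should agree to well under a second.
timedatectl         # look for "System clock synchronized: yes"
chronyc tracking    # the "System time" offset should stay in the low-millisecond range
```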
LLM-assisted scenario design:
- LLM agents (Codex, Claude, Gemini) can draft structured attacker playbooks and the detections they should trigger.
- A local model served through Ollama keeps traffic and scenario data on-box when you need privacy.
The fastest way to build real-world instincts in a homelab is to run a proven SIEM stack end-to-end. When I prepared for security work in the military (in the pre-GPT-1 era), Security Onion and T-Pot were my go-to platforms because they boot from an ISO like a full OS and deliver a complete SIEM stack out of the box. That all-in-one setup is exactly why these tools show up in small teams and larger competition labs.
Important context: this lab prep was done before military service, on personal hardware, and used different gear than any production or vendor environment. It helped me understand operational workflows, but it does not reflect real-world systems, and nothing here originates from vendor internals.
The shared advantage is a ready-to-use practice environment. Elasticsearch, Kibana, Suricata, Zeek, and related components arrive pre-wired, so you can skip the glue work and focus on the detection and analysis loop. In a homelab, reproducing and observing traffic is the accelerant: it makes learning far more efficient than static docs or isolated single-tool practice.
To get full value, you need mirrored traffic. Build SPAN/port-mirror plumbing in your hypervisor so packets from multiple VMs are copied into the sensor VM. Proxmox or ESXi makes this straightforward, and the difference is immediate: scenario testing against realistic traffic yields much stronger outcomes than synthetic log injection alone.
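On Proxmox with an Open vSwitch bridge, the mirror is a few ovs-vsctl calls. The sketch below follows the standard OVS SPAN pattern; the bridge name vmbr1 and the sensor VM's tap interface tap100i0 are placeholders for your own topology. ESXi handles the same idea through port group security settings (promiscuous mode) or distributed switch port mirroring.

```bash
# Mirror everything on OVS bridge vmbr1 into the sensor VM's tap interface (names are placeholders).
ovs-vsctl \
  -- --id=@p get port tap100i0 \
  -- --id=@m create mirror name=lab-span select-all=true output-port=@p \
  -- set bridge vmbr1 mirrors=@m

# Tear the mirror down when you are done capturing:
ovs-vsctl clear bridge vmbr1 mirrors
```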
Back then, scenario design was manual. Today, LLM agents such as Codex, Claude, and Gemini can generate structured test plans and attacker playbooks. If you want local, higher-intensity testing without shipping data off-box, a local model served through Ollama can drive repeatable, aggressive scenario runs.
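A minimal sketch of what that looks like with the Ollama CLI; the model name and prompt are placeholders, and the output is a draft to review, not a script to run blindly.

```bash
# Assumes Ollama is installed and running locally; "llama3" is a placeholder for whatever model you pulled.
ollama pull llama3
ollama run llama3 "Draft a lab attacker playbook for SSH brute force against 192.168.56.20, and list the Suricata/Zeek telemetry and Kibana queries a defender should expect to see."
```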
The net: Security Onion and T-Pot provide the fastest path to a usable SIEM practice lab, and that speed compounds when you layer in traffic mirroring and modern scenario automation.
A focused stack keeps your lab stable and aligned with learning goals.
Next: design the network layout and segmentation for safe practice.