My Homelab for 2024 Part 1

Series - Homelab 2024

I scratched my head thinking about how to describe my homelab. I started with WikiJS, but seeing all the information I put in there, like machine serial numbers, IP addresses, VLANs, etc… I started to feel less comfortable showing it to the internet. I thought about keeping two versions of the documentation in my wiki, one private and one public, but that seemed complicated to maintain… So here we are: it will be 2 or 3 articles on this blog.

Comments
Commenting is disabled because I want discussion to happen on Reddit; that is what it is made for, and in the first place I should have published this on the r/homelab subreddit… You will probably land here from a Reddit post anyway…
Questions
Feel free to ask me if you are interested in a particular article !

Let’s start with the physical part

HP EliteDesk 800 G5
Make & model: HP EliteDesk 800 G5
CPU: 6 x Intel(R) Core(TM) i5-9500 (vc1 & vc2) / i5-8500 (vc3) @ 3.00GHz (1 socket)
RAM: 32 GB
Storage: 2x Samsung 980 NVMe 500 GB in RAID1, 1x shit SSD 500 GB alone
Network: 1x Gigabit Intel I219-LM (embedded), 2x Gigabit Intel I350, 2x 10 Gigabit Intel 82599ES
Version: pve/8.1.3/b46aac3b42da5d15

I run 3 of these HP SFF desktop machines as Proxmox nodes, bought used. The original lab was built with little MFF Dell OptiPlex 3050s, with a Core i5 and 32 GB of RAM each; performance was good with very low power consumption, but they were running hot… the NVMe drives especially, so I abandoned them and went with bigger machines to get better temperatures and 10G networking !

Proxmox Web UI

So there is a 3-node cluster. Storage is local, LVM-thin on a RAID1 of the two NVMe drives, no HA: I manage availability at the service level.

I use them to run QEMU virtual machines and LXC containers, but I’m migrating anything that is left on LXC to QEMU. I use Debian everywhere I can !

I use two separate bridges for networking. The first one sits on the two gigabit cards bonded in LACP, with every VLAN tagged on it: all VMs / LXCs use it, backups go through it, and so does the web UI. The two 10G ports are on a separate bridge, no LACP; only the Kubernetes workers use this one, and it forms a loop with the main switch. Spanning tree keeps the loop from becoming a mess, so I can lose one cable or one host and still reach every node. I use Dell Direct Attach Copper cables for this loop; some Reddit research showed me that DACs are cheaper and run cooler than optics !
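To give an idea of what that looks like on a node, here is a simplified sketch of the /etc/network/interfaces layout; the interface names and the address are placeholders, not my real config:

```
# /etc/network/interfaces (excerpt) on one PVE node -- interface names
# and addresses below are placeholders.
auto bond0
iface bond0 inet manual
    bond-slaves enp2s0f0 enp2s0f1
    bond-mode 802.3ad                  # LACP on the two gigabit ports
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.11/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094                 # every VLAN tagged on the bond

auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp3s0f0 enp3s0f1     # the two 10G ports, no LACP
    bridge-stp on                      # spanning tree breaks the loop
    bridge-fd 0
```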

HP Microserver Gen8
Make & model: HP Microserver Gen8
CPU: 2 x Intel(R) Celeron(R) G1610T @ 2.30GHz (1 socket)
RAM: 16 GB
Storage: 4x IronWolf 8 TB in RAID10, 1x Crucial SSD 120 GB alone
Network: 2x Gigabit Broadcom BCM5720 (embedded), 1x Gigabit Intel I210
Version: pbs/2.4.1-1

The oldest of the stack, it never failed me… It runs Proxmox Backup Server, and all vzdump backups from the Proxmox hosts are stored here !

PBS WebUI

Because the HP BIOS won’t let you boot from the CD-ROM SATA port, where the SSD is connected, I installed GRUB on a small USB stick that sits inside the server. I boot from this USB; everything else is on the SSD (the OS), with the ZFS RAID10 storage (the backups) on the spinning drives.
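Roughly, the trick looks like this; device names are assumptions, /dev/sdb being the USB stick:

```
# Rough sketch of the workaround, device names are assumptions
# (/dev/sdb = the small USB stick, the PBS install lives on the SSD):
mkdir -p /mnt/usb
mount /dev/sdb1 /mnt/usb
# Put GRUB's core image and modules on the stick; at boot GRUB can
# still read the kernel, initrd and root FS from the SSD.
grub-install --target=i386-pc --boot-directory=/mnt/usb/boot /dev/sdb
grub-mkconfig -o /mnt/usb/boot/grub/grub.cfg
```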

Synology DS923+
Make & model: Synology DS923+
CPU: 2 x AMD Ryzen R1600 @ 2.6GHz
RAM: 4 GB
Storage: 4x WD Red (non-Pro) 4 TB in RAID5, 2x Toshiba NVMe 512 GB for cache
Network: bond of the 2 Gigabit NICs
Version: DSM 7.2

This is my main storage for personal documents, media, backups… The only things that run on it are DSM, obviously, and a MinIO Docker container, because I need S3-compatible storage. And of course NFS / SMB for the shares; my Kubernetes cluster uses NFS to mount volumes.
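For reference, running MinIO on the Synology via Docker boils down to something like this; ports, path and credentials below are placeholders, not my real setup:

```
docker run -d --name minio \
  -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD=change-me \
  -v /volume1/docker/minio:/data \
  quay.io/minio/minio server /data --console-address ":9001"
```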

HomeNetwork

Livebox 5
Comments
I am forced to use a Livebox 5 from Sagemcom and do not want to talk about it.

Getting rid of it is on the roadmap; I will try a Ubiquiti GPON… In the meantime it provides internet to my main router and all ports are forwarded to it… All filtering is disabled: no DHCP, DNS or WiFi !

AlixBoard APU2C4
Make & model: AlixBoard APU2D2
CPU: 4 x AMD GX-412TC @ 1GHz
RAM: 2 GB
Storage: 16 GB
Network: 3x Gigabit Intel I211
OS: Debian 11 with nftables

More information about this amazing board here. This is my main firewall; the only purpose of this box is filtering, routing, NAT and VPN. By default everything is rejected both ways, IN and OUT, and I try to be as precise as I can and open only what is needed. There are two VPNs on it: a WireGuard one for me, when I’m not at home and need access to something that is not exposed, and a LAN-to-LAN OpenVPN with the company I work for ! This way I can work from home and send my backups off site, and the company can send its backups to my homelab, win/win for everybody ! A future project is to have two of these sharing a virtual IP to do HA.
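To give an idea, a heavily trimmed-down sketch of that default-deny nftables ruleset could look like this; interface names, addresses and ports are placeholders, not my real rules:

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        iifname "lan0" tcp dport 22 accept      # SSH from the LAN only
        udp dport 51820 accept                  # WireGuard from outside
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "lan0" oifname "wan0" accept    # LAN may go out
    }
}
```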

Mikrotik CRS326-24G-2S

BootMode
It can be booted as a switch or as a router, but the hardware is clearly designed for switching, so I boot SwOS from MikroTik.

Unifi Flex Mini

This little guy is amazing: for 26€ you get a 5-port gigabit switch that does VLANs, what else ?

But not everything
Just be aware that the Flex Mini won’t do LACP, SNMP or SSH.
Unifi SW8

The last switch, which I use close to the TV; it connects the WiFi AP and some media equipment to the rest of the house. This one is fully featured, but more expensive…

Unifi UAP Pro

Only one access point, with 3 SSIDs: one for the LAN, one for GUEST clients and the last one for IOT devices. I live in a flat, so a single AP is more than enough !

The network is segmented with VLANs:

  • LAN : Everything is in here, this is the main one.
  • IOT : The shit VLAN: smart TVs, smart light bulbs, things that I do not trust !
  • GUEST : This one is for the GUEST WiFi SSID, it only gives internet access.
  • STORAGE : PVE and PBS are connected to this one for the backups.
  • ADMIN : Switches, access point, PBS & PVE web UI / SSH.
  • CLUSTER : This one is for Corosync, the Proxmox cluster communication protocol.

Base services

...
These services run as virtual machines or LXC containers; I do not want them on Kubernetes.

Kea Logo
I am in the process of migrating to Kea DHCP. My first solution was ISC DHCP in a cluster of two LXCs; it was working great, but I am moving to Kea because the future is now… The Kea cluster is working, but the final goal is to have the Kea agent connected to the Stork management UI. I already have the agent working but I still miss the Stork part… Part of the roadmap !
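For the curious, a minimal kea-dhcp4.conf looks roughly like this; interface, subnet, pool and DNS addresses are placeholders, and the real setup adds the HA hook library and the Stork agent on top:

```
{
  "Dhcp4": {
    "interfaces-config": { "interfaces": [ "eth0" ] },
    "lease-database": { "type": "memfile", "persist": true },
    "valid-lifetime": 86400,
    "subnet4": [
      {
        "id": 1,
        "subnet": "192.0.2.0/24",
        "pools": [ { "pool": "192.0.2.100 - 192.0.2.200" } ],
        "option-data": [
          { "name": "routers", "data": "192.0.2.1" },
          { "name": "domain-name-servers", "data": "192.0.2.53, 192.0.2.54" }
        ]
      }
    ]
  }
}
```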

This one could be an article in itself. I use two Pi-holes for DNS filtering / blacklisting, two PowerDNS Recursors, and two PowerDNS Authoritative servers connected to a master/slave MariaDB database !

DNS Infrastructure Schema

My DHCP cluster pushes the two Pi-holes to my clients; servers use the recursors directly. I have 2,000,000+ domains blacklisted, and over time I’ve got to the point where a good amount of crap is blocked and I’m no longer bothered by false positives… The PowerDNS solution is very robust: the cluster keeps my DNS service running even if one of the two VMs is down for maintenance. I can route my queries wherever I want, which is really important for my work, and I can gather some metrics for my monitoring ! I also installed the PowerDNS Admin web UI, a nice way to manage my records; I quite like it to be honest.
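To illustrate how the pieces talk to each other, here is a rough sketch of the two relevant configuration bits; host names, addresses and the zone name are placeholders:

```
# On the authoritative servers (pdns.conf) -- the records live in the
# replicated MariaDB database:
launch=gmysql
gmysql-host=mariadb-master.example.lan
gmysql-dbname=pdns
gmysql-user=pdns
gmysql-password=change-me

# On the recursors (recursor.conf) -- internal zones are forwarded to
# the authoritative servers, everything else is resolved normally:
forward-zones=home.example.lan=192.0.2.20;192.0.2.21
```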

Unifi Controller

Used to manage the Unifi devices, push updates, check connected clients…

Gitea Webui

This is a critical one, I use it to store :

  • This website source
  • GitOps for all the kubernetes Yaml / Kustomize manifests
  • Ansible Playbooks
  • Sometimes a project I need to fork to make a PR

It is a Gitea instance for now and it will be switched to Forgejo !

Harbor Webui

I use Harbor to store container images. CVE scanning is enabled, and we can see that the image for this blog has warnings… I’ll take care of that !

I have a 3-node Docker Swarm cluster running, with Portainer installed.

Portainer Webui

This is my testing environment: at work we still use Swarm a lot, and if I want to quickly test something it is easier here. There is one manager (no HA needed) and two workers.
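Assembling such a one-manager / two-worker Swarm is only a couple of commands; the address and token below are placeholders:

```
# On the manager:
docker swarm init --advertise-addr 192.0.2.30

# 'swarm init' prints a join command with a token; run it on each worker:
docker swarm join --token SWMTKN-1-<token> 192.0.2.30:2377

# Back on the manager, verify that the three nodes are in:
docker node ls
```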

Minecraft Logo

I still have an old Minecraft server that I used to play on; it is still here because I do not want to destroy its map. I no longer remember exactly how this thing works, but I remember it had a CLI management interface to run commands directly on the server…

This one is a little special. I needed a PKI to manage my certs / CA, and at work we use our pfSense for that… I found it dead simple and great, so I installed a pfSense VM, stripped out every service I could, and use it as a PKI with a web UI… For example, LDAP server certs, OpenVPN certs, … It uses less than 256 MB of RAM…

I have a Honeygain worker. It basically uses your internet connection to make money; they give you 0.0000000001% of what they earn and are possibly doing shady things behind my back… So this is the perfect candidate for the IOT VLAN… I will throw this shit away, but they promised me 20 bucks so I have to reach that goal first…

Do not do this
For real, I tried it, but this is really not something I would recommend. Now, if you want it anyway, lock it down in its own network and give it internet access and nothing else !

And that is the end of the base / physical part; let’s talk about the Kubernetes cluster now !