I spent some time virtualizing everything at home. And by everything, I mean everything: workstations included.
I decided to turn my PC into a Proxmox host. Previously I was running a Dell R710 with ESXi and a separate server for pfSense. Both are sitting under my desk in a LACKRACK, which is just two IKEA side tables frankensteined into a server rack. Besides the noise, there wasn't really anything bad about my previous setup, so this whole process was more of a "Why Not?"
This whole post is more of a summary of what I did. I won't be going into detail about how I set everything up, as there are plenty of resources online that explain it all better than I ever could. Maybe someone will read this, think it's cool, and decide to do something similar.. who knows. Anyways, I'll get right into it.
The hardware I’m running:
- AMD Ryzen 1700 8-Core
- 32GB DDR4 2400MHz RAM
- Nvidia GTX 970
- AMD R9 280
- Bunch of Drives
- Intel Gigabit Single Port NIC
Nothing impressive, but everything is running pretty smoothly. The hope is to eventually upgrade to a 16-core Threadripper system with 64 PCIe lanes and double the RAM I currently have. I was considering building a dual socket EPYC system, but I priced everything out and my bank account had a heart attack. One day…
I decided on Proxmox as my hypervisor because it uses KVM under the hood. From my research, KVM reportedly has the best performance when it comes to GPU passthrough, which is important for my workstations to run smoothly.
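For anyone curious, the groundwork on the Proxmox side is mostly kernel configuration. A rough sketch of what that looks like on an AMD system like mine (exact steps vary by Proxmox version, so treat this as an outline rather than a recipe):

```
# /etc/default/grub -- enable the IOMMU (use intel_iommu=on for Intel CPUs)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci
```

After that it's `update-grub`, a reboot, and checking that your GPU ends up in its own IOMMU group.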
Prior to setting this up, I was already running Windows with GPU passthrough under QEMU (which also uses KVM under the hood) on my Fedora host. Since Windows was installed onto a SATA SSD using disk passthrough, migration was straightforward.
As a side note, NVIDIA's display drivers hate it when you pass through consumer-grade GPUs. There's a little bit of extra configuration involved to bypass their checks. I recommend checking out the Arch Wiki for this, as it's pretty comprehensive. It's targeted at QEMU, but since Proxmox uses KVM under the hood as well, most of it applies. You'll likely have to dump your GPU's BIOS and set some parameters to "hide" from Windows that it's running on a hypervisor.
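For what it's worth, on Proxmox the "hiding" mostly boils down to a couple of lines in the VM's config file. A sketch of the relevant options (the PCI address and ROM filename are placeholders for your own setup and BIOS dump):

```
# /etc/pve/qemu-server/<vmid>.conf -- relevant lines only
bios: ovmf
machine: q35
cpu: host,hidden=1
hostpci0: 01:00,pcie=1,x-vga=1,romfile=GTX970.rom
```

`hidden=1` is what keeps the driver from noticing the hypervisor; the ROM file lives in `/usr/share/kvm/`.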
My Fedora host was also installed onto an entire disk, which meant all I had to do was set up a new VM on Proxmox and pass through the disk. No reinstallation required.
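If you've never done whole-disk passthrough on Proxmox, it's a one-liner with `qm`. A hedged example, where the VM ID and the `/dev/disk/by-id/` path are placeholders for your own:

```
# Attach the entire SSD to VM 100 as a SCSI device (a stable by-id path is safest)
qm set 100 -scsi1 /dev/disk/by-id/ata-Samsung_SSD_850_EVO_XXXXXXX
```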
As for Proxmox itself, I picked up an M.2 drive and installed it onto that.
Everything else I ended up rebuilding from scratch, but that didn't take very long. Anyways, this is what the end result looks like:
- pfSense VM
- Fedora Workstation VM
- R9 280 passthrough
- Windows Workstation VM
- GTX 970 passthrough
- Kali VM
- Attacking VM
- Ubuntu VM running Docker containers
Docker is great, but since Proxmox directly supports LXC containers, I'll be making the switch to those soon. That way I can stick to using Ansible for deploying everything. I plan on turning as much of my setup as I can into playbooks so that if everything goes wrong, or I decide to rebuild, I'll save a bit of time.
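As a taste of what that playbook might look like, here's a hypothetical task using the `community.general.proxmox` module (the hostname, credentials, node name, and template here are all made up):

```
# lab.yml -- sketch of creating an LXC container via the Proxmox API
- hosts: localhost
  tasks:
    - name: Create an Ubuntu LXC container
      community.general.proxmox:
        api_host: pve.example.lan            # placeholder
        api_user: root@pam
        api_password: "{{ vault_pve_pass }}" # pulled from Ansible Vault
        node: pve
        hostname: docker-migration
        ostemplate: local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz
```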
The biggest thing I completely forgot to take into consideration was PCIe lanes. The Ryzen 1700 only has 24: 16 for PCIe slots, 4 for M.2, and 4 for the chipset. When I popped in the Intel PRO/1000 VT quad-port NIC from my pfSense box, everything broke. My VMs refused to start because I was using too many lanes. So, in hopes of not having to give up until I could afford to build a new system, I ran out to Micro Center to pick up an Intel single-port NIC. After a small prayer and a hesitant push of the power button, everything was finally working fine.
Bit of a sidenote, but if you use a Ubiquiti UniFi AP, be careful not to push the reset button too hard. It's extremely fragile and I managed to break mine. I had to take a screwdriver to the AP, pry it apart, and short the reset switch pins. Here's a picture of it during surgery.
On the networking side, I went a little bit overkill.. but overkill is fun, so I set up 4 VLANs.
pfSense handles the VLANs and firewall rules. Proxmox, a managed switch, and my UniFi AP handle the VLAN tagging.
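On the Proxmox side, tagging is easiest with a VLAN-aware bridge. A rough sketch of what that configuration looks like (the NIC name is a placeholder for your physical interface):

```
# /etc/network/interfaces -- VLAN-aware bridge on the Proxmox host
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp1s0       # placeholder for your physical NIC
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Each VM then just gets a VLAN tag on its virtual NIC in the GUI (or `tag=20` on the `net0` line of its config).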
I made sure to set up my managed switch with ports on the management, main, and lab networks just in case I royally screwed something up. I also use it to put my TV and Xbox on the guest network.
I then set up OpenVPN on pfSense so I can access my lab network from anywhere.
Like I said, a bit overkill, but I only had to set it up once. That is, until I get bored and decide to tear it all down and start over.
If I get some time over the holiday break, I'll be setting up a DMZ VLAN and an nginx reverse proxy on AWS. That way I'll be able to expose my MkDocs and maybe my Nextcloud instance over the internet. Once that's done, I'll probably write some Ansible playbooks to make my life easier in the future.
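The reverse proxy piece should be simple enough; something along the lines of this nginx server block on the AWS instance, pointed back over the VPN (the domain, certificate paths, and internal address are all placeholders):

```
# Hypothetical nginx reverse proxy for MkDocs
server {
    listen 443 ssl;
    server_name docs.example.com;

    ssl_certificate     /etc/letsencrypt/live/docs.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/docs.example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.30.10:8000;   # MkDocs on the lab VLAN
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```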
Like I said, no real reason for this post; I just wanted to write about what I did. If anyone reads this and decides they want to do something similar, feel free to contact me if you need any help.