
A server at home: childhood fantasy or genuinely useful?

Ever since I was a child, I dreamed of having the sort of high-speed, low-drag, enterprise-grade equipment in my home network. For me it was like getting the best toy in the store, or having the meanest roller shoes in class (a reference that probably dates me). It was as if getting these devices would open my world (and my Internet connection) to one known only by IT professionals and system administrators; as if they held some hidden knowledge (or class cred, maybe) that would take my computing experience to the next level.

Anyone who has ever worked in IT knows how those dreams play out: while the sense of wonder is never entirely erased, the reality of maintaining such systems is at best dull. But, you know, I'm weird like that. If I weren't, you wouldn't be here.

Almost a decade ago, I embarked on the journey of building and maintaining a home server. It has solved many of my problems, challenged me to solve new ones, and taught me immensely about the nuances of running networks, maintaining machines, building solutions, and creating new stuff with them. This is my love letter to the home server.

This article is somewhat different from most of my other content; it's more of a story than a tutorial. It's not really meant to be a guide to building or maintaining servers at home; it's more about understanding the rationale behind having one.

Baby steps

As the designated "IT guy" in my friend group, I often found myself helping friends and family with their computing-related needs: installing software, customizing their computers, getting them going with basics like Office. We also gamed some, playing whatever we could pirate and get to run on crappy laptops. Our friend group was big into Minecraft at the time, as most kids were, and we loved showing off our worlds, approaches, and exploits to each other. One day, the inevitable question came: what if we made a Minecraft server for all of us to play on?

The writing was on the wall, so I set out to make one. At the time I was rocking the family computer, a respectable 21.5″ 2013 iMac. It was beefy enough to run the server and a client at the same time, and paired with LogMeIn's Hamachi (which I hated, but didn't know any better), a highlight of my childhood was born. It barely worked, many ticks were skipped and many crashes were had, but it was enough for me and my group of friends to bond over.

Around the same time my parents bought a NAS, an Iomega StorCenter sporting a pair of striped 500GB hard drives for a whopping 1TB of total storage. Today that sounds quaint, but at the time it was a huge amount of space. For many years we kept the family photos, our music library, and limited backups on it. It opened my eyes to the possibility of networked storage, and after an experiment with a USB printer and the NAS's onboard ports, I even toyed with providing basic services. An idea was forming in my head, but I was just starting high school and there was pretty much no budget to go around, so I kept working around the limitations.

That worked, at least up to a point. Within a couple of months, both the RAID array in the Iomega NAS and my iMac's hard drive failed, with no backups to recover from. It was my first experience with real data loss, and many memories were wiped forever, including that very first Minecraft server world. Most of our family stuff was backed up, but my files weren't. It sucked. It was time for something new.

Building reliable servers from scrap

I was still in high school; there was some money to go around, but nowhere near enough for me to get a second computer to keep online forever. So I went looking for scraps, picking up whatever people were willing to give me and building whatever I could with it. My first experiments were carried out on a geriatric early-2000s AMD Athlon from a dump behind my school, which wasn't really up to the task of doing anything, but it taught me a lot about building computers and what that entailed. My first real breakthrough came around 2015, when I managed to grab a five-year-old, underpowered Core i3 tower with 4GB of RAM sitting in a dumpster outside an office block near my house. After a clean install of Windows and some minor cleaning I had, at last, a second computer I could use as a server.

I didn't know much about servers at the time, which meant my first foray was basically an extension of what I'd seen before: using SMB to share a 1TB drive I'd added by removing the optical drive and rescuing a hard drive from a dead first-gen Apple TV. I added a VPN (ZeroTier One, which is like Hamachi but good), printer sharing, and VNC access, and pretty soon I was running a decent NAS.

I added a second 1TB drive a few months later (which involved modding the PSU for more SATA power connectors) and some extra software: qBittorrent's web interface for downloading and managing torrents from restricted networks (like my school's), automatic backups using FreeFileSync, and a few extra tidbits. I even managed to configure SMB so I could play PS2 games directly from the server, using the console's NIC and some homebrew software.

This was my setup for around four years, and it did its job beautifully. Over time I added a Plex server to keep tabs on my media, and I even played around with Minecraft and Unturned servers to play with my friends. Around 2019, though, I was starting to hit bottlenecks. Using a hard drive as boot media for the server was dog slow, and I had run out of SATA ports for expanding my drive roster. I had been toying with the idea of RAID arrays for a while, especially after losing data to faulty drives. Mirroring was too expensive for me, so my method of choice was level 5: single parity distributed across all drives, tolerating a single drive failure. I just needed a machine capable of doing it. For a while I wondered about buying HBAs and just tacking the drives onto the old hardware and calling it a day. I ended up doing something completely different.

At last, something that looks like a server

In the end I decided that a better idea was to upgrade the motherboard, processor, power supply, and a few other things. I added a USB 3.0 card for external drive access, upgraded the processor from a Core i3 240 to a Core i5 650, swapped in a motherboard similar to the existing one but with six SATA ports, and picked up four 2TB video surveillance drives for dirt cheap, along with a beefier power supply to tie it all together. Around this time I also got a Gigabit Ethernet switch, which vastly increased throughput for backups. It was mostly used, bottom-of-the-barrel stuff, but it allowed me to create a RAID-5 array with 6TB of total storage and gave me slightly more room to expand my activities. Lastly, I replaced the boot drive with a cheap SATA SSD.

With it came an actual Plex library, a deep-storage repository for old video project files, daily backups, a software library for repairs, MySQL for remote database work, and even a Home Assistant VM for home automation. I kept running servers and experiments on it. This second iteration lasted me around three more years, which was more than I expected from what was essentially decade-old hardware running in a dusty cupboard next to my desk.

Soon enough, however, new bottlenecks started appearing. I was getting more and more into video work, and I needed a server that could transcode video in at least real time. Most consumer CPUs still can't manage that with software encoding alone, so I started looking into GPU acceleration. I was also starting to suffer with Windows: it works fine for beginners, and I even toyed with Windows Server for a while, but it's just way behind Linux distros for server work. It took up lots of resources doing essentially nothing, the server software I needed was clunky, and I missed the truly network-oriented design of a UNIX-like OS.

Enterprise-grade hardware

Once again, I looked to the used market. Businesses replace hardware all the time, and it's not difficult to find amazing deals on used server gear. You're not getting the absolute latest and greatest, but most of these machines are not really that old, and they're more than capable for most home uses.

From the second-generation server I could still salvage the USB 3.0 card, the power supply, the boot drive, and the four RAID drives. All of those had been bought new and were in excellent condition, so there was no need to replace them. I wanted something that would last me at least the next five years, could accommodate all my existing hardware, and had plenty of room for expansion: PCIe slots for GPUs and other devices, proper mounting hardware for everything, and a case to keep it all neat.

I went for a tower server instead of a rackmount, mainly because I don't have a place for a rack in my home and the long-and-thin package of most racked servers made no sense in my case. After a bit of searching I came upon the HP ProLiant ML310e Gen8 v2: an ATX-like server with a normal power supply, four drive bays with caddies, an integrated LOM, and even an optical drive (which, given that my server now hosted the last optical drive in the house, was a must). It was perfect. I also managed to score an NVIDIA GTX 1060 6GB for cheap; it's more than a couple of generations behind at this point, but most importantly for me it has NVENC support, which meant transcoding HD video at a few hundred FPS with ease.
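
To give an idea of what NVENC brings to the table, here's a rough sketch of the kind of hardware-accelerated transcode I had in mind. This isn't my exact Jellyfin configuration; it assumes an ffmpeg build with NVENC/CUDA support, and the filenames are placeholders.

# Illustrative only: let the GPU handle decoding and encoding instead of the CPU.
# h264_nvenc is ffmpeg's NVENC-backed H.264 encoder; input/output names are made up.
ffmpeg -hwaccel cuda -i input.mkv -c:v h264_nvenc -b:v 8M -c:a copy output.mp4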

Building the current server

My third-generation server was built around the aforementioned HP tower, but many modifications had to be made to achieve the desired functionality. After receiving it, I swapped the PSU and modified one of its accessory headers to mate with the server's proprietary drive backplane connector, so I could power the drives from the new power supply. Apart from raising the maximum load from 350W to 500W, the swap also gave me PCIe power connectors to drive my GPU, which the original PSU lacked.

Then I installed the GPU and ran into my first problem: servers like these use only a couple of fans and a big plastic baffle to make sure the air reaches all the components on the board. This is a great idea in the server world, since it reduces power consumption, decreases noise, and allows for better cooling, but it also interferes with the GPU: mine isn't a server model, so it stands taller than a 2U chassis would allow and the baffle couldn't close. Not to worry though; a bit of Dremel work later I had a nice GPU-shaped hole, which I kept as small as possible so as not to disturb the airflow too much. The GPU's shape worked in my favor too, as it redirects air perfectly onto the fanless CPU cooler.

Other than the four drive bays (in which I installed the 2TB drives) there isn't much room for a boot drive, so I screwed the boot SSD into the empty second optical drive bay. A lot of cable management and some testing later, I was ready to go.

For the OS, I went with Ubuntu Server 20.04 LTS. I'm very familiar with this family of Linux distros, so it made sense to use it here; my servers at work also run it, so I had some server experience with it as well. Once the OS was in, I installed the drivers for the LOM and the GPU and rebuilt the RAID-5 array using mdadm. The previous generation used Microsoft Storage Spaces for the array, so it had to be rebuilt cleanly under Linux. After dumping the data onto some spare drives and building the array on Linux, I was in business.
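
For the curious, rebuilding the array boiled down to something like the sketch below. Treat it as an illustration rather than a recipe: the device names are assumptions (your disks will enumerate differently), and the boot SSD obviously stays out of the array.

# Create a 4-disk RAID-5 array out of the data drives (device names are placeholders)
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Put a filesystem on it, then make sure the array is assembled on boot
sudo mkfs.ext4 /dev/md0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u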

For the software I installed Jellyfin (I was getting sick of the pay-to-win model of Plex and wanted something with IPTV support), Samba for the shared folders, VirtualBox and the latest Home Assistant VM (don't @ me about Docker; that version is crap and the supervised install is a pain in the ass, so I'm done with Docker for now), qBittorrent, MySQL, ZeroTier One, OctoPrint, and of course, a Minecraft server. I also added a few quality-of-life tools like btop, and thus my server was complete, at least for now.
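
As a taste of how little glue most of this needs, here's roughly what the Samba and ZeroTier side looks like. The share name, path, username, and network ID below are all placeholders for illustration, not my real setup.

# Append a share definition to Samba's config (names and paths are made up)
sudo tee -a /etc/samba/smb.conf <<'EOF'
[storage]
path = /mnt/array/storage
read only = no
valid users = myuser
EOF
sudo smbpasswd -a myuser        # give the user a Samba password
sudo systemctl restart smbd     # pick up the new share
# Join the ZeroTier network (the 16-digit ID here is a dummy)
sudo zerotier-cli join 0123456789abcdef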

The realities of living with a server at home

Despite my childhood aspirations, enterprise-grade hardware has a major flaw compared to home equipment: noise. I can't blame the manufacturers too much; after all, these devices are made for production environments where no one really minds, or vast datacenters where noise is just part of the deal. I, on the other hand, like sleeping. The first time I turned the machine on I was greeted with the cacophonous roar of two high-RPM server fans. It dawned on me pretty quickly that this noise simply would not fly in my house, so I set about fixing it.

Unlike on desktop boards, the OS has no control over the fans; to the Linux sensor stack, it's as if they didn't even exist. I did get some temperature readings, and there might be a way to address the fans via IPMI, but it just didn't work right for me. Handling the fans is the job of the iLO, HP's name for a LOM: a tiny computer inside the server that allows for low-level remote management, including remote consoles and power cycling. The homelabbing community figured out years ago how to tune the fans down, and a bit of unofficial firmware later, mine calmed down to a reasonable 20% and I could sleep again. I took the opportunity to put a piece of tape over the POST buzzer, which had no right to be as loud as it was.
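
For context, this is roughly how I poked at the sensors from the OS side. The commands are illustrative and assume the iLO exposes its sensor data over IPMI to the host, which in my case it only did partially.

# What the OS can see on its own (no fan entries showed up for me)
sudo apt install lm-sensors ipmitool
sudo sensors-detect --auto
sensors
# Ask the iLO for its sensor readings over IPMI instead
sudo modprobe ipmi_devintf ipmi_si
sudo ipmitool sdr list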

Closing thoughts

This has been a wild ride of old hardware, nasty hacks, ugly solutions, and wasted time, but in the end the outcome is so much more than what I ever envisioned: a machine that automates the boring bits of my daily routine, safely stores and backs up all my stuff, and gives me access to all my services from wherever I happen to be. If you're a nerd like me and willing to spend some time faffing around in the Linux CLI, I highly recommend you build yourself a home server, no matter the budget, the device, or the services.