
A house run by computers: making all of your IoT devices play nice with each other

The current state of the Internet-of-Things scene can sometimes be mind-boggling: incompatible ecosystems, an endless reliance on cloud services (which can be and have been shut down), and an uncomfortable feeling that you’re not quite in control of what your devices are doing. Then I got Home Assistant, and everything changed. This is a story about how a smart home should and shouldn’t be, along with a few tricks that will hopefully save you some blood, sweat, and tears.

The E in IoT stands for easy

If you’re into home automation, you are probably aware of the absolute mess the market has become: everything has its own ecosystem, there’s poor integration between services, and everything relies on the cloud all of the time: turning on a switch from 5 meters away usually means a round trip through a server hundreds or thousands of kilometers away.

If you want to make an Ecobee thermostat work with a Sonoff temperature sensor, you’re out of luck. Local network control? Nope. Custom hardware? Not really. While most of the internet has coalesced around open and flexible standards like IP, DNS, and TCP, the smart home scene has been plagued by the xkcd standards problem: incompatible ecosystems, walled gardens, proprietary protocols, and an overall sense that companies are prioritizing their own profit margins over trying to create a sustainable market.

Speaking of sustainable, why the hell does everything have to talk to the cloud? There is very little computing power involved in turning on a relay, so why do I have to tell Tuya or Google Nest or eWeLink or whoever that I’m turning on my heating, or my room lamps? What will happen when, not if, these companies decide to retire these services? Are we all doomed to be left with useless pumpkins the second this market stops making money?

If you think I’m exaggerating, this has already happened before. Samsung went all in with an entire smart home ecosystem called SmartThings, which claimed, like all of them do, to be your one-stop-shop for all of your home automation needs. This was scaled back after they realized how much of a pain in the ass all of this is to maintain, breaking compatibility with many devices. I still have a v1 SmartThings Zigbee/Z-Wave hub that I cannot use because it’s not supported anymore: the hardware is perfectly fine, but that’s what Samsung decided, so we’re all fucked.

Even well-intentioned attempts at standardization like Matter, Thread, and Zigbee have each become their own little niche, because none of them is an actual all-in-one solution. They are just puzzle pieces (transport protocols, physical networking standards, computing services, whatever) that all have to talk to each other to work, and making that happen is usually left to the end user.

Home Assistant

In comes Home Assistant: an open-source project that aims to put this madness to an end. It’s a whole operating system built on Linux, designed to be a truly complete smart home hub that can communicate with everything. By giving all of your devices a standard interface to talk to HA, you essentially get a single control panel for all of your smart-home stuff.

We could spend days talking about all of the ins-and-outs of Home Assistant, but this is the gist of it:

  • Home Assistant runs on a device with access to your local network: it can be a Raspberry Pi, a virtual machine, or even the custom hardware solutions Home Assistant has to offer.
  • Each ecosystem connects to HA via an integration: these blocks connect to APIs or local devices in order to talk to a specific service. There are lots of officially supported ones, as well as some very good community implementations.
  • Integrations usually generate devices, which in turn generate entities. An entity is the base building block of Home Assistant: every sensor, switch, light, or whatever you want is represented as an entity. Entity data is stored in a local database and can be shown on custom dashboards.
  • Integrations also generate actions, which execute procedures on command, like refreshing data, actuating a switch, turning on a lamp, etc. There are standard actions for the more common entity types, but the possibilities are endless.
  • You can create helpers, which further process information from your existing entities; helpers are entities themselves.
  • Entities can be analyzed and processed using automations, which run instructions based on the state of different entities. These instructions are actions executed on devices or entities.
My currently installed integrations.

Overall, it’s a fairly simple system, but it’s also highly scalable: you can make this as complicated as you want, as long as you follow these basic rules. Do you want to turn lights on with the sun? Make an automation that triggers on the sun entity at sunrise and sunset and actuates the light switch. Do you want an automated sprinkler system? Sure, make helpers for all the parameters and have an automation switch the sprinkler valve relays in order. Do you want a notification on your phone when a temperature sensor reads under 19.2 degrees, but only in the evening and only if there are no dishes in your dishwasher? Sure, although I don’t know why you would; I’m not here to judge. As long as it has some sort of connectivity, chances are you can make it work with Home Assistant.
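
For a taste of what that looks like in practice, here is a minimal sketch of the sunset example as a Home Assistant YAML automation. The entity ID is made up, and you would normally build this in the automation editor instead of writing it by hand:

```yaml
# Hypothetical example: turn on a porch light shortly before sunset.
# light.porch is a placeholder entity ID; yours will differ.
automation:
  - alias: "Porch light at sunset"
    trigger:
      - platform: sun
        event: sunset
        offset: "-00:15:00"   # fire 15 minutes before actual sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.porch
```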

A sample trigger setup for an automation.
A sample action setup for an automation.

Why isn’t this the norm?

Well, there are complications, mostly stemming from the fact that Home Assistant isn’t exactly maintenance-free: you need a device running the OS locally, which will require some tinkering. Also, as your instance grows more complicated, your points of failure sprawl with it, and while HA is overall fairly stable, it does throw the occasional tantrum.

There are also some companies that have locked down open access to their APIs in the name of “customer safety”. This is usually a measure to make their ecosystems even more of a walled garden, so I recommend simply avoiding these products: they are not in favor of right-to-repair, and I frankly have no sympathy for them. I do, however, recommend looking around for custom integrations that restore functionality in case you’re stuck with such a device.

There are also many finicky steps to get some integrations working: handing over login credentials is one thing, but sometimes that’s not enough; sometimes you need API keys, sometimes OAuth tokens, sometimes other stuff. These things are usually well documented, but they tend to be buried under layers of menus and interfaces that feel like afterthoughts. The worst one for me is LocalTuya, a custom integration for Tuya devices that provides, admittedly, a very useful increase in functionality over the official Tuya integration, but it takes many steps to get the API working, and the entities have to be set up by hand, without much in the way of help. I only have a single device at the moment, but configuring 12 entities blind was an absolute nightmare, and my stomach turns a little when I think of adding more devices.

I also have some issues with SQLite as the main database. Sure, it’s easy and fairly uninvolved to get working, but my Home Assistant database has corrupted itself one too many times without me doing much of anything. I switched to my local MySQL server about a month ago and the database has been much more reliable since.
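
If you want to do the same, the switch is a one-block change to the recorder integration. A minimal sketch, assuming you have already created the MySQL database and user (the credentials and host below are placeholders):

```yaml
# configuration.yaml: point the recorder at an external MySQL/MariaDB server.
# User, password, host, and database name are placeholders.
recorder:
  db_url: mysql://hauser:hapassword@192.168.1.10/homeassistant?charset=utf8mb4
```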

One of my custom dashboards for monitoring devices.

Making custom devices: ESPHome

If you’re weird like me (and let’s face it, if you’re reading this you already are), you probably want to plug everything into Home Assistant, even those things that weren’t really meant to be connected to a network. For me, it was a pool, but it can be anything that has sensors and/or actuators of some kind.

I have a pool with a solar thermal collector, a lamp, and a copper electrolytic cell. This means we have quite a few variables we would like to integrate into HA:

  • Temperature values for input, panel return, and output pipes. This not only allows me to get the current pool temperature, but also to get rough estimates about panel power capture.
  • Actuating relays for turning on the pool pump and lamp.
  • Controlling an H-bridge connected to the copper cell, allowing for control of voltage and polarity of the cell.

Obviously there isn’t an off-the-shelf device that does exactly this, and while you could cobble it together from various devices, it would be a janky mess, so building a custom IoT device actually makes some sense here.

In comes ESPHome: using the Espressif ESP32 or ESP8266 families of microcontrollers, Home Assistant can create custom devices using simple YAML config files and a long list of supported components. Just connect the sensors and actuators you want to the ESP, describe everything in the config file, and that’s it: you have your own IoT device, and a surprisingly flexible one at that. Sensors can be filtered, automations can be configured directly on board the controller, and so much more. I plan on doing a detailed review of the ESPHome suite at another time, but suffice it to say that it can make absolutely anything that can reasonably be connected to a microcontroller HA-capable.
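
As a rough idea of what such a config looks like, here is a trimmed-down sketch of a pool controller with one temperature probe and one pump relay. The pins, names, and component choices are illustrative only, and component names change between ESPHome releases:

```yaml
# Hypothetical pool controller sketch; pins and names are placeholders.
esphome:
  name: pool-controller

esp32:
  board: esp32dev

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:  # native connection to Home Assistant

dallas:
  - pin: GPIO4          # one-wire bus for DS18B20 temperature probes

sensor:
  - platform: dallas
    index: 0
    name: "Pool Return Temperature"
    filters:
      - sliding_window_moving_average:
          window_size: 5   # smooth out noisy readings

switch:
  - platform: gpio
    pin: GPIO16
    name: "Pool Pump Relay"
```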

Both of these devices are custom ESP32 implementations running complex tasks of data acquisition and actuation.

Why hasn’t anyone made this simpler?

It boggles my mind a little bit that no one has thought of a more hands-off approach to this. For most intents and purposes, a hardened Home Assistant brick that can just sit in a corner and get online via the cloud subscription is enough for your usual tech enthusiast.

Nabu Casa, the company behind Home Assistant’s development, already offers plug-and-play hardware in its Green and Yellow packages, but in my opinion there are way too many growing pains in HA for it to be a truly fire-and-forget solution: there is still way too much tinkering involved if you want to do everything HA is capable of.

So I wonder, why hasn’t there been a push to standardize IoT ecosystem interactions between different brands? Why have pretty much all IoT brands given up on any sort of interoperability? Well, money is the reason, but I wonder if this is a losing strategy: what’s the point of having three thousand apps to turn on an A/C unit and a couple of lamps? How does this usability nightmare not hurt sales of these systems?

It probably does, but the answer is that they don’t really care: they sell enough to be a revenue stream, and the extra work it would take to make a truly sustainable product is too much upfront cost to justify. In the end, all sustainable IoT ecosystems are passion projects: open-source and free software that challenges market incentives. There is an undercurrent of skepticism in my writing about tech, and it comes down to this: the market supports whatever is cheaper, not whatever is best, and there will be consequences if the tech sector keeps chasing cheap at all costs.

Closing thoughts

If you’re the tinkering type like me and you haven’t set up something like Home Assistant, please do! It’s genuinely useful and quite fun, but be prepared for a bit of head-banging: it doesn’t come for free. It’s now an essential part of my home, providing great data collection and intelligent operation and allowing for increased efficiency and automation, even if it came at the price of many hours spent staring at YAML config files and corrupted logs.

So please, if you work in this sector, remember what has made the Internet work: open, flexible standards that work everywhere and with everything. If those principles can be applied to IoT, I am confident that IoT can become a mass-appeal product.


Business in the front, party in the back: optimizing desktop setups for multi-PC applications

I recently started a new job as a network engineer, and with it came my first work laptop: a fairly decent Lenovo ThinkPad T14. While I am a fan of portability and uncluttered workspaces, I much prefer to use external input devices along with a second monitor, especially at the desk where I usually work.

Luckily I do have all of these things: a nice keyboard, a big 4K monitor, and a very smooth trackball (come on, you’ve read my articles already, you know I’m a weirdo like that). They are, however, connected to my personal laptop, and I don’t have a big enough desk (or wallet, for that matter) to duplicate it all. Some sharing is in order.

In my desperation, I reorganized my desk with the help of a few gizmos, which let me quickly switch my input devices and monitor between laptops while keeping both systems independent, and in a way that doesn’t drive me absolutely crazy. This is how I did it.

Problems, half measures, and a sore back

My job is essentially entirely remote: I’m basically half front-end developer, half tech support; I answer emails, read and compose docs, stare at code, and participate in meetings. Since I didn’t have much experience juggling two computers, I just plopped the new one next to my big rig and went to work like that for a couple of days. Many problems appeared immediately:

  • My back was sore: laptops on desks have you looking down at the screen, and with a small screen on a big desk, my back was certainly feeling it.
  • Laptop keyboards and trackpads are a pain in the long run: they are small, key travel is tiny, and they usually don’t have a numpad. The T14 certainly has one of the better keyboards on the market right now, but actual mechanical switches would be much better. The trackpad is certainly good (especially with its hardware buttons on top), but it’s fairly small and cramped (and don’t even get me started on the ThinkPad Nipple™). Also, raising the computer up to eye level makes both even harder to use.
  • Limited screen real estate: the screen is a 1920×1080 IPS 14-inch display, which is great, but it’s small: the scaling has to be large for text to be legible, and being accustomed to a dual-monitor setup just made it a pain overall.

Because my entire current setup hangs off a single USB-C port (more on that later), I just put my work laptop on my stand and used it like that for a while, but that quickly made it evident that swapping devices all day was going to be a messy and unproductive solution. What were my choices here?

Well, I could just use my personal laptop for work, but that is a recipe for disaster: mixing business and pleasure is generally a bad idea for privacy and security reasons, and there are other security measures on my work device that would make it difficult, if not impossible, to get everything running as it should.

I then turned to the idea of using the work laptop as a pivot computer: it just sits in a corner chugging away and I open a Remote Desktop Connection to it. RDP has a sophisticated feature set, including device and clipboard sharing, bidirectional audio for calls, the works. This seemed like a great idea: I could share all of my devices from my personal computer to my work computer and everything would be sorted, right?

Not so fast. My work device runs Windows and you can enable RDP in Settings, but the real problem was Active Directory: all of my login data lives on a company server to which I have no access, and the Remote Desktop server on my work laptop just refused to play ball with it: I got certificate issues, authentication issues, connection issues, and I simply couldn’t get it to work. If this were Windows Server, I could probably massage the service enough to make it work, but it isn’t, and it’s probably for the better: if a remote desktop session is compromised, it can cause catastrophic damage to everything you have access to, as the device doesn’t really distinguish between a remote session and a local one. So back to the drawing board it was.

I tried other solutions, but they all failed in one way or another. Switching inputs on my monitor? Doesn’t solve the device problem. Other remote desktop tools like VNC or AnyDesk? Either they didn’t support device sharing or I had to pay for subscriptions, on top of having to install unauthorized software on my work laptop, a big no-no.

My only recourse was hardware: a dedicated device that handles sharing and switching peripherals between computers, while the computers themselves are none the wiser. But how was I to implement this and have it play ball with my current setup?

My previous setup

My personal laptop is an HP Victus 16, sporting quite a few peripherals:

  • Logi G513 Carbon with MX Brown switches and custom keycaps. (over USB)
  • Kensington Trackball Expert wireless pointer device. (over USB)
  • Dell S2721QS 4K 27-inch monitor running at 1440p for better performance.
  • Behringer XENYX 302USB audio interface (over USB)
  • Gigabit Ethernet over USB from my dock.

This setup has my laptop screen as the main display with the monitor off to the side, and all devices connected through a USB-C dock. It lets me have everything hooked up with just two cables (the other being the power supply, since this is a “gamer” model with high power consumption). I really like docks for their flexibility: with USB-C native video transport and a high-speed USB 3 data link, I can switch from on-the-go to stationary and vice versa in mere seconds, all with hidden cables and reduced clutter.

This is very much a tangent, but I’ve always found docks the coolest thing ever. Ever since I saw an OG IBM ThinkPad on my dad’s office desk rocking Windows XP and a massive dock back in 2006 or so, I’ve appreciated the massive advantages in convenience and portability. My first laptop had a huge dock connector on the side, and USB-C has finally given me the ability to run power, video, and data over a single cable. If you have a laptop sitting semi-permanently on your desk, I highly recommend getting one. Sure, laptops are loud and underpowered compared to equivalent desktop PCs, but if you need portability, it doesn’t really get much better than this.

I’ve been using a Baseus Metal Gleam 6-in-1 USB-C dock: it has 3 USB 3.0 ports, USB-PD passthrough, an HDMI output, and a Gigabit Ethernet port. It’s enough for my needs and it’s also small, which meant I could mount it directly to the stand the laptop sits on.

Now I had to decide on a new layout: how exactly was I going to fit two laptops and a monitor on my desk without losing all my space?

Introducing the KVM

With all of this in mind, these were my objectives:

  • The monitor would become the primary screen, switching between devices as needed.
  • The keyboard and mouse must switch between laptops in sync with the screen.
  • I need to hear the audio of both computers simultaneously, although the main one would be the personal one.
  • Whatever device does the switching must have some sort of remote, in order to hide it under the desk for better cable management.

For my work computer, I simply duplicated my personal setup: a laptop stand and another of those USB-C docks. The audio situation was also simple, as the Behringer interface I’m using has a secondary 2-TRACK stereo connection: using a cheap USB sound card, a ground loop isolator (to prevent buzzing), and some random 3.5mm-to-RCA audio cable, I had both devices in my headphones without issue.

For the screen and the USB devices, I needed a KVM switch. The clunky acronym stands for Keyboard-Video-Mouse, and it’s exactly what it sounds like: you press a button, and your keyboard, mouse, and monitor are now connected to another machine. These are fairly niche devices, mostly relegated to server racks and other specialized applications, but they can still be found cheap in the power-user electronics market.

I got a UGREEN CM664 HDMI+USB KVM switch from AliExpress for cheap, and despite its low price it has everything I need: HDMI video switching, USB 3.0 switching, and a cute little wired remote perfect for attaching to my keyboard. It’s also fairly small, only big enough to fit all the large connectors, and requires no software: it’s just an HDMI pass-through and a USB hub that can switch between hosts.

Not to get too deep into the weeds here, but this device physically disconnects all the interfaces during switching. This means devices have to be re-enumerated and initialized, a second screen has to be instantiated, and all windows reordered, which takes a couple of seconds in total. That’s not a problem for me. There are KVM switches that emulate the presence of the devices while another computer is active in order to make the transition almost seamless, but that seemed a bit excessive for this application, especially given the considerable price hike.

Now it’s just a matter of hooking up everything together and we’re done, right?

A cable management nightmare

Well, not so fast. You may have noticed there are a lot of cables in the mix: tons of USB cables, network cables, audio cables, power bricks, the whole shebang. If not kept in check, this could quickly become a giant octopus of messy cables that eats up desk space and just flat out looks ugly.

My desk also has some storage underneath that must be able to slide out, so having cables dangling behind it is flat out not an option. To solve this I used zip ties and a twist on the usual mounting clips. I really wish those adhesive-backed plastic mounts worked: I like them, but having cables permanently pulling on a piece of double-sided tape just guarantees they’ll pop off at some point.

A better solution for me was a box of short washer-head screws: the wider head makes it easy to trap a zip tie under it, and it can hold a bunch of cables without pulling out while staying fairly discreet. Granted, you’re putting holes in your furniture, but I have found time and again that it is a worthwhile sacrifice to keep the cables in place for long periods of time. The screws are also reusable: just back them out a turn or two and the zip tie will come right out.

Once I got my enormous bundle of cables under control, it was time to test it out.

Performance and quirks

Overall, the whole thing works great: I can quickly switch between both laptops, sharing devices without an issue. I attached the remote to a corner of my laptop, which gives me a clean look and easy access. The switching is fairly quick, and all apps rearrange themselves as soon as the second display is detected, which is very useful when returning to a computer after a switch. Also, having the laptop screens still on is great for situational awareness when working with both machines at the same time. The entire setup uses slightly more space than it used to, but it’s a marginal difference compared to all of the advantages it has brought.

I thought sharing audio from both devices would be a bit of a mess, but surprisingly, no: hearing notifications from the other computer while playing music, or keeping a call going while switching computers, is extremely useful, and the expected overlap of sounds has turned out not to be a problem.

The KVM switching process, with its rediscovery and rearrangement of devices and applications, takes a couple of seconds, but that isn’t really a problem, at least for my sensibilities. I do wish the KVM had some way of reducing the lag on USB devices, which I feel is slightly too long.

There is also the problem of sleep: you have to tweak your power settings to prevent the computers from dozing off while you’re still looking at them. Since it’s entirely possible that I won’t touch one of the machines for a while, it’s not an unreasonable assumption on its part that it’s time to sleep, even when it isn’t.

Closing thoughts

Overall, this KVM solution has pretty much solved all my problems of parallel laptops: the devices are shared without a problem, and my desk has not been entirely consumed in the process. There are some quirks, but overall the device does exactly what it should.

I do feel, however, that it’s a very involved process: as work-from-home becomes a ubiquitous form of labor, I feel that a hardware solution that just does this for you, with some degree of customization, could be a real game changer for everyone in this situation. This is something that should be much easier, but it just isn’t, and there aren’t many products on the market that don’t require this kind of tinkering. Still, if you are so inclined, you can make it work.

I just hope I never see the day when a third computer has to be integrated.


How 400 lines of code saved me from madness: Using computers to organize high-attendance events

Note to readers: I can’t really show pictures of the event because most participants were underage and I don’t have a release to publish them here. This makes the pictures that are here look slightly off-topic, but they are the only ones I can actually show.

About a year ago, I got myself involved as a logistics officer for an event with an attendance in excess of 400 people. For five days, we had to make sure all of them had food, transport, merchandising, a place to sleep, a group to work with, and a good time.

We were constrained in budget, time, and manpower: we wanted to do everything with nothing, and there was just way too much data to crunch by hand. Brute-forcing this wasn’t going to work, which is exactly the kind of situation that gets me excited about some outside-of-the-box thinking.

We used computers, and they worked beautifully: this is how we kept track of the lives of 428 people for five days without losing our minds.

Dreams, experiences, and realizations

Carlos Dittborn, one of the people responsible for hosting the FIFA World Cup in Chile in 1962, had a quote that has resonated in this country for more than sixty years:

Because we have nothing, we want to do everything.

Carlos Dittborn, in an interview during Chile’s bid to host the 1962 FIFA World Cup.

If you’re anything like me, this quote captures the ethos of makers: it’s not about anything other than solving the problem; the thrill of making something work is enough, and that makes us do whatever it takes to achieve it.

This project was born out of my work as a consultant and youth leader in a religious organization: we had groups scattered all across the country, and they all knew of each other, but there hadn’t been an occasion for all of us to come together as one. The last time something like that had been attempted was back in 2015, and most people currently in the ranks of our youth groups had been too young to attend.

Overall, that first time was a success, but it highlighted a key reason why using machines matters: there was just way too much information to keep track of with nothing but minds and pieces of paper. It’s a remarkable achievement that it worked anyway, but it took around 40 people a couple of months of tremendous effort to get all the data crunched, and many mistakes were made during the event: things got lost, food got delayed, and people didn’t know where to be. Keeping tabs on 400-ish people is a task beyond what the human mind can manage on its own, so better solutions were needed.

There’s also the age variable: these participants are mostly kids. While they are old enough not to need permanent supervision, adolescents are not exactly known for following the rules or caring much about what the adults have to say, so the people in charge needed to be free of information burdens that could be better managed by computers.

In practical terms, we needed to:

  • Receive all registrations for the event in quasi-real-time, so we could set up sleeping accommodations for all participants and their leaders, along with food and health requirements, transportation, and so on.
  • Assign every participant to a group in which all the activities would be carried out. These groups also had requirements regarding gender, age, and place of origin.
  • Keep track of who ate at every meal, down to individual people, both to ensure correct payment for the food and to ensure no kid skipped more than a single meal.
  • Transport everyone to a different location on the second day, which meant getting them all onto the correct buses both heading to and leaving from that location.
  • For some activities, have the kids rank their preferences from a number of choices; those answers had to be tallied, the activities assigned, and the attendance lists handed out to participants and organizers alike.
  • Hand out supplies to each participant individually, keeping track of who had received what.

This is all pretty par for the course for any event, but there was another variable to wrangle: it all had to be done at a breakneck pace, by only five people, and with a budget of around 500 USD.

A need for speed (for cheap)

If it were maybe a couple dozen people, all of this could easily have been done with a copy of Microsoft Office and some macros. The main problem here was speed: groups had to be assigned as soon as the registration data was ready, all 400 people had to be checked in at every meal in less than an hour, and loading people onto the buses had to be done in under 90 minutes. It was an ambitious plan, but doing it any other way would have meant taking time away from the things that actually mattered. Another problem was concurrency: for the math to work out, we needed multiple people filling in data in our tables in real time while remaining independent of each other. For some applications, we also needed real-time monitoring that gave us insight into what exactly was going on, along with ways of ingesting data from many different sources.

You kinda-sorta can do this with a cloud solution like Google Sheets, but complex operations like assigning groups against set criteria are something a simple spreadsheet just can’t do comfortably: a cell is a cell, and operating on whole arrays of data as results is out of reach. I do know about Excel macros, but they have always felt like a hack to me; an actual programming language with a database is far more flexible.

On the other hand, how do you make the user faster? Even asking for something like a name would take way too much time, and memorizing a number would bring us to a standstill the second someone forgot theirs.

A solution materializes

Not everything in life is having fun and playing the guitar.

The backbone of all of this had to be a data storage and manipulation solution: a way to store structured data and run queries that satisfied our speed and concurrency requirements. We needed a database.

Fortunately, we already had a MySQL server on an internet-facing machine that we could take advantage of. Database engines are like an Excel spreadsheet on steroids: they can process truly mind-boggling amounts of information, run queries, automatically calculate data, serve multiple clients at once, you name it. Unlike a spreadsheet, though, accessing a database requires the Structured Query Language (SQL), which meant that if we were going to put this in front of non-engineers, a frontend was needed.

For this, I need to come clean: I’m not a very good programmer. Sure, I can build things that work, but they aren’t going to be pretty. I picked Python as my language of choice: interacting with MySQL servers is very well documented, and data processing can be done fairly quickly if you’re not sloppy about where your data goes. For a while I toyed with making a graphical interface using Tkinter and then curses, but since I only needed to accept keyboard input and display basic data, a simple terminal would do: input() and print() were the way to go. This saved me hours of debugging an interface, and made new iterations and modifications much faster.

A sample interface using print and input. The entire interface was reprinted with each cycle, but this was acceptable for our needs.

To get high throughput where counting people was required, I turned to the retail industry: I used Code-39 barcodes, each one encoding a four-digit number assigned to a participant. Barcode readers are cheap and most of them support Code-39, and making them work with my software was dead easy: when you plug a barcode reader into your computer, it is detected as a keyboard; all characters are typed as if someone had entered them, followed by an automatic press of the Enter key. This had two key advantages: a simple input field could read barcodes pretty much as fast as we could scan them, and if a code became damaged, it could be typed in manually.
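
That means a whole check-in frontend boils down to a loop around input() and an INSERT. Here is a minimal sketch of the idea using the mysql-connector-python package; the table, columns, and credentials are made up, and the real scripts live in the GitHub repo linked further down:

```python
# Minimal sketch of a barcode check-in loop; schema and credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="192.168.0.10", user="events", password="secret", database="event"
)
cursor = conn.cursor()

print("Scan a badge (or type the 4-digit code and press Enter). Ctrl+C to quit.")
while True:
    code = input("> ").strip()          # the scanner "types" the code and presses Enter
    if not code.isdigit():
        print("Not a valid code, try again.")
        continue
    cursor.execute(
        "INSERT INTO meal_checkins (participant_id, checkin_time) VALUES (%s, NOW())",
        (int(code),),
    )
    conn.commit()
    print(f"Checked in participant {code}.")
```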

A sample label for participants. The number next to their name is encoded in the barcode. G and B fields correspond to their group and bus respectively.

All the rest of the data processing was done using Python scripts that read the database and calculated everything we needed. There were two types of scripts; the first were frontends for real-time operations, such as:

  • Keeping track of meals.
  • Giving out t-shirts and notebooks.
  • Check-in for bus transportation.

The second type were one-time scripts that calculated new data from what we already had:

  • Assigning work groups for each participant, along with their leaders. These groups had age, sex, and origin requirements.
  • Assigning recreational activities and lectures for all participants based upon a list of preferences.

This gave us a new challenge, however: we had never arrived at explicit criteria for making groups like these before, so how could we make a machine think like a human?

A machine thinking like a person

Everyone that has had to organize an event of pretty much any kind has almost surely come to this conundrum: How do you split a large group into smaller groups?

For small groups, rules of thumb and educated guesses are enough, but beyond maybe 70-80 people it starts to get incredibly tedious, and mistakes are pretty much bound to happen. Computers are deterministic machines, however, so they can handle a classification problem like this one, provided we can spell out all of the criteria. So, how do we do it?

Workgroups

Let’s look at our first example: we need to divide all participants into groups of 10 people. Before we get into the weeds with programming, we need to ask the fundamental question of this problem: What makes our groups better? Let’s see what we know:

  • All groups are to perform activities under the supervision of two adult leaders. These tasks are centered around knowing each other, sharing opinions and experiences, looking to find common experiences among different groups.
  • All participants are teenagers aged 14 to 18 years old, and they’re usually split into age groups within their places of origin, so mixing them up would be preferable.
  • Participants are of mixed gender, with a slight majority of women.
  • Participants are very likely to know people from their own place of origin, and likely to know people from nearby places of origin, as they often have joint activities. A key aspect of this event is for them to get to know more people, so we need them as separated as possible.

With this in mind, we can arrive at some criteria:

  • Groups will be of 10 or 11 people, a necessity in the very likely scenario that the number of participants is not evenly divisible by 10.
  • Groups have to conserve the gender ratio as much as possible: if the whole is 40% men and 60% women, then all groups should have 6 women and 4 men.
  • Places of origin should be split as much as possible across all groups, to avoid concentrations of people who know each other. People from the same local area should also be split up if possible.
  • Ages of participants will have to be as mixed as possible.

Great, now we have our criteria. How can we turn this into code?

The approach I went with involves sorting the table of participants and then assigning a group to each one in a rotating fashion. I call this method sort-and-box, as those are the two key steps involved. Conceptually, there is a box for each group, and each participant is assigned to a box sequentially: the first person goes to group 1, the second to group 2, and so on. Once we’re out of boxes, we roll back over to the first one, and we stop once everyone has a group. If we have sorted the table correctly, this guarantees maximum dispersion among participants, but it creates a new challenge: how do we sort the list of participants?

Conceptual view of the sort-and-box method.

This approach rests on a single core tenet: if you put people next to each other in the table, they will end up in different groups; there is no way for consecutive participants in the table to land in the same group. So, to get maximum dispersion, we need to sort all the people who need to be kept apart next to each other. We can also sort by more than one criterion through recursion: we sort inside the already-sorted blocks. This creates a tree of sorted participants, where the first sort has first priority when splitting, then the second, and so forth. An additional layer of separation can be achieved by placing the larger groups (by place of origin) first, which ensures the large groups (the hardest ones to split) are spread as evenly as possible, without interference from smaller ones.

With this, a Python script was created that did the following steps:

  1. Get the list of participants from the SQL database.
  2. Calculate a multiplier for each place of origin: the product of two numbers, the number of people from that place of origin divided by the total number from its local area, and the number of people in that local area divided by the total number of people in attendance. Sorting by this number puts the bigger origins and zones first. These numbers are appended to each participant.
  3. Sort the table by origin multiplier, gender, and age, in that order.
  4. Create empty groups by dividing the number of participants by 10 and discarding decimals.
  5. Assign each person a group using the round-robin method described above.
  6. Add a column to the table with each participant’s group.

At last, our work groups were complete. Sorting all 400-ish participants took around three seconds, and I’m sure you could make it faster if you wanted. This meant more time to receive registration data and more time to print out the barcode labels we needed.
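
For the curious, here is a stripped-down sketch of the sort-and-box logic described above. It is not the production script (that lives in the GitHub repo mentioned below), and the field names and group size are illustrative:

```python
# Simplified sort-and-box sketch; field names and group size are illustrative.
from collections import Counter

GROUP_SIZE = 10

def assign_groups(participants):
    """participants: list of dicts with 'origin', 'area', 'gender', and 'age' keys.
    Adds a 'group' key to each participant and returns the sorted list."""
    total = len(participants)
    per_origin = Counter(p["origin"] for p in participants)
    per_area = Counter(p["area"] for p in participants)

    # Origin multiplier as described in step 2:
    # (origin's share of its area) * (area's share of the total attendance).
    def multiplier(p):
        return (per_origin[p["origin"]] / per_area[p["area"]]) * (per_area[p["area"]] / total)

    # Sort step: origin multiplier first (largest first), then origin, gender, and age,
    # so people who must be kept apart end up next to each other in the table.
    ordered = sorted(
        participants,
        key=lambda p: (-multiplier(p), p["origin"], p["gender"], p["age"]),
    )

    # Box step: deal participants into groups round-robin, so consecutive rows
    # always land in different groups.
    n_groups = max(1, total // GROUP_SIZE)   # groups end up with 10 or 11 people
    for i, p in enumerate(ordered):
        p["group"] = (i % n_groups) + 1
    return ordered
```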

Activities

On two occasions, the groups split up among different lectures and recreational activities. This was intended both as an opportunity to share experiences within the event and to create spaces where people could meet outside of their groups. In both cases, these were the requirements:

  • Each activity had a limited maximum number of participants.
  • Participants had to decide which activity they wanted to attend, but the last call was made by us.
  • Participants who got their preferences in first would get priority in the assignment.

As we had been using Google Forms for most of this, participants were asked to rank all possible choices from first option down to last. That data was then imported into an SQL table and processed as follows:

  • Get the maximum number of participants for each activity.
  • Each participant is analyzed separately, with the earliest ones to respond going in first.
  • For each participant, the first option was checked. If there were remaining seats, then the person would be assigned that activity.
  • If there weren’t any seats left, the program moved on to the next option until one with open seats was found.

Because there were more seats than participants, everyone got a place, even if it was their last choice. Having a program do this meant the lists could be collated mere minutes after the poll closed, which meant more time to sort out the activities and more time for participants to get their choices in.
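
The assignment logic itself fits in a few lines. A minimal sketch of the idea (the data structures here are made up; the real script is in the repo linked below):

```python
# Minimal sketch of the first-come, first-served preference assignment.
# 'responses' is assumed to be ordered by submission time; capacities are made up.
def assign_activities(responses, capacity):
    """responses: list of (participant_id, [choices ranked best-first]).
    capacity: dict mapping activity -> maximum number of seats."""
    seats_left = dict(capacity)
    assignment = {}
    for participant, choices in responses:   # earliest respondents go first
        for activity in choices:             # walk the ranking until a seat is free
            if seats_left.get(activity, 0) > 0:
                assignment[participant] = activity
                seats_left[activity] -= 1
                break
    return assignment

# Example: two activities with room for two people each.
print(assign_activities(
    [(1, ["climbing", "chess"]), (2, ["climbing", "chess"]), (3, ["climbing", "chess"])],
    {"climbing": 2, "chess": 2},
))  # participant 3 overflows into their second choice
```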

An excerpt of the table given to participants (with their names hidden) for checking into their activities. This was made entirely automatically.

Overall, this is an exercise in teaching ourselves about our own process of thought, so we can teach some sand how to do it for us. Best of all, sand can usually do it faster and more consistently than a human can.

Want to know more? Check out my GitHub repo! It has all scripts I used for this.

Duct tape and cardboard

Prussian general Helmuth von Moltke the Elder, one of the key figures in the unification of Germany, once said:

No plan survives first contact with the enemy.

Helmuth von Moltke the Elder, Kriegsgeschichtliche Einzelschriften (1880)

There is some nuance to this quote: there are obvious advantages to planning ahead, but all plans will sooner or later face unforeseen challenges; the real world is too complex, and there are too many variables at play to account for all of them, so you have to be flexible if your objectives are to come true.

This project was certainly no exception. For one, all of these scripts and data analysis tools were very vulnerable to the garbage-in, garbage-out problem: if the registration data had errors or gaps, everything started breaking real fast. Because this data ultimately came from humans, we needed to make sure the user could not make a mistake even if they wanted to. Each place of origin received a Google Sheets spreadsheet in which they had to type all of the information about their participants, so how could we idiot-proof it?

In comes data validation: spreadsheets have become leviathans of features, and one of them is the ability to inspect data as it is typed, so that nothing that doesn’t match specific rules can be entered. First, all cells that were not meant to be edited were locked, preventing the user from breaking things like calculated fields or sheet concatenation. Then, every input column was assigned a type: dates had to actually exist and follow a specific format (DD-MM-YYYY), phone numbers had to have a specific length and area code (enforced with regular expressions, something Excel can’t do but Google Sheets can), emails had to have valid syntax, and so on. Also, once you filled out a single field in a row, you had to fill in all the others, otherwise you got an error.
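
As an example of the kind of rule involved, a custom data-validation formula along these lines can reject anything that isn’t shaped like a Chilean mobile number (the exact pattern here is illustrative, not necessarily the one we used):

```
=REGEXMATCH(TO_TEXT(B2), "^\+?56 ?9 ?[0-9]{8}$")
```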

Once all the sheets were received, a big table was built from all the data, which of course had to be skimmed by a human before ingestion: you make something idiot-proof, and the world makes better idiots. Fortunately, most of the errors had been caught in time and only minor corrections had to be made.

Then came the barcodes. Our initial plan was to make wristbands and print the codes on them; we even ran tests to check that our readers could pick up the codes correctly. However, a week before the event, the printing press informed us that they would not be able to fulfill our order in time. This not only meant we needed a different way to get the barcodes to the participants, it also meant we had to design our own labels, since the press was going to handle that in-house.

We quickly solved it using the notebooks we were already giving out: a simple paper label on the rear cover had us covered, no problem. But how could we make 400 personalized labels in just a couple of days?

The answer is very simple: Microsoft Word. As it turns out, it has an incredibly powerful label maker, which can take a table of raw data and churn out labels to your heart’s content. It can even generate barcodes on the fly, which was very handy on this occasion. In about two hours we had all the labels designed and printed, and by the afternoon all of them had been placed inside the notebooks. It was tight (we finished the day before the event), but it was enough to save us from scrapping the entire control system.

The day comes

I was wearing many hats during this whole ordeal, live sound included.

Our first task turned out to be excellent for fixing late bugs and smoothing out errors: each place of origin arrived separately, and each participant was given a t-shirt and the aforementioned notebook. Each barcode was scanned to keep track of who had received what. Because the delegations arrived at different times, the pace was relatively calm and we could enter data manually if need be. Some bugs were found and quickly patched in anticipation of the big event: the meals.

The problem with meals was one of speed: our calculations showed that to feed everyone in the allocated time, we would have to push out a meal every 13 seconds. Our catering crew was good, but if we slowed them down even a tiny bit, we could jeopardize our ability to serve everyone in time; failure was out of the question. The tension was palpable, then, when the queue started backing up, but it wasn’t because we couldn’t keep up: the serving line itself was at capacity and had backed up to where we were scanning the barcodes. Even a single scanner could keep up with the serving rate, and two were enough for pretty much any situation.

Assigning the activities was also a huge success: everything from composing the registration form to distributing results to participants was done in a couple of hours, with only minor tweaking needed each time, which the massive time savings now made possible. Overall, participants were very satisfied with the distribution of activities, and our policy of transparency about the selection rules meant we got almost no complaints or last-minute rearrangements.

Our biggest hurdle speed-wise was boarding the buses: 10 buses of 45 people each had to be completely filled and emptied in both directions, in no more than 60 minutes each time. The real bottleneck was actually getting people to walk to their bus fast enough, but after a few slower runs we found our rhythm, and the return journey was even done ahead of schedule, with all buses boarded in under 50 minutes.

Even after the event itself, our control scheme kept being useful: attendance numbers helped us review which activities were most popular, and our meal counts were within 5% of what the catering crew actually served, which convinced both us and them that we were paying the right amount of money for over 2,000 meals.

Lessons learned

This entire project was a massive success from beginning to end. While most tech-savvy people will agree that it is not a particularly complicated project, introducing even a modest amount of computational power to an event that requires this much information processing generates enormous time savings, and that has one key consequence: we, the people making the event work, had more time to attend to the kinds of issues you cannot account for beforehand, instead of trying to wrangle an overgrown spreadsheet.

Another advantage is consistency: when the rules of the game are clear, computers do a better job of making decisions than we do, and if we make a conscious effort to eliminate bias from our algorithms, we can create fairer solutions that maximize the chances of consistently good outcomes. Being transparent about what your code does also lends legitimacy to unfamiliar software: if you can explain what your code does in layman’s terms, chances are people will trust it and follow the instructions it gives. Be cautious, however; computers are bound to the biases and prejudices of whoever programs them, so do not put your trust in them blindly.

Even when problems came up (uncaught bugs, missing functionality, and a need to adapt to changes in the event program), our software was so simple that a couple of lines of code were usually enough to get it to do what we wanted. We even re-ran the assignment scripts multiple times when bugs were caught, and everything was so fast that we could redo it all with minimal time lost.

Conclusions

Perhaps the most astute observation from all of this is one of systems architecture: machines have to be designed to serve you, not the other way around. If you create an automaton that plays to its strengths, offloading mind-numbing work from people in turn makes them more useful, because you’re taking advantage of their humanity. More time for us meant we could plan everything out better in advance, and the trust we placed in the machine meant we had more time to think about what we were doing and why: we wanted to give these kids an unforgettable experience, and the chance to grow as people, together, and that is something machines can’t do.

What we do with systems and machines and automation, we do because it gives us our humanity back from the void of data. Adopting technology is an imperative not just because it’s fun or useful, but because it gives us the tools we need to comprehend and interact with an increasingly complex world.

I sometimes feel that technology is currently on trial in the public consciousness: the endless march of innovation often leaves us jaded and skeptical of adopting these tools, and for good reason. But like all tools, they are value-neutral; it’s up to us to decide how and why we use them.

There are also many reasons to be distrustful: we’re getting a better understanding of how social media negatively affects our relationships and self-image, we’ve seen what tech companies are capable of doing for a quick buck, and after a pandemic that had us staring at screens for sixteen hours a day, it’s understandable that we might wish to escape these machines for good.

But these machines also give us unprecedented abilities: to communicate at high speed to and from anywhere in the world, to build models that help us understand the world around us, to create new solutions, to save lives, to use them as tools of freedom and dignity, to preserve our past, to shape our future, and to allow us to focus on being human.

What we created here is none of those things, but it allowed us to put on an event that I’m sure will be a high point for everyone who attended. We had the tools at our disposal, and we used them effectively to create an amazing experience for everyone, without losing our sanity in the process, and that feels great. It fills you up not only with pride and joy, but with a tremendous sense of accomplishment and purpose.

So please, if you can, use them to create systems that decrease the amount of suck in this world. Craft new experiences, push the boundaries of what is possible, be amazed by its sheer power, and maybe you will create something amazing in the process.


It’s Free Real Estate: DIY Solar Pool Heating System

More than five years ago, I set out to solve one of my biggest grievances with my home: a very, very cold pool. Even in summer it was unbearable to bathe in for more than a few minutes; we even considered filling it in with dirt.

Here’s how I fixed it on a tight budget, and what I learned doing it.

The issue

Our house came with a very nice 32.6 m³ freshwater pool; it was a big selling point and one of the main reasons we bought it. We imagined it would be great for the hot summers of central Chile, and a centerpiece of household social activities. It soon became clear that it would not be so.

That pool would, on the hottest of summer days, never really get past 21 °C. Getting in might be refreshing for a while, but it soon chilled you to the bone. Most sources on the Internet indicate that a reasonable temperature for a freshwater pool is at least 24 °C, and those three degrees make a huge difference. Remember, 21 °C is the best case; in practice the actual temperature was quite a few degrees lower.

For one, the pool is lightly colored, and painting it anything short of pitch black wouldn’t have made much of a difference anyway, because a large tree shades it for most of the day. Fixing either of these problems was out of the question, as it would not pass aesthetic inspection (my mom). For a while we even considered filling in the pool to get some extra garden space, but it always felt like a waste. The hunt was on, then: a new way of heating the pool was needed.

Choosing the right way

The first question was which energy source I was going to use: it had to be cheap both upfront and over time, and already available at my house. That basically meant (at least in principle) either gas or electricity. For gas, you can install what is essentially an oversized boiler, while electric solutions involve resistive heating (like a hot water tank) or a heat pump. All of these options quickly stopped making sense for my budget; both installation and running costs would have been massive, as energy is expensive here.

In comes solar heating. This boils down to circulating water through a black pipe placed in the sun; the pipe heats up and transfers that heat to the water. The advantages were clear: no energy costs and very basic infrastructure. Next to our pool filter lies our roofed driveway, which, despite being on the south side of a tall house (the worst side in the southern hemisphere), had enough space to clear its shadow for most of the day. This was the way to go.

Designing solar water heaters from scratch

You can buy ready-made solar pool heaters, which are essentially flat panels of small tubes (about 8 mm in diameter) that can be laid on a roof and piped to and from the pool filter, but these are expensive and usually hard to get if you’re not an installer (at least over here). Also, you read the title; you know where we’re going with this.

To make low-temperature solar thermal collectors, we need something that can withstand UV light, is somewhat flexible for ease of installation, and, ideally, is black. In comes polyethylene pipe, a flexible-ish black pipe meant for irrigation. Smaller pipes give you more surface area per unit of water volume, so the smallest size easily available, half-inch, was used.

Then came the question of area: how much roof space do you need to fill with panels to get good temperatures? My reflex answer is as much as you can, but there are some difficulties with that approach:

  • The more panels you put, the bigger the pump you will need to push water through them, and the higher the operating pressure you will need.
  • Water is heavy and your panels will be full of it; be careful how much weight you place on your roof.
  • For this application having panels in the shade is not really harmful, but it will be wasted space and pressure; try to put only as many panels as you actually need.

Figuring out how many panels you need to heat up a pool is rather difficult: you will most certainly end up partially eyeballing it. However, there are some important facts you need to consider:

  • How big your pool is and how much of a temperature difference you actually want.
  • The angle of your roof and the direction it is facing.
  • The height of your roof and the power of your pump, as together they will dictate your flow rate.

For us, what made sense was around 500 m of poly pipe exposed to the sun, and we had a roof that was readily accessible right next to the pool pump. That number is somewhat arbitrary and has more to do with how we went about building it, but it ended up working out in the end.

Designing the panels

To make panels that would actually work, we set the following criteria:

  • The panels must be small and light enough to be lifted to the roof by a single person.
  • The panels must be able to be replaced if necessary.
  • The panels must be arranged in such a way as to have the smallest possible impact in flow rates and pressures.

Because we went with half-inch poly pipe, putting panels in parallel was pretty much mandatory, so we decided to make lots of small panels we could haul up onto the roof and then connect into an array. After some quick calculations, we figured a flat spiral a meter in diameter would hold roughly 50 m of pipe, which meant we could build 10 lightweight spirals: the pipe was tied with steel wire to a wooden cross every four turns, and after many, many hours of rolling, we had our panels.

Ten panels also turned out to be a bit of a magic number: five parallel sets of two panels roughly matches the cross-sectional area of the 50 mm pipe running to and from the pool, which meant pressure loss would not be that bad. The total internal volume of the panels was around 350 L, which meant the waterline would only recede by around a centimeter when they filled. This was the winning combination.

Connecting it to the pool filter

There are three key requirements for the connection to the pool. First, the water circulating through the panels must have already passed through the filter, so as to prevent blockages. Second, the user must be able to control not just whether the panels get water, but how much water gets up to them, so the temperature can be adjusted without sacrificing too much flow and pressure. Third, care must be taken to keep the pipe runs as short as possible; every fitting and every jump in height costs flow and pressure.

With all of this in mind, and blessed with a roof right next to the pump house, the output of the filter was teed off in two places, with a ball valve installed between the tees: this is our mixing valve, which lets us blend cold water from the pool with warm water from the panels to control the temperature. The first tee in the chain connects to the panel valve and then up to the roof, where five manifolds feed the poly pipe spirals, with a matching set of return manifolds downstream of the panels. The return from the panels comes back down into the second tee, and from there back to the pool.

There are some considerations here. A ball valve for mixing is not the most precise way of controlling temperature; something like a gate valve gives you more control, but they are a lot more expensive, and you can still adjust the temperature just fine with a little finesse on the valve handle. Also, when the pump turns off, a vacuum forms inside the panels as the water drains back down under gravity and nothing replaces it. For these panels, I found that the back pressure from the return lines was enough to break the vacuum and prevent an implosion, but for taller roofs I would recommend adding a vacuum breaker (essentially a valve that opens when the pressure inside the panels drops below atmospheric and lets air in) just in case.

And, well, that’s it! By opening the panel valve and slowly closing the mixing valve, water will start to go up the panels, and heat capture will commence.

Using the system in practice

Bernoulli’s equation, reduced to simple hydrostatics, tells us that a column of water of height h pushes back with a pressure proportional to that height. For us, this means there is a minimum operating pressure below which the panels will not actually get any water: the column will just sit peacefully in your pipes without ever topping the highest point of the system. The same equation gives us the answer:

Pcritical [Pa] = Ļ [kg/m³] Ā· g [m/s²] Ā· h [m]

Where P is the minimum pressure you need, Ļ is the density of the water, g is gravitational acceleration, and h is the difference in height between your pump and the tallest point of your panel. You can also kinda eyeball this: close the mixing valve bit by bit until you start hearing water coming back through the return pipe, and then back off until you can’t hear it anymore: that’s your minimum pressure.
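As a worked example (the roof height here is an assumed figure; measure your own):

    # Minimum pump pressure needed to top the panels, straight from the formula above.
    # The height is an example value; measure from your pump to the highest point
    # of your own panels.
    rho = 1000.0   # density of water, kg/m^3
    g = 9.81       # gravitational acceleration, m/s^2
    h = 4.0        # pump-to-panel height difference, m (assumed)

    p_critical = rho * g * h   # in pascals
    print(f"~{p_critical/1000:.0f} kPa = {p_critical/1e5:.2f} bar = {p_critical/6894.76:.1f} psi")
    # ~39 kPa (0.39 bar, 5.7 psi) for a 4 m rise, which a typical pool pump can
    # deliver on top of the filter's own pressure drop.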

With this much pipe up on the roof, passing the entire flow of water through the panels is somewhat unnecessary: there are diminishing returns once the water starts heating up, so you only want to close the mixing valve far enough that the panels don’t get much above ambient temperature (so you don’t lose heat back to the air); that’s the sweet spot for both the panels and your pump. If the pool gets too hot, then losing heat is exactly what you need, so just open the mixing valve a little more and in a day or two you will have a cooler pool.

Unforeseen circumstances

Great, your pool is now warm! Unfortunately, this is not without consequences. For one, warmer pools lose water to evaporation a lot faster than cooler ones, so expect to fill it up more often, and be mindful of your water bill. Warmer pools are also much more attractive to algae, which grow a lot faster in these waters: maintaining good chlorine levels, incorporating some sort of electrolysis cell for adding copper ions, and cleaning the pool regularly are a must, unless green and turbid water is what you want.

After much experimentation, the winning combination for me has been one and a half TCCA tablets per week, copper ions from the electrolytic cell, and weekly vacuuming and sweeping; that is enough to keep algae at bay. Remember that the actual quantities will depend on your temperatures and the volume of your pool.

Though it isn’t my case, if you happen to live in a place where freezing temperatures are common, it’s very important to drain the panels for the winter: usually popping open the filter cap and the drain plug for a couple of hours is enough; otherwise, prepare for burst pipes and cracked joints. In the same vein, remember to paint your PVC pipes every so often: UV light is not kind to polymers, so avoid exposure where you can.

On a more humorous note: my panels usually drain almost completely overnight, which means the pump has to push all that air out of the pipes first thing in the morning, resulting in a very distinctive noise: my pool farts!

A five-year retrospective: closing thoughts

This project turned out to be a huge success, not only for my household but for me: designing and building it taught me many useful skills. The art of the educated guess cannot be overstated; sometimes the only thing you need to succeed is a ballpark back-of-the-envelope calculation. By applying some high school physics and a bit of blood, sweat, and tears, we ended up with a pool that regularly hits 28ĀŗC and beyond, and it has become the centerpiece of our beautiful garden. If you want to get into some low-stakes plumbing, the low pressures and big pipes are a great way to start, and even a large pool can be done relatively cheaply, definitely more so than hiring someone to do it. Best of all, you’ll be doing it in an environmentally friendly way.

Categories
Tech Explorations

Building a lab-grade power supply from old computer parts

A bench power supply is a fundamental tool for testing electronics, allowing flexible power delivery to whatever devices make their way to your bench. As electronics have become ubiquitous, DC power supplies have become easy to find, and building a capable unit from scrap electronics is a very budget-friendly way to expand the capabilities of your setup.

I’m not beating around the bush: this isn’t a guide to making a fully-featured power supply for cheap. It’s a hacky, cobbled-together device that could be so much more capable, but I just don’t need it to be: it exists so I can charge batteries, power junk to see if it works, and get some voltages out into the world when I’m too lazy to go find a power brick. It’s ugly and profoundly utilitarian, but it works.

I’ve got a ton of ATX power supplies, and you probably do too

I’m willing to bet that when IBM launched the PC AT in 1984, they didn’t expect that its overall layout and design would become the de facto standard for computers, especially forty years later. One would be forgiven for questioning how we came to this predicament, because there are many things to hate about the AT standard: the card risers are barely adequate for holding modern GPUs, the power supplies are way too bulky and have a rat’s nest of wires you may not need, the connectors suck, and so, so much more. However, it is what stuck, so we’re stuck with it too.

This means that pretty much every desktop computer with a tower form factor has an ATX (AT eXtended, basically a beefed-up AT standard for slightly less crap and more modern applications) compatible power supply, and pretty much everything is in more or less the same place inside the chassis, which makes it easy to find parts that all work with each other.

If you’ve ever disassembled a desktop computer (and let’s face it, if you’re reading this you probably have), you probably ended up throwing the PSU into a pile of them that you look at every so often thinking “I should probably do something with them”; well, here we are.

Contemporary power supplies usually have a few components in common:

  • A 24-pin motherboard connector. (+3.3V, +5V, +12V, -12V, 5Vsb)
  • A 4 or 8-pin processor connector. (+12V)
  • A PCIe power connector with 6 or 8 pins; higher-power models have multiple connectors. (+12V)
  • Accessory connectors, usually SATA and/or Molex connectors, for stuff like storage drives, optical drives, fans, etc. (+5V, +12V)

These devices are extraordinarily dumb: while the motherboard does have some control over their operation, the protocol is extremely simple. A +5V standby rail powers the control circuitry, which turns on the supply by pulling the PWR_ON line to ground, and is notified that the PSU is ready to go when the PG (power good) line is pulled to +5V. That’s it. The wide array of voltages and the simple communications make these supplies an exceptional way of powering almost everything. Almost.
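To make the handshake concrete, here’s a toy model of the logic in Python. It’s purely illustrative and doesn’t drive any hardware; it just encodes what the toggle switch and indicator lamps in the build below end up implementing:

    # Toy model of the ATX soft-power handshake described above. Purely
    # illustrative: no real hardware involved, just the logic that the
    # front-panel switch and indicator lamps implement further down.

    def atx_state(mains_present: bool, ps_on_grounded: bool, power_good: bool) -> str:
        if not mains_present:
            return "dead (no mains, not even +5Vsb)"
        if not ps_on_grounded:
            return "standby (+5Vsb only; 'AC present' lamp lit)"
        if not power_good:
            return "starting (main rails not yet in regulation)"
        return "running (PG high; 'ready' lamp lit, safe to draw power)"

    # Flipping the front-panel toggle grounds PWR_ON and the supply spins up:
    print(atx_state(mains_present=True, ps_on_grounded=True, power_good=True))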

Most bench power supplies are adjustable, having both voltage and current control over a wide range of conditions, which is very handy for powering that pesky device that uses a weird voltage, or for running tests under different conditions. There may be ways of modifying the feedback circuitry of the switch-mode supply inside to make it adjustable, but I’m not knowledgeable enough in electronics to do so, and from what I’ve seen, it might not even be possible.

Some jellybean parts from AliExpress, a box, and some soldering later

With all these factors taken into account, the requirements are as follows:

  • I want to use an ATX power supply from an old computer.
  • I want all the voltages from the ATX standard available for use.
  • I want an adjustable regulator that can do both Buck and Boost, so I can get a wide range of voltages.
  • The regulator must have both constant voltage (CV) and constant current (CC) capabilities.
  • Having two regulators would be nice.
  • The power supply must be at least 150W total.

From my pile of scrap I fished out an FSP 250-60HEN 250W ATX power supply. It’s fairly old, but it has a couple of features I like:

  • It has a big fan on the top, which makes it quieter.
  • It has two 12V rails: one for the processor connector, another for everything else.
  • The wire gauges are all fairly similar, which makes the harness easier to bundle afterwards.

With this, I cut off all the connectors and separated everything by color: orange is +3.3V, red is +5V, yellow is +12V, the lonely blue wire is -12V, black is ground, and the status cables are green for power on, gray for power good, purple for +5Vsb, plus a ground to make it all work. The rails were soldered to ring terminals for connecting to banana plugs on the front, and the +12V rail from the processor connector was kept separate. Some cheap binding post/banana plug combos from AliExpress and a heinous 3D print job that peeled off the print bed halfway through later, I had some voltages to work with. The power-on signal went to a toggle switch that connects it to ground (this is my main power switch), the +5Vsb line drives an indicator to show the device has AC power, and the power good line lights up another indicator to show the device is ready to be used.

For the regulators, I went for some nifty panel-mount units I found on AliExpress for cheap: they can handle a decent amount of power, they have a usable interface, and they have an extensive range, doing 0-36V at 0-5A, all from the second +12V rail. Pretty cool! Add some banana plug cables, alligator clips, some other accessories, a couple of zip ties, some aluminum tape, and a bit of swearing, and we have a supply!
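One sanity check worth doing before ordering modules like these is how much current they will pull from the 12V rail at full tilt. Here’s the back-of-the-envelope version; the efficiency figure is an assumption on my part, not a measurement:

    # How hard does a boost module lean on the 12 V rail at its worst case?
    # The converter efficiency is an assumed ballpark, not a measured value.
    v_in = 12.0        # the second +12V rail (the old CPU connector)
    v_out = 36.0       # worst-case output voltage
    i_out = 5.0        # worst-case output current
    efficiency = 0.90  # assumed

    p_out = v_out * i_out                 # 180 W at the output terminals
    i_in = p_out / (efficiency * v_in)    # current drawn from the 12 V rail
    print(f"{p_out:.0f} W out -> ~{i_in:.1f} A from the 12 V rail")
    # ~17 A: check the amperage printed on your PSU's label before going all out.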

Ups and downs

I’m not going to sugar-coat this: it is a quick and dirty project. The thing is ugly, it looks like it’s going to kill you, and it gives off a very “rough around the edges” vibe, but it works exactly as I hoped: the regulators work great, the fixed voltages are no problem, and all the control devices do what they should. There are a few things worth noting though:

  • The regulators have an interesting way of performing constant-current duty: instead of a proper control loop holding the current at the setpoint, the device shoves out whatever voltage you set and then observes; it steps the output voltage down until the measured current falls below your target, then measures again, homing in on the desired current in discrete steps (see the sketch after this list). This perturb-and-observe behavior is fine for steady-state loads, but if you have sensitive electronics like LEDs or integrated circuits, be mindful to set your voltage to a safe level before enabling CC mode; failing to do so could put an unsafe voltage on your terminals.
  • The measurements from the regulators are accurate but not perfect; if you need precision, use a multimeter and short leads.
  • The fixed outputs have no onboard measurement other than what is needed for protection, so be careful about shorting these out.
  • I messed up the settings on my print and it came out really deformed. If I weren’t lazy, I’d reprint it with better bed adhesion, but I am. Nothing some tape won’t solve. I might redo it later, but the thought of undoing all those binding posts makes me queasy.
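For the curious, here is that constant-current behavior sketched out in Python. The step size and the resistive load model are invented for illustration; this is emphatically not the module’s actual firmware:

    # Rough sketch of the perturb-and-observe current limiting I observed on
    # these modules, as I understand it. The load model and step size are
    # invented for illustration; this is not the actual firmware.

    def cc_step_down(v_set: float, i_target: float, load_ohms: float,
                     step: float = 0.5) -> float:
        """Step the output voltage down until the measured current drops below target."""
        v_out = v_set                      # starts at whatever voltage you dialed in!
        while v_out / load_ohms > i_target:
            v_out -= step                  # perturb: lower the output a notch
            # ...then observe: re-measure the current and loop
        return v_out

    # A 5 ohm load with a 1 A limit: the output starts at the full 12 V
    # (12/5 = 2.4 A, briefly over the limit) and walks down in steps to 5.0 V,
    # where 1 A flows.
    print(f"settles at {cc_step_down(v_set=12.0, i_target=1.0, load_ohms=5.0):.1f} V")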

In the end, it’s like having a cheap AliExpress bench supply for about a quarter of the price. Pretty good, I’d say.

The tools you have are better than the tools you don’t

I’ve been working with this for about a month now, and I wonder how I made it this far without a bench power supply. Building my own tools gives me tons of satisfaction, and I hope to keep using and improving this device in the future. Sometimes the tool you can build with what you have is the best tool you can possibly get, and it will probably take you farther than waiting for the shiniest gadget.

So yeah, if you have a pile of junk computer parts, build a power supply! You’ll get lots of mileage out of it and it will open lots of doors in your electronics adventures, not to mention the money it’ll save you.

Get building!

Categories
Unwarranted Opinions

Personal computers are done and using a 15 year old computer for a week made me realize it

When I started writing the draft for this article back in March, I was spinning up this narrative that old laptops still have uses as writing machines; devices used for distraction-free text composition, especially if you could get a higher-end one with a good keyboard and a decent screen. This was a mostly uncontroversial write-up on my experience using a fifteen year old MacBook Pro for writing this blog, among a couple of other tasks.

But then I saw Cathode Ray Dude’s video on HP’s QuickLook, and my head was thrown in a flat spiral straight into madness. Despite the A plot being about an absolutely heinous abuse of UEFI and an eldritch nightmare of stopping Windows’ boot process in order to get to your email slightly faster, it was the B plot that put my undies in a twist: The second half of the video is this wonderful opinion piece hinging on the fact that for most people, computers are pretty much at their endgame: for most normal applications, a computer from a decade ago is indistinguishable from another one that came out last year. I highly recommend you watch that video as well.

While I could conceptually wrap my head around it, as the resident turbo-nerd in my group of friends I have been chasing the bleeding edge for years, if slightly hampered by budget constraints. The idea of an “office-use” computer from fifteen years ago still being perfectly cromulent seemed absolutely insane: after all, office-type work is pretty much everything I do on a computer these days outside of my hobbies.

So I set out to prove it, and along the way I stumbled across new perspectives, a fresh dread regarding late capitalism, and maybe some lessons for the less tech-savvy.

The setup

To put this thesis to the test, the experiment was simple: use an old computer for daily tasks and see how we fare. For this, I chose an Apple MacBook Pro from mid 2009. It sports a dual-core Intel Core 2 Duo P8700, a crisp 1280×800 display with amazing colors, and a surprisingly well-kept exterior. Inside, I made some modifications to improve my chances:

  • The RAM was upgraded from the factory 2GB of 1066MHz DDR3 to 4GB.
  • The battery was replaced, as the old one had died.
  • The original 250GB 5400RPM hard drive was replaced by a 250GB SATA SSD. It’s DRAM-less and I got it for cheap, but it turned out to be more than enough.

This wasn’t some lucky find either: it had sat in my junk bin for a while, and you can find many (usually better) Intel Macs for cheap pretty much anywhere that sells used goods, but it made a good starting point for this experiment. The new parts came cheap too, mostly scrounged from other devices; only the battery pack was bought new, and overall I spent around 100 USD.

Obviously, this particular computer is officially obsolete: new builds of macOS haven’t existed for it in a while, and, unwilling to induce a blind rage by wrestling with deprecated software, expired SSL root certificates, and poor performance, I loaded it with Ubuntu 22.04 LTS as its operating system. There are perhaps better choices performance-wise, but it will do for testing out this theory.

The realization

The idea behind all of this was using this machine as I would my main laptop (which I bought in 2022) for office-related tasks. This means essentially:

  • Writing for this blog.
  • Watching YouTube videos.
  • Writing university assignments.
  • Watching media for sorting my library.

All of this had to be done without significant sacrifices in performance and/or time spent, and it had to be done while using the laptop to its fullest: on the go, on battery, and while listening to music or videos in the background so my Gen Z brainrot wouldn’t get to me.

And yeah, it just works.

Sure, it’s not blazing fast, but it’s perfectly serviceable. Ubuntu offers plenty of productivity applications, and with most services relegated to the cloud, pretty much everything worked without problems in a web browser. Connecting to my NAS, wireless networking (something old-time Linux users will remember with absolute hate), even the infamous display drivers: everything came preinstalled with the OS or just worked out of the box, with minimal CLI nonsense, so even a standard consumer could get this experience without much hassle.

A lesson for nerds

Look, I get it. Computers are fun for us. We like to take them apart, put them through hell and back, create abominations for shits and giggles, and sometimes even turn ourselves into bona-fide data center administrators of our little kingdoms of silicon.

But for most people, computers are no more interesting than a pen, or a saw: it’s a tool.

No matter how much we complain about obscure CLI procedures, or endlessly pontificate about the inevitability of Linux on the desktop, let’s not deceive ourselves: we enjoy doing this, and we do it because it’s fun.

So great, most people just want to turn on their computers, use them to do their job, turn them off, and move on with their lives. But that still leaves a question: why is a 15-year-old computer still enough to do this? With the relentless push of technology, one would expect a continual state of progress, as has been the case for many years in the electronics sector.

But it isn’t: we have just demonstrated that you don’t need it. A Core 2 Duo turns out to be more than enough for office-related tasks, and that just doesn’t jibe with our collective idea of “who knows what the future holds?”

The assumptions of capitalism

No matter what your opinions are regarding capitalism, it is undeniable that our modern society is fundamentally shaped by the forces that govern supply and demand, yet most of us seem to ignore that its axioms, the postulates we take as a given in order for capital to do its thing, do not always apply to all industries at all times, especially when they fail to properly account for human nature.

One of these core tenets is perpetual innovation: the idea that as humanity progresses, so does the economy; all market products are bound to get better over time, and the people who can adequately harness new technologies and techniques will be rewarded with capital.

But what if that idea is wrong? What if there is nothing to add to a product? What if we have made something that is so good, that there is no market pressure to innovate?

In classical economics, we would call these products commodities: goods whose value does not depend on their origin or manufacturer. Things like steel, wheat, and gold are all commodities; steel is steel, and no matter how much you revolutionize the steel industry, people will still want steel plates and beams. You can’t really innovate your way around that.

If you use your computer for writing essays, working spreadsheets, creating presentations, and the odd YouTube video here and there, computers peaked for you in around 2010. At that point, your computer did absolutely everything you wanted it to do and then some. Don’t believe me? Get a copy of Word 2010 and you’d be amazed at what it can do. Wanted the full web experience? Fully HTML5 and JavaScript powered pages run fine on computers from 20 years ago. Spreadsheets? Have you seen Excel? And of course, if you needed something more, the Linux CLI was very much mature by that point.

The “Office PC” has been a commodity for more than a decade.

Implications and a call to action

This ubiquity of raw computing power gives us turbo-nerds an opportunity: there is pretty much no computer from the last 15 years that cannot be put to some use. Webserver? No problem. Minecraft server? Sure, my first one was on an old Vaio laptop from 2011. NAS? Yeah, especially if it has USB 3.0.

We live in a world where everything is absolutely disposable. Things are meant to be used and then absorbed into the void of uselessness. The idea that our trash goes somewhere is alien to pretty much anyone in the western world. These computers, however, show us that they don’t have to end up there: new life can be breathed into these devices.

So please, if you can, rescue these devices from landfill. Get some SSDs and some extra RAM and fix them up. Load them with current OSes and software, do goofy things with them, give them out as gifts, or sell them for a profit on eBay. Give a Linux machine to your little brother or sister or cousin, use them as embedded devices; the sky’s the limit.

We look to devices like Raspberry Pis as the be-all end-all of tinkering computers, but compared to any x86_64 computer from the last 15 years, Pis are exceedingly anemic. The form factor is compelling, but I would argue that unless you have exceedingly stringent size constraints, any laptop motherboard can give you better results. If you get some of the better ones, you can even toy with graphics acceleration, PCIe peripherals, and so much more.

Conclusions

Computers have gotten insanely powerful in the last decade, but every new generation of processors and graphics cards and RAM feels less like a quantum leap and more like an incremental improvement, each stepping stone a little closer to the last one than the previous was.

This is not stagnation, however; it’s just that computers have gotten too good for their own good, and capitalism sort of fails when innovation is not desirable. More powerful computers simply make no sense anymore for the average user.

I’m not falling for the Thomas Watson trap here: there will be a time when more powerful computers are a necessity once again, but that time is not now. What it is time for, however, is giving life to those old computers: they are not dead, and they deserve new chances at life as long as we can keep them running; there is just too much power on tap to leave it rotting in a landfill for the rest of eternity.

So, just for a moment, forget about Raspberry Pis, Chromebooks, and NUCs, and go get a laptop from 5 to 10 years ago: you’ll probably get the same performance for less than half the price, and you’ll save some silicon from hitting the trash before its time truly comes. Computers have become a fundamental tool for communicating and interacting with humanity at large; if we can get them to more people, maybe a different world is possible.

Categories
Tech Explorations

Dirt-Cheap Livestreaming: How to do professional quality streaming on a budget

A couple of years ago I wrote an article on how I cobbled together livestreaming hardware at the very beginning of the pandemic. Finding AV equipment was very difficult, so I did what I could with what I had. Almost three years have passed since then, and in the meantime I built a multi-camera, simulcast-capable, live-event-oriented streaming setup on a shoestring budget. Many compromises were made, many frustrations were had, but it worked.

This is how I built it and made it work.

Why bother changing it?

After my initial stint doing a couple of livestreams for small events, new requirements kept popping up. For simple one-camera setups my equipment would do, but it quickly started falling short, and as venues slowly filled back up with people, I could no longer rely on building the event around the camera. I had to find a way to do this without being intrusive, while still fulfilling my clients’ needs.

Lessons from the previous setup

The setup I described in the previous writeup was not free of complications. On the video side, I was limited by a single HDMI input, and any switching solutions I had were too rough for my geriatric capture card; it would take way too long to reacquire a picture. On the audio side, my trusty USB interface was good, but way too finicky and unreliable (the drivers were crap, the knobs were crap, and while I got it for free it just wasn’t worth the hassle) for me to be comfortable using it for actual work. I also had a single camera, very short cable runs, no support for mobile cameras, and an overall jankiness that just would not cut it for bigger events.

A new centerpiece: ATEM Mini

My first gripe was my capture card. I was using an Elgato Game Capture HD, a capture device I bought back in 2014 which could barely do 1080p30. I still like it for its non-HD inputs (which I have extensively modified; a story for another time), but the single HDMI input, the long acquisition times, and the near-three-second delay in the video stream made it super janky to use in practice.

After a month and a half on a waiting list, I managed to get my hands on a Blackmagic ATEM Mini, the basic model, and it changed everything: it has four HDMI inputs, an HDMI mix output, a USB-C interface, and two stereo audio inputs, along with fully fledged live video and audio processing: transitions between sources, automatic audio switching, picture-in-picture, chroma and luma keying, still image display, audio dynamics and EQ control, and so much more. Its rugged buttons and companion app make operating the ATEM Mini an absolute breeze, and its extensive functionality and integration make it the closest thing to an all-in-one solution. Many things that I used to do with lots of devices and maximum jankiness were consolidated into this one device. Anyone who is getting into livestreaming should get one of these.

Dejankifying my audio

Having good audio is almost more important than having good video: a stream with mediocre video but good audio is serviceable, while one with good video and bad audio is almost unbearable. Because most events I work on have live audiences on-site, there is no need for me to handle microphones or other audio equipment directly: most mixers have a secondary output I can tap to get a stereo mix that I pipe into the ATEM Mini. Line-level unbalanced sources can go straight into the ATEM, and if I need something more involved, like multiple audio sources, preamplification, or basic analog processing, I keep a small Phonic AM440D mixer in my equipment case, which gives me endless flexibility for audio inputs.

One of the advantages of using common hardware for audio and video is that both streams are synchronized by default, which entirely removes the need to delay the audio and once again reduces the complexity of setting up livestreams in the field.

New cameras and solving the HDMI distance limitation

For a while, a single Panasonic HDC-TM700 was my only video source, with an additional Sony camera on loan for some events. This was one of my biggest limitations, which I set out to fix.

Most semi-pro and pro cameras are way too expensive for my needs; even standard consumer cameras are out of my budget. A single camera like the one I already have would cost a couple of months’ worth of revenue, which, given that I’m still at university, I couldn’t justify. There are ways out, though.

For one, I thought about USB webcams. There are some good ones on the market right now that are more than enough for livestreaming, but they are very much on the expensive side and I have never liked them for something like this: poor low-light performance; small, low-quality lenses with fixed apertures; and low bitrates are just a few of my gripes. Also, I now had a better capture device that could take advantage of HDMI cameras. So I looked around AliExpress and found exactly what I was looking for: process cameras.

A process camera is essentially a security camera with an HDMI output. They have no screen, fixed (although decent quality and reasonably large) lenses, and a perfectly usable dynamic range. Since they do not have a screen or autofocus, they are best used for fixed shots, but most of my streams rarely require movement (and for that I have the other camera). Best of all: they were very cheap, at around $100 apiece including a tripod.

Now, we need to talk about HDMI. It’s a perfectly good standard for home use, but it has some problems in this use case (which we can forgive, as this is very much an edge case), the biggest one being maximum distance. HDMI rarely works above 10m, and even 5m is challenging without active cables and devices that can actually drive runs that long. There are optical cables which can get you past the 10m mark, but these are expensive, bulky, and stiff, which complicates using them in events where they could end up in the way. The solution is somewhat counter-intuitive: just don’t use HDMI cables. But isn’t HDMI what all of this equipment speaks? Yes!

See, just because we’re using HDMI signals doesn’t mean we need to adhere strictly to the electrical specification, as long as we can get the message across by converting it to a physical medium better suited for long distances. There are many ways of doing this: some use coaxial cable and HD-SDI, others use simple fiber optic patch cables, but I went for plain twisted-pair Cat5e. It’s cheap, it’s available, and there are ready-made converters with an HDMI connector on one side and an 8P8C plug on the other. Add a 3D-printed bracket for mounting the converter on the side of the camera and some short HDMI patch cables, and we’re off. With these converters I can do 25m runs no problem, and even 75m in extreme cases, which is enough for most venues.

This was not the only use for a 3D printer: I made custom power bars which hang from a tripod’s center hook, for powering cameras and converters.

Better networking and server equipment

In my previous article I used a Raspberry Pi 3B+ to run an NGINX server with the appropriate RTMP module and some extra software to make a simulcasting server, where a single stream can feed multiple endpoints. This worked great, but Raspberry Pis are a bit anemic and I wanted something with a bit more oomph in case I wanted to do more with it. The idea of a portable server is useful to me beyond streaming, so I grabbed a 2011 Mac Mini on Facebook Marketplace, swapped the hard drive for an SSD, and off I went. The additional RAM (4GB instead of just 1GB) lets me run more services without worrying about resources, and the beefier Intel processor gives me more freedom to run concurrent tasks. There is even some QSV work I could do to get hardware encoding and decoding, but that’s a story for another time.
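For reference, the heart of a simulcasting setup like this is just a few lines of configuration for the NGINX RTMP module, something along these lines. The endpoint URLs and stream keys are placeholders, not my actual configuration, and your module version and paths may differ:

    # Minimal sketch of the simulcast idea: one incoming stream, re-pushed to
    # several endpoints. URLs and keys below are placeholders.
    rtmp {
        server {
            listen 1935;

            application live {
                live on;
                record off;

                # The encoder publishes to rtmp://<server-ip>/live/<key>,
                # and nginx forwards that stream to each endpoint:
                push rtmp://<endpoint-one>/app/<stream-key-1>;
                push rtmp://<endpoint-two>/app/<stream-key-2>;
            }
        }
    }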

I also ditched my 16-port rackmount switch in exchange for a cheap Netgear WNDR3400v2 wireless router, which gives me a nice hotspot for connecting my phone or in case someone else needs it; the new router is much lighter too.

A portable camera jank-o-rama

For a couple of scenarios, I really needed a portable camera that was fully untethered, maybe for showcasing something or keeping an eye on the action while on the move. There are some wireless HDMI solutions, but it always felt like losing a good camera for an entire shoot (I usually run a one-man operation, so the portable camera was pretty much always for a short segment), and the cost argument kept popping up.

The way I solved it is to me as janky as it is genius: just use your phone. Most modern phones have excellent cameras, decent audio, and even optical stabilization. I used Larix, a streaming app, to stream from the phone over WiFi (see why I needed a wireless router?) so it could be picked up by OBS. Unreliable? A little bit. Has it ever mattered? Not really; this capability is more of a novelty and a fun thing to add to my repertoire than a centerpiece. I have even toyed with a GoPro Hero 7 Black streaming to my RTMP server and picking it up from there, which works, albeit with lots of lag. It’s a bit of a pain not having it on my ATEM switchboard and having to switch it in OBS, but, you know, it’ll do.

Miscellaneous

Until now I carried everything in a duffel bag, which just wasn’t going to work anymore: the weight killed my back any time I went near the thing, and there just wasn’t enough space. I needed something like the big wooden cases the pro audio industry uses, without breaking the bank, so I just took an old hard-side suitcase and crammed everything into it. It’s big enough to house most of my stuff but not so big as to be bulky, and it lets me keep everything tidy without wasting space.

Because my new cameras don’t have a screen, setting up the shot and focusing can be a challenge. I used to resort to my second monitor for this, but it was always janky and time consuming. To solve it, I bought a CCTV camera tester with an HDMI input: essentially a small monitor with a built-in battery, for way less than a professional field monitor.

I needed lots of cables, some of them really long. I ended up buying rolls of power and Cat5e cable and making them myself. My standard kit includes four 25m Cat5e runs and a 75m one in case the network jack is far away, plus three 20m extension cords so I can place the cameras wherever I want. That’s not counting the three power bars for the cameras and a fourth one for my computer.

So what comes next?

To be absolutely honest, I think this is as far as this setup goes. Livestreaming jobs have dried up now that the pandemic has quieted down, and pursuing more stable ventures would require lots of investment, which I’m not really in a position to make. I found a niche during the pandemic and milked it as much as I could; the equipment has paid for itself two or three times over, so I’m not complaining. But until I find the time for that YouTube channel I’ve always wanted to do, I don’t think this rig will see the light of day for a while.

Closing thoughts

I’ve had tremendous fun building up this setup, and for my uses it has proven itself time and again as dependable, if basic. Many of the lessons learned here apply to other streaming scenarios, and who knows, maybe they’ll give you some ideas to get creative with this medium.

Categories
Tech Explorations

Building a better Elgato Game Capture HD

Back in 2015 I got myself a brand new Elgato Game Capture HD. At the time, it was one of the best capture cards on the consumer market: it has HDMI passthrough, standard-definition inputs with very reasonable analog-to-digital converters, and decent enough support for a range of different setups.

Despite its age, I still find it very handy, especially for non-HDMI inputs, but the original design is saddled with flaws which prevent it from taking advantage of its entire potential. This is how I built a better one.

Using this card in the field

After a few months of using it to capture PS3 footage and even making some crude streaming setups for small events using a camera with a clean HDMI output, two very big flaws quickly became apparent. First, the plastic case’s sealed design and lack of any thermal management made it run really hot, which after prolonged operation resulted in dropouts that sometimes required disconnecting and reconnecting the device and/or its inputs. Second, the SD inputs are very frustrating: the connectors are non-standard, and the provided dongles are iffy and don’t even let you take full advantage of the card’s capabilities without tracking down some long-discontinued accessories.

My first modification to it was rather crude: after it failed on a livestream, I took the Dremel to it and made a couple of holes for ventilation, coupled with an old PC fan that I ran using USB power (the undervolting of the fan provided enough cooling without being deafening). This obviously worked, but it introduced more problems: the card now made noise, which could be picked up by microphones, and it now had a big gaping hole with rotating blades that was just waiting to snatch a fingernail. This wouldn’t do.

Solving thermal issues

It quickly became clear that the original case for the Elgato Game Capture HD was a thermal design nightmare: it provided no passive cooling whatsoever, with neither heatsinks nor vents. The outer case design was sleek, but it sacrificed stability along the way.

This device is packed with chips, all of which provide different functions: HDMI receivers and transmitters, ADCs, RAM, and plenty of glue logic, which meant that power consumption was going to be high. A custom LSI solution or even an FPGA could have been better in terms of power consumption, but that is often way more expensive. Amongst all of the ICs, one stood out in terms of heat generation: a Fujitsu MB86H58 H.264 Full HD transcoder. This chip does all the legwork of picking up the video stream, packaging it into a compressed stream, and piping it through a USB 2.0 connection. It was pretty advanced stuff for the time, and it even boasts about its low power consumption in the datasheet. I don’t know exactly why it runs so hot, but it does, and past a certain threshold it struggles and stutters to keep a video signal moving.

There was nothing worth saving in the original enclosure, so I whipped up a new one in Fusion 360 with plenty of ventilation holes and enough space above the chip to add a chipset heatsink from an old motherboard. I stuck it down with double-sided tape, which is not particularly thermally conductive, but along with the improved ventilation it is enough to keep the chip from frying itself into oblivion. I ran another protracted test, and none of the chips got hot enough to raise suspicion; even after three hours of continuous video, the image was still coming through correctly. I initially thought other chips might need heatsinks too, but it appears that the heat from the transcoder was what pushed everything over the edge; with it tamed, the other ICs barely got warm.

Since we made a new enclosure, let’s do something about that SD input.

Redesigning the SD video inputs

This card hosts a very healthy non-HDMI feature set: it supports composite video, S-Video, and Y/Pb/Pr component video, along with stereo audio. The signal is clean and the deinterlacing is perfectly serviceable, which makes it a good candidate for recording old gaming consoles and old analog media like VHS or Video8/Hi8. However, Elgato condensed all of these signals into a single non-standard pseudo-miniDIN plug, which mated with the included dongles. Along with a PlayStation AV MULTI connector, it came with a component breakout dongle which allowed any source to be used; with the included instructions you could even get composite video this way. S-Video, however, was much more of a pain: while it was possible to connect an S-Video plug straight into the jack, that left you without audio, and the official solution was to purchase an additional dongle which, by the time I got the card, nobody sold anymore.

To solve it, I started by simply desoldering the connector off the board. I saw some tutorials on how to modify S-Video plugs for the 7-pin weirdness of the Elgato, and even considered placing a special order for them, but in the end I realized that it was moot. The dongles sat very loosely on the connector, and any expansion I wished to make on it was going to be limited by that connector, so I just removed it.

To the now-exposed pads, I soldered an array of panel-mount RCA and S-Video connectors I pulled out of an old projector, so I could use whatever standard I pleased: three jacks for Y/Pb/Pr component video, a jack for S-Video, a jack for composite video, and two jacks for stereo audio, complete with their proper colors too. The SD input combines the different standards into a single three-wire bus: Pb (component blue) is also S-Video chroma (C), Pr (component red) is also composite video, and Y (component green) is also S-Video luma (Y), so some of the new connectors are electrically tied to each other. Still, I much prefer that to having to remember which one is which, or keeping track of adapters for S-Video (which I use a lot for old camcorders).

Final assembly and finished product

After printing the new enclosure I slotted in the board (it was designed for a press fit with the case, to avoid additional fasteners) and soldered the new plugs to the bare pads of the old connector using thin wire from an old IDE cable. The connectors were attached to the case with small screws, and the design places all of them on the bottom side of the case, which means no loose wires. The top stays in place with small pieces of double-sided tape and some locating pins, which makes disassembly easy: great for future work or just showing off.

I wish this was the product I received from Elgato. It allows the hardware to work to its true potential, and it makes it infinitely more useful in daily usage. No more faffing around with dongles, no more moving parts, or dropouts on a hot day. It feels like this was what the engineers at Elgato envisioned when they came out with this thing. The Elgato Game Capture HD is now my main non-HD capture device and even for HDMI stuff it still gets some usage, when I can’t be bothered to set up the ATEM switcher.

Finishing thoughts

I love the Elgato Game Capture HD, both for what it is capable of doing and for what it did for the nascent streaming and video creation scene back in its day. I love its feature set and I’m even fond of its quirks, but with this mod I feel like I have its true potential available without compromises. It went from a thing I kinda knew how to use that stayed in the bottom of my drawer to a proven and reliable piece of equipment. If you have one of these devices and feel unsatisfied with its performance, I urge you to give this a try; you will no doubt notice the difference, and maybe you’ll keep it from going into the bin.

Categories
Tech Explorations

A server at home: childhood fantasy or genuinely useful?

Ever since I was a child, I dreamed of having that sort of high-speed, low-drag, enterprise-grade equipment in my home network. For me it was like getting the best toy in the store, or having the meanest roller shoes in class (a reference which I guess dates me). It was as if getting these devices would open my world (and my internet connection) up to things known only to IT professionals and system administrators; some hidden knowledge (or class cred, maybe) that would bring my computing experience to the next level.

Anyone who has ever worked in the IT field knows the reality of my dreams: while the sense of wonder is not entirely erased, the reality of maintaining such systems is at best dull. But, you know, I’m weird like that. If I weren’t, you wouldn’t be here.

Almost a decade ago, I embarked on a journey of building and maintaining a home server. It has solved many of my problems, challenged me to solve new ones, and taught me immensely about the nuances of running networks, maintaining machines, building solutions, and creating new stuff. This is my love letter to the home server.

This article is somewhat different from most of my other content: it’s more of a story than a tutorial. It’s not really meant to be a guide to building or maintaining servers at home; it’s more of a way to understand the rationale behind having one.

Baby steps

As the designated “IT guy” in my friend group, I often found myself helping my friends and family with their computing needs. I started out helping others install software, customize their computers, and use basic programs like Office. We also gamed some: whatever we could pirate and get to run on crappy laptops. Our friend group was big into Minecraft at the time, as most kids were, and we loved showing off our worlds, approaches, and exploits to each other, sharing our experiences in the game. One day, the inevitable question came: what if we made a Minecraft server for all of us to play on?

The writing was on the wall, so I set out to make one. At the time I was rocking the family computer, a respectable 21.5″ 2013 iMac. It was beefy enough to run the server and a client at the same time, and paired with LogMeIn’s Hamachi (which I hated, but I didn’t know any better), a highlight of my childhood was born. It barely worked, many ticks were skipped and many crashes were had, but it was enough for my group of friends and me to bond over.

Around the same time my parents bought a NAS, an Iomega StorCenter sporting a pair of striped 500GB hard drives for a whopping 1TB of total storage. Today that sounds quaint, but at the time it was a huge amount of space. For many years we kept the family photos, our music library, and limited backups on it. It opened my eyes to the possibility of networked storage, and after an experiment with a USB printer and the onboard ports on the NAS, I even toyed with providing basic services. An idea was forming in my head, but I was just starting high school and there was pretty much no budget to go around, so I kept working around the limitations.

At least, up to a point. Within a couple of months, both the RAID array in the Iomega NAS and my iMac’s hard drive failed, with no backups to recover from. It was my first experience with real data loss, and many memories were forever wiped, including that very first Minecraft server world. Most of our family stuff was backed up, but not my files; it sucked. It was time for something new.

Building reliable servers from scrap

I was still in high school; there was some money to go around, but nowhere near enough for me to get a second computer to keep online around the clock. So I went looking for scraps, picking up whatever people were willing to give me and building whatever I could with it. My first experiments were carried out on a geriatric early-2000s AMD Athlon from a dump behind my school, which wasn’t really up to the task of doing anything, but it taught me valuable lessons about building computers and what that entailed. My first real breakthrough came around 2015, when I found a five-year-old, underpowered Core i3 tower with 4GB of RAM sitting in a dumpster outside an office block near my house. After a clean install of Windows and some minor cleaning I had, at last, a second computer I could use as a server.

I didn’t know much about servers at the time, which meant that my first incursion was basically an extension of what I’d seen before: using SMB to share a 1TB drive I’d added by removing the optical drive, plus a hard drive rescued from a dead first-gen Apple TV. I added a VPN (ZeroTier One; it’s like Hamachi but good), printer sharing, and VNC access, and pretty soon I was running a decent NAS.

I added a second 1TB drive a few months after (which involved modding the PSU for more SATA power ports) and some extra software: qBitTorrent’s web interface for downloading and managing Torrents from restricted networks (like my school’s), automatic backups using FreeFileSync, and a few extra tidbits. I even managed to configure SMB so I could play PS2 games directly from it, using the console’s NIC and some homebrew software.

This was my setup for around four years, and it did its job beautifully. Over time I added a Plex server for keeping tabs on my media, and I even played around with Minecraft and Unturned servers to play with my friends. Around 2019, though, I was starting to hit a bottleneck. Using a hard drive as boot media for the server was dog slow, and I had run out of SATA ports for expanding my drive roster. I had been toying with the idea of RAID arrays for a while, especially after losing data to faulty drives. Mirroring was too expensive for me, so my method of choice was level 5: single parity distributed across all drives, tolerating a single drive failure. I just needed a machine capable of doing it. For a while I wondered about buying HBAs, tacking the drives onto the old hardware, and calling it a day. I ended up doing something completely different.

At last, something that looks like a server

In the end I decided that a better idea was to upgrade the motherboard, processor, power supply, and a few other things. I added a USB 3.0 card for external drive access, upgraded the processor from a Core i3 240 to a Core i5 650, got a motherboard similar to the old one but with six SATA ports, and picked up four 2TB video surveillance drives for dirt cheap, along with a beefier power supply to tie it all together. Around this time I also got a Gigabit Ethernet switch, which vastly increased throughput for backups. It was mostly used, bottom-of-the-barrel stuff, but it allowed me to create a RAID-5 array with 6TB of total storage and gave me slightly more room to expand my activities. Lastly, I replaced the boot drive with a cheap SATA SSD.

With it came an actual Plex library, a deep-storage repository for old video project files, daily backups, a software library for repairs, MySQL for remote database work, and even a Home Assistant VM for home automation. I kept running servers and experiments on it. This second iteration lasted me around three more years, which was more than I expected from what was essentially decade-old hardware running in a dusty cupboard next to my desk.

Soon enough, however, new bottlenecks started appearing. I was getting more and more into video work, and I needed a server that could transcode video in at least real time. Most CPUs cannot do that even today, so I was looking into GPU acceleration. I also started to suffer with Windows: it works fine for beginners, and I even toyed with Windows Server for a while, but it’s just way behind Linux distros for server work. It took up lots of resources doing essentially nothing, the server software I needed was clunky, and I missed the truly network-oriented logic of a UNIX-like OS.

Enterprise-grade hardware

Once again, I looked upon the used market. Businesses are replacing hardware all the time, and it’s not difficult to find amazing deals on used server hardware. You’re not getting the absolute latest and greatest stuff on the market, but most devices are not really that old, and more than capable for most home uses.

From the second-generation server I could still salvage the USB 3.0 card, the power supply, the boot drive, and the four RAID drives. All of those had been bought new and were in excellent condition, so there was no need to replace them. I wanted something that would last me at least the next five years, could accommodate all my existing hardware, and had plenty of room for expansion: PCIe slots for GPUs and other devices, proper mounting hardware for everything, and a case to keep everything neat.

I went for a tower server instead of a rackmount, mainly because I don’t have a place for a rack in my home and the long-and-thin package of most racked servers made no sense in my case. After a bit of searching I came upon the HP ProLiant ML310e Gen8 v2: an ATX-like server with a normal power supply, four drive bays with caddies, an integrated LOM, and even an optical drive (which, given that my server now hosts the last optical drive in the house, was a must). It was perfect. I also managed to score an NVIDIA GTX 1060 6GB for cheap, which is more than a couple of generations behind at this point but, most importantly for me, has NVENC support, which means transcoding HD video at a few hundred FPS with ease.

Building the current server

My third-generation server was built around the aforementioned HP tower, but many modifications had to be made in order to achieve the desired functionality. After receiving it, I swapped the PSU and modified one of the new unit’s accessory headers to fit the server’s proprietary drive backplane connector, so I could power the drives from the new power supply. Apart from increasing the maximum load from 350W to 500W, it also gave me the PCIe power connectors to drive my GPU, which the previous PSU lacked.

Then I installed the GPU and encountered my first problem: servers like these use only a couple of fans and a big plastic baffle to make sure the air reaches all the components on the board. This is a great idea in the server world (it reduces power consumption, decreases noise, and allows for better cooling), but it also interferes with the GPU: mine is not a server model, so it’s taller than a 2U card and the baffle cannot close. Not to worry though: a bit of Dremel work later, I had a nice GPU-shaped hole, which I made as small as possible so as not to disturb the airflow too much. The GPU’s shape worked in my favor too, as it redirects air perfectly onto the fanless CPU cooler.

Other than the four drive bays (in which I installed the 2TB drives) there isn’t much place for another boot drive, so I used the empty second optical drive bay to screw in the boot SSD. A lot of cable management and some testing later, I was ready to go.

For the OS, I went with Ubuntu Server 20.04 LTS. I’m very familiar with this family of Linux distros, so it made sense to use it here; my servers at work also run it, so I had some server experience with it as well. Once the OS was in, I installed the drivers for the LOM and the GPU and rebuilt the RAID-5 array using mdadm. I had been using Microsoft Storage Spaces for the array in the previous generation, so it had to be rebuilt cleanly under Linux: after dumping the data onto some spare drives and building the array, I was in business.

For the software, I installed Jellyfin (I was getting sick of the pay-to-win model of Plex and I wanted something with IPTV support), Samba for the shared folders, VirtualBox with the latest Home Assistant VM (don’t @ me about Docker; that version is crap and the supervised install is a pain in the ass, so I’m done with Docker for now), qBitTorrent, MySQL, ZeroTier One, OctoPrint, and of course, a Minecraft server. I also installed a few quality-of-life tools like btop, and thus my server was complete, at least for now.

The realities of living with a server at home

Despite my childhood aspirations, enterprise-grade hardware has a major flaw compared to home equipment: noise. I can’t blame the manufacturers too much; after all, these devices are made for production environments where no one really minds, or vast datacenters where noise is just part of the deal. I, on the other hand, like sleeping. The first time I turned the machine on I was greeted with the cacophonous roar of two high-RPM server fans. It dawned on me pretty quickly that this noise simply would not fly in my house, so I set about fixing it.

Unlike on desktop boards, the OS has no control over the fans: to the sensor package in Linux, it’s as if they didn’t exist. I did get some temperature readings, and there might be a way to address the fans via IPMI, but it just didn’t work right for me. The job of handling the fans falls to the iLO, HP’s name for its LOM: a tiny computer inside the server that allows for low-level remote management, including remote consoles and power cycling. The homelabbing community figured out years ago how to tune the fans down, and a bit of unofficial firmware later, mine calmed down to a reasonable 20% and I could sleep again. I took the opportunity to put a piece of tape over the POST buzzer, which had no right to be as loud as it was.

Closing thoughts

This has been a wild ride of old hardware, nasty hacks, ugly solutions, and wasted time, but in the end, the outcome is so much more than what I ever envisioned: a device that automates the boring bits of my daily routine, keeps all my stuff safely stored and backed up, and gives me access to all my services from wherever I happen to be. If you’re a nerd like me and willing to spend some time faffing around in the Linux CLI, I highly recommend you build yourself a home server, no matter the budget, the device, or the services.

Fast Track C600: Faults and Fixes

A few years ago, one of my high school music teachers came to me with a deal that was too difficult to pass up. He had just replaced his audio interface and wanted to get rid of the old one, which was, of course, faulty. Having known each other for a while, he knew I was into that sort of thing and had a decent chance of making it work. The device in question was an M-Audio Fast Track C600, a fantastic USB audio interface featuring four mic/line inputs with gain control, six balanced outputs, crystal-clear 24-bit/96kHz audio, low latency, and S/PDIF and MIDI I/O, along with many other tidbits and little details that make it a joy to use. It was way out of my price range, there was no way I could have afforded such a high-end device, and yet it was now mine, provided I could make it work in the first place. Today, we’ll delve into the adventure that was fixing it. Unfortunately, I didn’t take any pictures of the process, so you’ll have to take my word for all of this.

When I got home, I decided to plug it in and give it a shot, not expecting much. Instead of the usual greeting of flashing lights, I was met with darkness. It was completely dead, and my computer didn’t detect anything either, so clearly there was a hardware issue lurking inside. After opening it up, I was greeted by a myriad of cables routing signals back and forth between the two printed circuit boards inside, both of which looked pristine. No charring, no blown capacitors, no components rattling around the case. The C600, or GOLDFINGER, as the PCB ominously shouted at me in all caps (what I can only assume was the internal project name), looked as neat and tidy as the day it left the factory. A bummer, it seemed: this wasn’t going to be an easy fix.

A breakthrough

And so it sat on my desk, half disassembled, for months. For one thing, I was still learning the basics of electronics, so there wasn’t much I could do at that point; for another, I was just getting into the world of digital audio, and my little Behringer Xenyx 302USB was more than enough for what I was playing with back then.

Then one day, I decided to pull the lower board out entirely (that’s the one with all the important electronics; the upper one just holds the display elements, knobs, and buttons, along with the input preamps, none of which mattered at this point), plugged in the AC adapter (which I didn’t have, but a 5V wall wart from an old Iomega Zip drive matched the jack and voltage perfectly) and a USB cable, and started looking around the board.

At first, nothing really seemed to stand out, until after a while a smell of flux and solder caught my nose. For those who have never worked on electronics: it’s a very pungent, characteristic smell, and it usually means a component is running way too hot. I started feeling around with my finger until I found the culprit: a tiny 10-lead MSOP package only slightly bigger than a grain of rice. I didn’t know what it was at first, but with some big capacitors sitting around it, I assumed it was some sort of voltage regulator. The writing on it was tiny, though, and after much squinting I concluded that the markings read “LTABA”, which didn’t sound like a part name to me. A preliminary Google search came back inconclusive, as expected, even after adding keywords and switching things around.

But then it dawned on me. A few weeks earlier, while hunting for components on AliExpress, I had noticed that most sellers write out the complete markings of a chip in the listing, unlike vendors like Mouser who stick to the official part name. So I searched for our magic word and, lo and behold, there was my answer. The mystery chip was, as expected, a regulator: the LTC3407, a 600mA dual synchronous switching voltage regulator from Analog Devices. The mystery wasn’t over, however: the regulator is the adjustable type, so I had absolutely no idea what output voltages I was supposed to be looking for.

But Goldfinger had me covered. Etched into the silkscreen just a few millimeters from the regulator were three test pads, labeled “5V, 3V3, 1V8”. I assumed the 5V came from either the USB socket or the AC adapter, while the 3.3V and 1.8V (very common voltages for powering digital microelectronics) were handled by the dual-output regulator, stepped down from the 5V rail. A quick continuity check confirmed my assumptions. The pieces were starting to come together.

A (not so) temporary fix

For a regulator to get that hot, one of two things usually has to happen: either there’s a short circuit on an output rail, or the chip has an internal fault and needs replacing. I discarded the short theory fairly quickly just by measuring the voltages: when a short occurs, the regulator usually shuts the output off automatically and you read a voltage at or very close to 0V. In our case, the output voltages were jumping around erratically, nowhere near the values printed on the board. While this was a relief in the sense that the rest of the board was probably fine, it posed an even tougher question: what was causing this?

For a while I pored over the datasheet looking for an answer. At first I thought the problem was in the feedback circuitry (the network that sets the output voltage and lets the regulator correct it as the load changes), but that would only affect one of the two regulator channels, as each leg has its own feedback network. I also suspected the regulator’s external components (mostly capacitors and inductors), but again, that didn’t explain why both rails were bust.
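
For context, the way a regulator like this sets its output is dead simple on paper: a two-resistor divider feeds a fraction of the output back to the FB pin, and the chip adjusts until that fraction matches its internal reference (about 0.6V for the LTC3407, going by the datasheet). The resistor values in this little sketch are made up for illustration; I never traced the real ones on Goldfinger:

```python
# How a feedback divider sets a switching regulator's output voltage.
# V_FB is the LTC3407's nominal internal reference (~0.6 V per the datasheet);
# the resistor values below are hypothetical, not measured from the board.
V_FB = 0.6  # volts

def vout(r_top, r_bottom, v_fb=V_FB):
    """Output voltage with r_top from VOUT to FB and r_bottom from FB to GND."""
    return v_fb * (1 + r_top / r_bottom)

print(f"3V3 rail: {vout(280e3, 62.5e3):.2f} V")  # ~3.29 V
print(f"1V8 rail: {vout(125e3, 62.5e3):.2f} V")  # ~1.80 V
```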

So I decided to quit, sort of. I’m not an electrical engineer (yet), and without a proper schematic there was no way I could troubleshoot this PCB with the tools available in my house’s laundry room. So I ripped the regulator out (slightly brutal, as this package has a massive solder pad underneath to dissipate heat, which is pretty much impossible to desolder gracefully without a hot air rework station, which I don’t have), went to my local electronics store, and bought a 10-pack of LM317 linear adjustable voltage regulators. This million-year-old component is trivially simple to install, but being a linear regulator, it has a massive disadvantage: unlike the original part, which worked by switching the input voltage on and off very quickly, the LM317 lowers the voltage by straight-up dissipating the excess power as heat, which means higher power consumption. In practice, that meant hoping the USB port wouldn’t trip its overcurrent protection, duct-taping some heat sinks (salvaged from an old TV) inside the case, and wishing for the best. At least in my mind, this was all temporary. After soldering wires onto the board, adding the passives that set the output voltages, and admiring my horrendous creation, we were ready for a test run.
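
To put numbers on that “excess power as heat” point, here’s the kind of back-of-the-envelope math involved for an LM317 fed from the 5V rail. The formulas are the standard ones from the datasheet, but the load currents are guesses on my part (I never measured what each rail actually draws), and the resistor values shown are just plausible picks, not necessarily the ones I ended up with:

```python
# LM317 back-of-the-envelope: output voltage from the resistor pair, and the
# heat penalty of a linear regulator. Load currents below are guesses, not measurements.
V_REF = 1.25  # volts between OUT and ADJ, per the LM317 datasheet
V_IN = 5.0    # the 5 V rail feeding the replacement regulators

def lm317_vout(r1, r2, v_ref=V_REF):
    """Classic LM317 output formula, ignoring the tiny ADJ pin current."""
    return v_ref * (1 + r2 / r1)

def heat(v_in, v_out, i_load):
    """Power dissipated in the regulator: everything dropped across it becomes heat."""
    return (v_in - v_out) * i_load

for rail, r1, r2, i_load in [("3V3", 240, 390, 0.4), ("1V8", 240, 110, 0.3)]:
    v = lm317_vout(r1, r2)
    print(f"{rail}: ~{v:.2f} V, ~{heat(V_IN, v, i_load):.2f} W of heat at {i_load * 1000:.0f} mA")
```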

First light, second problem

As I plugged it in, I saw das blinkenlights flashing at me for the first time. I was overjoyed when my computer recognized a new USB device. It was alive at last, but the battle was only halfway through.

For one, it turns out that Goldfinger doesn’t take kindly to USB hubs or USB 3.0 ports. Both the official and unofficial documentation warn the user to stay away from these apparent evils and stick strictly to USB 2.0. Luckily, my workhorse laptop still includes a USB 2.0 port, which has given me no issues so far.

I had installed the “latest” drivers (version 1.17, dated mid-2014) available officially from the manufacturer’s website, and they gave me issues from the beginning: instability with the Windows sound APIs, clicks and dropouts on ASIO, bluescreens if the device was unplugged, bluescreens for no reason at all, poor hardware detection, you name it. After digging through what’s left of the M-Audio forums, I found an unanswered post suggesting a rollback to a previous driver version. So I gave it a try, downloaded version 1.15 (also available on the drivers site), and installed it instead. And at last, it worked.

A quick review, finally

So I’ve been using this interface for about a year now, give or take, and it has been a dream to work with. I’ve used it to record live gigs as well as snippets and experiments of my own creation, and I’ve even used it a few times for livestreaming.

For me, it’s a perfectly adequate device for the kind of work I do, especially for free ninety-nine. The user experience could use a tweak or two, especially the squishy knobs and the weirdly sensitive gain pots, but the build quality is solid, the connectors are a joy to use, and the included software, finicky as it is, is powerful if you’re willing to respect its quirks.

Closing thoughts

While this turned out to be a massive project in both time and scope, I learned a few important things. First and foremost, never turn down free stuff, even if it’s broken: it turns out most people throw things out even when the fix is simple, and repairing them is good for the environment and usually cheaper than buying new. Second, just because the device you’re trying to fix uses some high-speed component doesn’t mean a 50-year-old part can’t stand in for it.