
A house run by computers: making all of your IoT devices play nice with each other

The current state of the Internet-of-Things scene can sometimes be mind-boggling: incompatible ecosystems, an endless reliance on cloud services (which will be, and have been, shut down), and an uncomfortable feeling that you’re not quite in control of what your devices are doing. Then I got Home Assistant, and everything changed. This is a story about how a smart home should and shouldn’t be run, along with a few tricks that will hopefully save you some blood, sweat, and tears.

The E in IoT stands for easy

If you’re into home automation, then you are probably aware of the absolute mess that the market has become: every brand has its own ecosystem, there’s poor integration between services, and everything relies on the cloud all of the time: turning on a switch from 5 meters away usually involves a round trip through a server hundreds or thousands of kilometers away.

If you want to make an Ecobee thermostat work with a Sonoff temperature sensor, you’re out of luck. Local network control? Nope. Custom hardware? Not really. While most of the internet has coalesced around open and flexible standards like IP, DNS, and TCP, the smart home situation has been plagued by the xkcd standards problem: incompatible ecosystems, walled gardens, proprietary protocols, and an overall sensation that companies are prioritizing their own profit margins over trying to create a sustainable market.

Speaking of sustainable, why the hell does everything have to talk to the cloud? There is very little computing power involved in turning on a relay, so why do I have to tell Tuya or Google Nest or eWeLink or whoever that I’m turning on my heating or my room lamps? What will happen when, not if, these companies decide to retire these services? Are we all doomed to be left with useless pumpkins the second this market stops making money?

If you think I’m exaggerating, this has already happened before. Samsung went all in with an entire smart home ecosystem called SmartThings, which claimed, like all of them do, to be your one-stop-shop for all of your home automation needs. This was scaled back after they realized how much of a pain in the ass all of this is to maintain, breaking compatibility with many devices. I still have a v1 SmartThings Zigbee/Z-Wave hub that I cannot use because it’s not supported anymore: the hardware is perfectly fine, but that’s what Samsung decided, so we’re all fucked.

Even well-intentioned endeavors to standardize, like Matter, Thread, and Zigbee, have each become their own little niche because none of them is an actual all-in-one solution; they are just puzzle pieces: transport protocols, physical networking standards, computing services, whatever. They all have to talk to each other to work, and making that happen is usually left to the end user.

Home Assistant

In comes Home Assistant: an open-source project that aims to put this madness to an end. It’s a whole operating system built on Linux, designed to be a truly complete smart home hub that can communicate with everything. By giving all of your devices a standard interface into HA, you essentially get a single control point for all of your smart-home stuff.

We could spend days talking about all of the ins-and-outs of Home Assistant, but this is the gist of it:

  • Home Assistant runs on a device with access to your local network: it can be a Raspberry Pi, a virtual machine, or even the custom hardware solutions Home Assistant has to offer.
  • Each ecosystem connects to HA via an integration: these blocks talk to cloud APIs or local devices in order to communicate with a specific service. There are lots of officially supported ones, as well as some very good community implementations.
  • Integrations usually generate devices, which in turn generate entities. An entity is the basic building block of Home Assistant: every sensor, switch, light, or whatever else you want is represented as an entity. Entity data is stored in a local database and can be shown on custom dashboards.
  • Integrations also expose actions, which execute procedures on command: refreshing data, actuating a switch, turning on a lamp, and so on. There are standard actions for the more common entity types, but the possibilities are endless.
  • You can create helpers, which further process information coming from your entities; helpers are entities themselves.
  • Entities can be analyzed and processed using automations, which run instructions based on the state of different entities. These instructions are actions executed on devices or entities.
My currently installed integrations.

Overall, it’s a fairly simple system, but it’s also highly scalable: you can make this as complicated as you want, as long as you follow these basic rules. Do you want to turn on lights with the sun? Create an automation that triggers on the sun entity at sunrise and sunset and actuates the light switch. Do you want to make an automated sprinkler system? Sure, make helpers for all the parameters and have an automation switch the relays for the sprinkler valves in order. Do you want to fire a notification to your phone when a temperature sensor reads under 19.2 degrees, but only during the evening and only if there are no dishes in your dishwasher? Sure, though I don’t know why you would; I’m not here to judge. As long as it has some sort of connectivity, chances are you can make it work with Home Assistant.
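To make that first example concrete, here is roughly what the sunset automation looks like in YAML (you can also build the same thing in the UI); the entity ID is made up for illustration:

```yaml
# Minimal sketch of the sunset example. light.porch is an illustrative
# entity ID; use whatever your light integration actually exposes.
automation:
  - alias: "Porch light at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.porch
```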

A sample trigger setup for an automation.
A sample action setup for an automation.

Why isn’t this the norm?

Well, there are complications to this, mostly stemming from the fact that Home Assistant isn’t exactly maintenance-free: you need a device running the OS locally, which will require some tinkering. Also, as your instance becomes more complicated, your points of failure sprawl along with it, and while HA is overall fairly stable, it does throw the occasional tantrum.

There are also some companies that have locked down open access to their APIs in the name of “customer safety”. This is usually a measure to make their ecosystems even more of a walled garden, so I would recommend simply avoiding these products: they are not in favor of right-to-repair, and I frankly have no sympathy for them. I do, however, recommend looking around for custom integrations that restore functionality in case you’re stuck with such a device.

There are also many finicky steps to get some integrations to work: handing over login credentials is one thing, but that’s sometimes not enough; sometimes you need API keys, sometimes OAuth tokens, sometimes other stuff. These things are usually well documented, but the settings are often buried under layers of menus and interfaces that feel like afterthoughts. The worst one for me is LocalTuya, a custom integration for Tuya devices that allows for, admittedly, a very useful increase in functionality over the official Tuya integration, but it takes many steps to get the API working, and the entities have to be set up by hand, without much in the way of help. I only have a single device at the moment, but configuring 12 entities blind was an absolute nightmare, and my stomach turns a little when I think of adding more devices.

I also have some issues with using SQLite as the main database. Sure, it’s easy and fairly uninvolved to get working, but my Home Assistant database has corrupted itself one too many times without me doing much of anything. I pointed the recorder at my local MySQL server about a month ago and the database has been much more reliable since.
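If you want to do the same, the change is roughly a single block in configuration.yaml pointing the recorder at your server; the host, credentials, and database name below are placeholders:

```yaml
# Point the recorder at an external MySQL/MariaDB server instead of SQLite.
# Host, credentials, and database name are placeholders.
recorder:
  db_url: mysql://hass:YOUR_PASSWORD@192.168.1.20:3306/homeassistant?charset=utf8mb4
```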

One of my custom dashboards for monitoring devices.

Making custom devices: ESPHome

If you’re weird like me (and let’s face it, if you’re reading this you already are), you probably want to plug everything into Home Assistant, even those things that weren’t really meant to be connected to a network. For me, it was a pool, but it can be anything that has sensors and/or actuators of some kind.

I have a pool with a solar thermal collector, a lamp, and a copper electrolytic cell. This means there are quite a few variables we would like to integrate into HA:

  • Temperature values for input, panel return, and output pipes. This not only allows me to get the current pool temperature, but also to get rough estimates about panel power capture.
  • Actuating relays for turning on the pool pump and lamp.
  • Controlling an H-bridge connected to the copper cell, allowing for control of voltage and polarity of the cell.

Obviously there isn’t an off-the-shelf device that can do exactly this, and while you could cobble it together from various devices, it would be a janky mess, so building a custom IoT device actually makes some sense here.

In comes ESPHome: using the Espressif ESP32 or ESP8266 family of microcontrollers, Home Assistant can create custom devices using simple YAML config files and a huge catalog of supported components. Just connect the sensors and actuators you want to the ESP, set everything up in the config file, and that’s it: you have your own IoT device, and a surprisingly flexible one at that. Sensors can be filtered, automations can be configured directly on board the controller, and so much more. I plan on doing a detailed review of the ESPHome suite at another time, but suffice it to say that it lets you make absolutely anything that can reasonably be connected to a microcontroller HA-capable.
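To give you an idea of how little is involved, this is roughly what a stripped-down config for a device like my pool controller looks like; the board, pins, and names here are illustrative rather than my actual setup:

```yaml
# Stripped-down ESPHome sketch: two temperature probes and a relay.
# Board, GPIO pins, and entity names are illustrative.
esphome:
  name: pool-controller

esp32:
  board: esp32dev

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:  # native connection so Home Assistant can discover and control it

dallas:
  - pin: GPIO4  # 1-Wire bus for the DS18B20 temperature probes

sensor:
  - platform: dallas
    index: 0
    name: "Pool Inlet Temperature"
  - platform: dallas
    index: 1
    name: "Panel Return Temperature"

switch:
  - platform: gpio
    pin: GPIO16
    name: "Pool Pump Relay"
```

Once the device is flashed and on the network, every sensor and switch in that file shows up in HA as a regular entity.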

Both of these devices are custom ESP32 implementations running complex tasks of data acquisition and actuation.

Why hasn’t anyone made this simpler?

It boggles my mind a little bit that no one has come up with a more hands-off approach to this. For most intents and purposes, a hardened Home Assistant brick that can just sit in a corner and get online via the cloud subscription would be enough for your usual tech enthusiast.

Nabu Casa, Home Assistant’s main developers, have already started offering plug-and-play hardware in their Green and Yellow packages, but in my opinion there are way too many growing pains in HA for it to be a truly fire-and-forget solution: there is still way too much tinkering involved if you want to do everything HA is capable of.

So I wonder: why hasn’t there been a push to standardize IoT ecosystem interactions between different brands? Why have pretty much all IoT brands forgone any sort of interoperability? Well, money is the reason, but I wonder if this is a losing strategy: what’s the point of having three thousand apps to turn on an A/C unit and a couple of lamps? How does this nightmare of usability not impact sales of these systems?

While it probably does, the answer is that they don’t really care: they sell enough to be a revenue stream, but the extra work it would take to actually build a sustainable product is too much upfront cost to justify. In the end, all sustainable IoT ecosystems are passion projects: open-source and free software that challenges market incentives. There is an undercurrent of skepticism in my writing about tech, and it comes down to this: the market supports whatever is cheaper, not whatever is best, and there will be consequences if the tech sector keeps chasing cheap at all costs.

Closing thoughts

If you’re the tinkering type like me and you haven’t set up something like Home Assistant, please do! It’s genuinely useful and quite fun, but be prepared for a bit of head-banging: it doesn’t come for free. It’s now an essential part of my home, and it provides amazing data collection and intelligent operation, allowing for increased efficiency and automation, even if it came at the price of many hours of staring at YAML config files and corrupted logs.

So please, if you work in this sector, remember what has made our Internet work: open, flexible standards that work everywhere and with everything. If those principles can be applied to IoT, I am confident it will become a mass-appeal product.


Business in the front, party in the back: optimizing desktop setups for multi-PC applications

I recently started a new job as a network engineer, and with it came my first work laptop: a fairly decent Lenovo ThinkPad T14. While I am a fan of portability and uncluttered workspaces, I much prefer external input devices and a second monitor, especially at the desk where I usually work.

Luckily I do have all of these things: a nice keyboard, a big 4K monitor, and a very smooth trackball (come on, you’ve read my articles already, you know I’m a weirdo like that). They are, however, connected to my personal laptop, and I don’t have a big enough desk (or wallet, for that matter) to duplicate it all. Some sharing is in order.

In my desperation, I reorganized my desk with the help of a few gizmos, which let me quickly switch my input devices and monitor between laptops while maintaining independence between both systems, and in a way that doesn’t drive me absolutely crazy. This is how I did it.

Problems, half measures, and a sore back

My job is essentially entirely remote: I’m basically half front-end developer, half tech support; I answer emails, read and compose docs, stare at code, and participate in meetings. Since I didn’t have much experience with running two computers, I just plopped the new laptop next to my big rig and went to work like that for a couple of days. Many problems appeared immediately:

  • My back was sore: laptops on desks usually have you looking down at the screen, and between a small screen and a big desk, my back was certainly feeling it.
  • Laptop keyboards and trackpads are a pain in the long run: they are small, key travel is tiny, and they usually don’t have a numpad. The T14 certainly has one of the better keyboards on the market right now, but real mechanical switches would be much better. The trackpad is certainly good (especially with its hardware buttons on top), but it’s fairly small and cramped (and don’t even get me started on the ThinkPad Nipple™). Also, raising your computer up to eye level makes both even harder to use.
  • Limited screen real estate: the screen is a 1920×1080 IPS 14-inch display, which is great, but it’s small: the scaling has to be big in order for text to be legible, and being accustomed to a dual-monitor setup just made it a pain overall.

Because all of my current setup works off a single USB-C port (more on that later), I just put my work laptop on my stand and used it like that for a while, but that quickly made it evident that swapping devices all day was going to be a messy and unproductive solution. What were my choices here?

Well, I could just use my personal laptop for work, but that is a recipe for disaster: mixing business and pleasure is generally a bad idea for privacy and security reasons, and there are also other security measures on my work device that would make it difficult, if not impossible, to get everything running as it should.

I then turned to the idea of using the work laptop as a pivot computer: it just sits in a corner chugging away, and I just open a Remote Desktop connection to it. RDP has a sophisticated feature set including device and clipboard sharing, bidirectional audio for calls, the works. This seemed like a great idea: I could share all of my devices from my personal computer to my work computer and everything would be sorted, right?

Not so fast. My work device runs Windows, and you can go to Settings and enable RDP, but the real problem was Active Directory: all of my login data lives on a company server to which I have no access, and the Remote Desktop server on my laptop just refused to play ball with it. I got certificate issues, authentication issues, connection issues, and I just couldn’t get it to work. If this were Windows Server, I could probably massage the service enough to make it work, but it isn’t, and it’s probably for the better: if a remote desktop is compromised, you can cause catastrophic damage to everything you have access to, as the device doesn’t really distinguish between a remote session and a local one. So back to the drawing board it was.

I tried other solutions, but they all failed in one way or another. Switching inputs on my monitor? Doesn’t solve the device problem. Other remote desktop tools like VNC or AnyDesk? Either they didn’t have device support or I had to pay for subscriptions, on top of having to install unauthorized software on my work laptop, a big no-no.

My only recourse was hardware: a dedicated device that handles sharing and switching peripherals between computers, while the target computers are none the wiser. But how was I to implement this and have it play ball with my current setup?

My previous setup

My personal laptop is an HP Victus 16, sporting quite a few peripherals:

  • Logi G513 Carbon with MX Brown switches and custom keycaps (over USB).
  • Kensington Trackball Expert wireless pointing device (over USB).
  • Dell S2721QS 4K 27-inch monitor running at 1440p for better performance.
  • Behringer XENYX 302USB audio interface (over USB).
  • Gigabit Ethernet over USB from my dock.

This setup has my laptop screen as the main display with the monitor off to the side, and all devices connected via a USB-C dock. This lets me have everything hooked up with just two cables (the other one being the power supply, since this is a “gamer” model with high power consumption). I really like docks for their flexibility, and with USB-C native video transport and the high-speed USB 3 data link, I can switch from on-the-go to stationary and vice versa in mere seconds, all while keeping cables hidden and clutter down.

This is very much a tangent, but I’ve always found docks the coolest thing ever. Ever since I saw an OG IBM ThinkPad on my dad’s office desk back in like 2006, rocking Windows XP and a massive dock, I’ve appreciated the advantages in convenience and portability. My first laptop had a huge dock connector on the side, and USB-C has finally given me the possibility of running power, video, and data over a single cable. If you have a laptop sitting semi-permanently on your desk, I highly recommend you get one. Sure, laptops are loud and underpowered compared to equivalent desktop PCs, but if you need portability, it doesn’t really get much better than this.

I’ve been using a Baseus Metal Gleam 6-in-1 USB-C dock: it has 3 USB 3.0 ports, USB-PD passthrough, an HDMI output, and a Gigabit Ethernet port. It’s enough for my needs and it’s also small, which meant I could mount it directly to the stand the laptop sits on.

Now I had to decide on a new layout: how exactly was I going to place two laptops and a monitor on my desk without losing all my space?

Introducing the KVM

With all of this in mind, these were my objectives:

  • The monitor will now become the primary screen, switching between devices as needed.
  • The keyboard and mouse must switch between laptops in sync with the screen.
  • I need to hear the audio of both computers simultaneously, although the main one would be the personal one.
  • Whatever device does the switching must have some sort of remote, in order to hide it under the desk for better cable management.

For my work computer, I just duplicated the setup of my personal computer: a laptop stand and another of those USB-C dock things. The audio situation was also simple, as the Behringer audio interface I’m using has a secondary stereo input called 2-TRACK. Using a simple USB sound card, a ground loop isolator (to prevent buzzing sounds), and some random 3.5mm-to-RCA audio cable, I had both devices in my headphones without issue.

For the screen and the USB devices, I needed a KVM switch: a clunky acronym standing for Keyboard-Video-Mouse, it’s exactly what it sounds like: you press a button, and your keyboard, mouse, and monitor are now connected to another machine. These are fairly niche devices mostly relegated to server racks and other specialized applications, but they can still be found for cheap in the power user electronics market.

I got a UGREEN CM664 HDMI-USB KVM switch from AliExpress for cheap, and despite its low price it has everything I need: HDMI video switching, USB 3.0 switching, and a cute little wired remote perfect for attaching to my keyboard. It’s also fairly small, only big enough to fit all the large connectors, and requires no software: it’s just an HDMI pass-through and a USB hub that can switch between hosts.

Not to get too deep into the weeds here, but this device physically disconnects all the interfaces during switching. This means devices have to be re-recognized and initialized, and a second screen must be instantiated and all windows reordered, which takes a couple of seconds in total. This is not a problem for me; there are KVM switches that emulate the presence of a device while another computer is active in order to make the transition almost seamless, but that seemed a bit excessive for this application, especially for the considerable price hike.

Now it’s just a matter of hooking up everything together and we’re done, right?

A cable management nightmare

Well, not so fast. You may have noticed there are a lot of cables in the mix: tons of USB cables, network cables, audio cables, power bricks, the whole shebang. If not kept in check, this could quickly become a giant octopus of messy cables that eats up desk space and just flat out looks ugly.

My desk also has some storage underneath that must be able to slide out, so having cables dangling behind it was flat out not an option. To solve this I used zip ties and a twist on the usual mounting clips. I really wish those plastic mounts with adhesive backing worked: I like them, but having cables pulling permanently on a piece of double-sided tape just guarantees they’ll pop off at some point.

A better solution for me was a box of short washer-head screws: the wider head makes it easy to trap a zip tie under it, while being discreet enough to hold a bunch of cables without pulling out. Granted, you’re putting holes in your furniture, but I have found time and again that it is a worthwhile sacrifice to keep the cables in place for long periods of time. The screws are also reusable: just back them out a turn or two and the zip tie will come right out.

Once I got my enormous bundle of cables under control, it was time to test it out.

Performance and quirks

Overall, the whole thing works great: I can quickly switch between both laptops, sharing devices without an issue. I attached the remote to a corner of my laptop, which gives me a clean look and easy access to it. The switching is fairly quick, and all apps rearrange themselves when the second display is detected, which is very useful when returning to a computer after a switch. Also, having the laptop screen still showing is great for situational awareness when you’re working with both laptops at the same time. The entire setup uses slightly more space than it used to, but it’s a marginal difference compared to all of the advantages it has brought.

I thought having shared audio for both devices would be a bit of a mess, but surprisingly it isn’t: hearing notifications from the other computer while playing music, or keeping a call going while switching computers, is extremely useful, and the expected overlap of sounds has turned out not to be a problem.

The KVM switching process, with its rediscovery and rearrangement of devices and applications, takes a couple of seconds, but it isn’t really a problem, at least for my sensibilities. I do wish the KVM had some sort of optimization to reduce the lag in USB devices, which I feel is slightly too long.

There is also the problem of sleep: you have to tweak your settings to prevent a computer from going to sleep while you’re working on the other one. Since it’s very much possible that I’m not interacting with a device for a while, it’s not an unreasonable assumption for the OS that the machine is ready to sleep, even if it isn’t.

Closing thoughts

Overall, this KVM solution has pretty much solved all my problems of parallel laptops: the devices are shared without a problem, and my desk has not been entirely consumed in the process. There are some quirks, but overall the device does exactly what it should.

I do feel, however, that it’s a very involved process: as work-from-home becomes a ubiquitous form of labor, I feel that a hardware solution that just does this for you, with some degree of customization, could be a real game changer for all of us in this situation. This is something that should be so much easier, but it just isn’t, and there aren’t many approaches on the market that don’t require this kind of tinkering. But if you are so inclined, you can make it work.

I just hope I never see the day when a third computer has to be integrated.


How 400 lines of code saved me from madness: Using computers to organize high-attendance events

Note to readers: I can’t really show pictures of the event because most participants were underage and I don’t have a release for having them here. This makes pictures look slightly off-topic, but they are the only ones I can actually show.

About a year ago, I got myself involved as a logistics officer for an event with an attendance in excess of 400 people. For five days, we had to make sure all of them had food, transport, merchandising, a place to sleep, a group to work with, and a good time.

We were constrained in budget, time, and manpower: we wanted to do everything with nothing, and there was just way too much data to crunch by hand. Brute-forcing this wasn’t going to work; it was exactly the kind of situation that gets me excited for some outside-of-the-box thinking.

We used computers, and they worked beautifully: this is how we kept track of the lives of 428 people for five days without losing our minds.

Dreams, experiences, and realizations

Carlos Dittborn, one of the people responsible for hosting the FIFA World Cup in Chile in 1962, had a quote that has resonated in this country for more than sixty years:

Because we have nothing, we want to do everything.

Carlos Dittborn, in an interview during the campaign to decide the host of the 1962 FIFA World Cup.

If you’re anything like me, this quote makes transparent the ethos of makers: It’s not about anything other than solving a problem; the thrill of making something work is enough, and that makes us do whatever it takes to achieve it.

This project was born out of my work as a consultant and youth leader in a religious organization: we had groups scattered all across the country, and they all knew of each other, but there hadn’t been an occasion for all of us to come together as one. The last time something like that had been attempted was back in 2015, and most people currently in the ranks of our youth groups had been way too young to be there.

Overall, that first time was a success, but it highlighted a key reason why using machines matters: there was just way too much information to keep track of using only minds and pieces of paper. It’s a remarkable achievement that it worked anyway, but it took around 40 people a couple of months of tremendous effort to get all the data crunched, and many mistakes were made during the event; things got lost, food got delayed, and people didn’t know where to be. Keeping tabs on 400-ish people is a task that is beyond what the human mind can manage on its own, so better solutions were needed.

There’s also the age variable here: these people are mostly kids. While they are old enough that they don’t need permanent supervision, adolescents are not exactly known for following the rules or caring much about what the adults have to say, so the people in charge needed to be free of burdens of information that could be better managed by computers.

In practical terms, we needed to:

  • Receive all registrations for the event in quasi-real-time, so we could set up sleeping accommodations for all participants and their leaders, along with food and health requirements, transportation, etc.
  • Assign every participant to a group in which all the activities would be carried out. These groups also had requirements regarding gender, age, and place of origin.
  • Keep track of how many people ate at every meal, down to individual people, both to ensure correct payment for the food and to ensure no kid skipped more than a single meal.
  • Transport everyone to a different location on the second day, which meant getting them all onto the correct buses both heading to and leaving from the new location.
  • For some activities, have the kids rank their preferences from a number of choices. These answers had to be tallied, the activities assigned, and the attendance lists handed out to participants and organizers alike.
  • Hand out supplies individually to each participant, keeping track of who had received what.

This is all pretty par for the course for any event, but there was another variable for us to wrangle: this all had to be done at a breakneck pace, by only five people, and with a budget of around 500 USD.

A need for speed (for cheap)

If it were maybe a couple dozen people, all of this could easily be done with a copy of Microsoft Office and some macros. The main problem here was speed: groups had to be assigned as soon as the registration data was ready, all 400 people had to be checked in at every meal in less than an hour, and loading people into the buses had to be done in under 90 minutes. It was an ambitious plan, but doing it any other way would have meant taking time away from the things that actually mattered. Another problem was concurrency: for the math to work out, we needed multiple people filling in data in our tables in real time, while maintaining operator independence. For some applications, we also needed real-time monitoring that gave us insight into what exactly was going on, along with methods for ingesting data from many different sources.

You kinda-sorta can do this with a cloud solution like Google Sheets, but complex operations like assigning groups against set criteria are something a simple spreadsheet just can’t do comfortably: a cell is a cell, and operating with whole data arrays as results is just out of reach. I do know about Excel macros, but they have always felt like a hack to me. An actual programming language with a database is way more flexible.

On the other hand, how do you make the user faster? Even asking for something like a name would take way too much time, and relying on memorized numbers would bring us to a standstill the second someone forgot theirs.

A solution materializes

Not everything in life is having fun and playing the guitar.

The backbone of all of this had to be a data storage and manipulation solution: a way to store structured data and run queries that satisfied our speed and concurrency requirements. We needed a database.

Fortunately, we already had a MySQL server on an internet-facing machine that we could take advantage of. Database engines are like an Excel spreadsheet on steroids: they can process truly mind-boggling amounts of information, run queries, automatically calculate data, serve multiple clients at once; you name it, they can do it. Unlike a spreadsheet, though, accessing a database requires the Structured Query Language (SQL), which meant that if we were going to use this with non-engineers, a frontend was needed.

For this, I need to come clean: I’m not a very good programmer. Sure, I can build things that work, but they aren’t going to be pretty. I picked Python as my language of choice: interacting with MySQL servers from it is very well documented, and data processing can be done fairly quickly if you’re not sloppy about where your data goes. For a while I toyed with making a graphical interface using Tkinter and then curses, but as I was only looking to accept keyboard input and display basic data, a simple terminal would do: inputs and prints were the way to go. This kept me from wasting hours debugging an interface and gave me much more speed when it came to iterating and modifying.

A sample interface using print and input. The entire interface was reprinted with each cycle, but this was acceptable for our needs.

To get high throughput where counting people was required, I turned to the retail industry: I used Code-39 barcodes, each one encoding a four-digit number assigned to a participant. Barcode readers are cheap and most of them support Code-39, and making it work with my software was dead easy: when you plug a barcode reader into your computer, it is detected as a keyboard; all characters are typed as if someone had entered them, followed by an automatic press of the Enter key. This had two key advantages: a simple input field could read barcodes pretty much as fast as we could scan them, and if a code became damaged, it could be typed out manually.
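That keyboard-wedge behavior means the whole check-in frontend can be little more than an input() loop writing rows to the database. Here is a minimal sketch of the idea; the table and column names are invented for illustration:

```python
# Minimal sketch of a scan-and-log frontend; table and column names are
# invented. The barcode reader acts as a keyboard, so input() receives the
# scanned digits followed by the automatic Enter keystroke.
import mysql.connector

db = mysql.connector.connect(
    host="localhost", user="staff", password="secret", database="event"
)
cursor = db.cursor()

while True:
    code = input("Scan a badge ('q' to quit): ").strip()
    if code.lower() == "q":
        break
    if not code.isdigit():
        print("Unreadable code, type the number by hand.")
        continue
    cursor.execute(
        "INSERT INTO meal_log (participant_id, meal) VALUES (%s, %s)",
        (int(code), "day2_lunch"),
    )
    db.commit()
    print(f"Participant {code} checked in.")
```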

A sample label for participants. The number next to their name is encoded in the barcode. G and B fields correspond to their group and bus respectively.

All the rest of the data processing was done with Python scripts that read the database and calculated everything we needed. There were two types of scripts; the first were frontends for real-time operations, such as:

  • Keeping track of meals.
  • Giving out t-shirts and notebooks.
  • Check-in for bus transportation.

The second type were one-time scripts that calculated new data from what we already had:

  • Assigning work groups for each participant, along with their leaders. These groups had age, sex, and origin requirements.
  • Assigning recreational activities and lectures for all participants based upon a list of preferences.

This gave us a new challenge, however: we had never arrived at explicit criteria for making groups like these before, so how could we make a machine think like a human?

A machine thinking like a person

Everyone that has had to organize an event of pretty much any kind has almost surely come to this conundrum: How do you split a large group into smaller groups?

For small groups, rules of thumb and educated guesses are enough, but beyond maybe 70-80 people, it starts to get incredibly tedious, and mistakes are pretty much bound to happen. Computers are deterministic machines, however, so they can handle a classification problem like this one, provided we can spell out all the criteria for it. So, how do we do it?

Workgroups

Let’s look at our first example: we need to divide all participants into groups of 10 people. Before we get into the weeds with programming, we need to ask the fundamental question of this problem: What makes our groups better? Let’s see what we know:

  • All groups are to perform activities under the supervision of two adult leaders. These tasks are centered around knowing each other, sharing opinions and experiences, looking to find common experiences among different groups.
  • All participants are teenagers aged 14 to 18 years old, and they’re usually split into age groups within their places of origin, so mixing them up would be preferable.
  • Participants are of mixed gender, with a slight majority of women.
  • Participants are very likely to know people from their own place of origin, and likely to know people from nearby places of origin, as they often have joint activities. A key aspect of this event is for them to get to know more people, so we need them as separated as possible.

With this in mind, we can arrive at some criteria:

  • Groups will be of 10 or 11 people, a necessity in the very likely scenario that the number of participants is not evenly divisible by 10.
  • Groups have to conserve the gender ratio as much as possible: if the whole is 40% men and 60% women, then all groups should have 6 women and 4 men.
  • Places of origin should be split as much as possible across all groups, to avoid concentrations of people who know each other. People from the same local area should also be split if possible.
  • Ages of participants will have to be as mixed as possible.

Great, now we have our criteria. How can we turn this into code?

The approach I went with involves sorting the table of participants and then assigning a group to each one in a rotating fashion. I will call this method sort-and-box, as those are the two key steps involved. Conceptually, there is a box for each group, and each participant is assigned a box sequentially: the first person goes to group 1, the second to group 2, and so on. Once we’re out of boxes, we roll back over to the first one, and we stop once all people have been assigned groups. If we have sorted the table correctly, this will guarantee maximum dispersion among participants, but it creates a new challenge: how do we sort the list of participants?

Conceptual view of the sort-and-box method.

This approach has a single core tenet: if you place people together in the sorted table, they will end up in different groups; there is no way for consecutive participants in the table to land in the same group. So, to get maximum dispersion, we need to sort all the people who need to be kept apart next to each other. We can also sort by more than one criterion through recursion: we sort inside the already-sorted blocks. This creates a tree of sorted participants, where the first sort has first priority when splitting, then the second, and so forth. An additional layer of separation can be achieved by handling the larger groups (by place of origin) first, which ensures large contingents (the hardest ones to split) are spread as evenly as possible, without interference from smaller ones.

With this, a Python script was created that performed the following steps (a condensed sketch follows the list):

  1. Get the list of participants from the SQL database.
  2. Calculate a multiplier for each place of origin: the product of two ratios, the number of people from that place of origin over the total from its local area, and the number of people in that local area over the total attendance. Sorting by this number puts the bigger places and zones first. The multiplier is appended to each participant.
  3. Sort the table by origin multiplier, gender, and age, in that order.
  4. Create empty groups by dividing the number of participants by 10 and discarding decimals.
  5. Assign each person a group using the round-robin method described above.
  6. Add a column to the table with each participant’s group.
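In Python, the core of steps 3 through 5 is only a few lines. This is a condensed sketch rather than the actual script, and the field names are assumptions (the real version reads from and writes back to the MySQL database):

```python
# Condensed sketch of the sort-and-box assignment; field names are assumptions.
def assign_groups(participants, group_size=10):
    """participants: list of dicts with 'origin_weight', 'gender', and 'age'."""
    # Sort so that people who must end up apart sit next to each other in the
    # table: biggest places of origin first, then gender, then age.
    ordered = sorted(
        participants,
        key=lambda p: (-p["origin_weight"], p["gender"], p["age"]),
    )
    n_groups = max(1, len(ordered) // group_size)  # groups of 10, some take an 11th
    # Deal participants into the "boxes" round-robin style.
    for i, person in enumerate(ordered):
        person["group"] = (i % n_groups) + 1
    return ordered
```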

At last, our work groups were complete. Sorting all 400-ish participants took around three seconds, and I’m sure you could make it faster if you wanted. This meant more time to receive registration data and more time to print out the barcode labels we needed.

Activities

In two instances, participants would split up among different lectures and recreational activities, respectively. This was conceived both as an opportunity for them to share experiences within the event and as a way to generate spaces where people could meet outside of their groups. For both cases, these were the requirements:

  • Each activity had a limited maximum number of participants.
  • Participants had to decide which activity they wanted to attend, but the last call was made by us.
  • Participants who got their preferences in first would get priority in the assignment.

As we had been using Google Forms for most of this, participants were asked to rank all possible choices from first option down to last. That data was then imported into a SQL table and processed as follows (a sketch of the logic follows the list):

  • Get the maximum number of participants for each activity.
  • Analyze each participant separately, with the earliest ones to respond going first.
  • For each participant, check their first option. If there are remaining seats, assign that activity to them.
  • If there are no seats available, move on to the next option until one is found.

Because there were more seats than participants, everyone was assigned a place, even if it was their last choice. Having a program do this meant that the lists could be collated mere minutes after the poll was closed, which meant more time to sort out all activities and more time for participants to get their choices in.

An excerpt of the table given to participants (with their names hidden) for checking into their activities. This was made entirely automatically.

Overall, this is an exercise in teaching ourselves about our own process of thought, so we can teach some sand how to do it for us. Best of all, sand can usually do it faster and more consistently than a human can.

Want to know more? Check out my GitHub repo! It has all scripts I used for this.

Duct tape and cardboard

Prussian general Helmuth von Moltke the Elder, one of the key figures in the unification of Germany, once said:

No plan survives first contact with the enemy.

Helmuth von Moltke the Elder, Kriegsgeschichtliche Einzelschriften (1880)

There is some nuance to this quote: there are obvious advantages to planning ahead, but all plans will soon have to face unforeseen challenges; the real world is too complex, and there are too many variables at play for all of them to be accounted for, so you have to stay flexible for your objectives to come true.

This project was certainly no exception. For one, all of these scripts and data analysis tools were very vulnerable to the garbage-in, garbage-out problem: if registration data had errors or gaps, everything started breaking real fast. Because this data ultimately came from humans, we needed to make sure the user could not make a mistake even if they wanted to. Each place of origin received a Google Sheets spreadsheet in which they had to type all of the information regarding their participants, so how could we idiot-proof it?

In comes data validation: spreadsheets have become leviathans of features, and one of them is the ability to inspect the data as it is typed so that nothing that doesn’t match specific rules can be inserted. First, all cells that were not to be edited were locked, preventing the user from breaking things like calculated fields or sheet concatenation. Then, all input data was assigned a type: date values had to actually work and have a specific format (DD-MM-YYYY), phone numbers had to have a specific length and area code (using regular expressions to match characters, something Excel can’t do but Google Sheets can), emails had to be of valid syntax, and so on. Also, once you filled out a single field in a row, you had to fill all others, otherwise you got an error.

Once all the sheets were received, a big table was made using all the data, which of course had to be skimmed by a human before ingestion: you make something idiot-proof, and the world makes better idiots. Fortunately most of the errors had been caught in time and only minor corrections had to be made.

Then came the barcodes. Our initial plan was to make wristbands and print the codes on them: we even had test runs made in order to check whether our readers could pick up the codes correctly. However, a week before the event, the print shop informed us that they would not be able to fulfill our order in time. This not only meant we needed a different way to get the barcodes to the participants, it also meant we had to design our own labels, since the printer was going to handle that in-house.

We quickly solved it using the notebooks we were already giving out: a simple paper label on the rear cover had us covered no problem. But how could we make 400 personalized labels in just a couple of days?

The answer is very simple: Microsoft Word. As it turns out, it has an incredibly powerful label maker, which can take a table of raw data and churn out labels to your heart’s content. It can even make barcodes on the fly, which was very handy for this occasion. In about two hours we had all labels designed and printed, and in the afternoon all of them had been placed inside of the notebooks. It was tight, as it was finished the day before the event, but it was enough to save us from scrapping the entire control system.

The day comes

I was wearing many hats during this whole ordeal, live sound included.

Our first task turned out to be excellent for fixing late bugs and smoothing out errors: each place of origin arrived separately, and each participant was given a t-shirt and the aforementioned notebook. For each one, the barcode was scanned in order to keep track of who had received what. Because the delegations arrived at different times, the pace was relatively relaxed and we could ingest data manually if need be. Some bugs were found and quickly patched in anticipation of the big test: the meals.

The problem we had with meals was one of speed: our calculations showed that in order to feed everyone in the allocated time, we would be pushing out a meal every 13 seconds. Our catering crew was good, but if we slowed down even a tiny bit, we could jeopardize our ability to serve everyone in time. Failure was out of the question. Our excitement was palpable, then, when the queue started backing up not because we couldn’t keep up, but because the serving line was at capacity and had backed up to where we were scanning the barcodes. Even with a single scanner we could keep up with the serving rate, and two were enough for pretty much all situations.

Assignment of activities was also a huge success: everything from composing the registration form to distributing results to participants was done in a couple of hours, with only minor tweaking needed each time, which was now possible thanks to the massive time savings. Overall, participants were very satisfied with the distribution of activities, and our policy of transparency regarding the assignment rules meant we got almost no complaints or last-minute rearrangements.

Our biggest hurdle speed-wise was the boarding of buses: 10 buses of 45 people each had to be completely filled up and emptied out in both directions, with no more than 60 minutes for each leg. The hard part was actually getting people to move fast enough to each bus, but after a few slower runs we found our rhythm, and the return journey was even done ahead of schedule, with all buses boarded in under 50 minutes.

Even after the event itself, our control scheme kept being useful: attendance numbers for the activities helped us review which were the most popular, and our meal counts were within 5% of what the catering crew actually served, which convinced both us and them that we were paying the right amount of money for over 2,000 meals served.

Lessons learned

This entire project was a massive success from beginning to end: while most tech-savvy people will agree that this is not a particularly complicated project, introducing a rather modest amount of computational power to an event that requires this much information processing generates enormous time savings, which has one key consequence: we, the people making this event work, had more time to attend to the kind of issues you cannot account for beforehand, instead of focusing on trying to wrangle an overgrown spreadsheet.

Another advantage is one of consistency: when the rules of the game are clear, computers do a better job at making decisions than we do, and if we make a conscious effort to eliminate bias from our algorithms, we can create fairer solutions that maximize the chances of consistently good outcomes. Being transparent about what your code does also lends legitimacy to unknown software: if you can explain what your code does in layman’s terms, chances are people will trust and follow the instructions it gives. Be cautious, however; computers are bound to the biases and prejudices of whoever programs them, so do not put your trust in them blindly.

Even when problems came up (uncaught bugs, missing functionality, and a need to adapt to changes in the event program), our software was so simple that a couple of lines of code were usually enough to get it doing what we wanted. We even re-ran the assignment scripts multiple times when bugs were caught, and everything was so fast that we could redo it all with minimal time loss.

Conclusions

Perhaps the most astute observation from all of this is one of systems architecture: machines have to be designed to serve you, not the other way around. If you create an automaton that plays to its strengths, offloading mind-numbing work from people makes them in turn more useful, because you’re taking advantage of their humanity. More time for us meant we could plan everything out better in advance, and the trust we placed in the machine meant we had more time to think about what we were doing and why: we wanted to give these kids an unforgettable experience and the chance to grow as people, together, and that is something that machines can’t do.

What we do with systems and machines and automation, we do because it gives us our humanity back from the void of data. Adopting technology is an imperative not just because it’s fun or useful, but because it gives us the tools we need to comprehend and interact with an increasingly complex world.

I sometimes feel that technology is currently on trial in the public consciousness: the endless march of innovation often makes us jaded and skeptical of adopting these tools, and for good reason. Like all tools, they are values-neutral; it’s up to us to decide how and why we use them.

There are also many reasons to be distrustful: we’re getting a better understanding of how social media negatively affects our relationships and self-image, we’ve seen what tech companies are capable of doing for a quick buck, and after a pandemic that had us staring at screens for sixteen hours a day, it’s understandable that we wish to escape these machines for good.

But these machines also give us many unprecedented abilities: to communicate at high speed to and from anywhere in the world, to generate models that help us understand the world around us, to create new solutions, to save lives, to serve as tools of freedom and dignity, to preserve our past, to shape our future, and to allow us to focus on being human.

What we created here is none of those things, but it allowed us to put on an event that I’m sure will be a high point for everyone who attended it. We had the tools at our disposal, and we used them effectively to create an amazing experience for everyone without losing our sanity in the process, and that feels great. It fills you up not only with pride and joy, but with a tremendous sense of accomplishment and purpose.

So please, if you can, use them to create systems that decrease the amount of suck in this world. Craft new experiences, push the boundaries of what is possible, be amazed by their sheer power, and maybe you will create something amazing in the process.


It’s Free Real Estate: DIY Solar Pool Heating System

More than five years ago, I set out to solve one of the biggest grievances with my home: I had a very very cold pool. Even in the summer it was unbearable to bathe in for more than a few minutes; we even considered filling it up with dirt.

Here’s how I fixed it on a tight budget, and what I learned doing it.

The issue

Our house came with a very nice 32.6 m³ freshwater pool; it was a big selling point and one of the main reasons we bought it. We imagined it would be great for the hot summers of central Chile, and a centerpiece of household social activities. It soon became clear that it would not be so.

That pool would, on the hottest of summer days, never really get past 21 °C. Getting into it might be refreshing for a while, but it soon chilled you to your bones. Most sources on the Internet indicate that a reasonable temperature for a freshwater pool is at least 24 °C, and those three degrees made a huge difference. Remember, 21 °C is the best case; in practice, the actual temperature was quite a few degrees lower.

For one, the pool is lightly colored, and painting it anything short of pitch black wouldn’t have really made a difference, because a large tree shades it for most of the day. Fixing either of these problems was out of the question, as it would not pass aesthetic inspection (my mom). For a while, we even considered filling in the pool to get some extra garden space, but it always felt like a waste. The hunt was on, then: a new way of heating the pool was needed.

Choosing the right way

The first question was which energy source I was going to use: it had to be cheap both upfront and over time, and already available at my house. This basically meant (at least in principle) either gas or electricity. For gas-powered systems, you can install what is essentially an oversized boiler, while electric solutions involve resistive heating (like a hot water tank) or heat pumps. None of these made sense for my budget; both installation and running costs would have been massive, as energy is expensive here.

In comes solar heating. This boils down to circulating water through a black pipe placed in the sun; the pipe heats up and transfers its heat to the water. The advantages were clear: no energy costs and very basic infrastructure. Next to our pool filter lies our roofed driveway, which despite being on the south side (the worst side in the southern hemisphere) of a tall house, had enough space to clear its shadow for most of the day. This was the way to go.

Designing solar water heaters from scratch

You can buy ready-made solar pool heaters which are essentially a flat panel of small tubes (about 8mm in diameter) which can be laid on a roof and piped to and from the pool filter, but these are expensive and usually hard to get if you’re not an installer (at least over here). Also, you read the title, you know where we’re going with this.

To make low-temperature solar thermal collectors, we need something that can withstand UV light, be somewhat flexible for ease of installation, and, ideally, be black: in comes polyethylene pipe, a flexible-ish black pipe meant for irrigation. Smaller pipes give you more surface area per unit of water volume, so the smallest size easily available, half-inch, was used.

Then came the question of area: how much roof space do you need to fill with panels to get good temperatures? My reflex answer is as much as you can, but there are some difficulties with this approach:

  • The more panels you put, the bigger the pump you will need to push water through them, and the higher the operating pressure you will need.
  • Water is heavy and your panels will be full of it; be careful how much weight you place on your roof.
  • For this application having panels in the shade is not really harmful, but it will be wasted space and pressure; try to put only as many panels as you actually need.

Figuring out how many panels you need to heat up a pool is rather difficult: you will most certainly end up partially eyeballing it. However, there are some important facts you need to consider:

  • How big your pool is and how much of a temperature difference you actually want.
  • The angle of your roof and the direction it faces.
  • The height of your roof and the power of your pump, as these will dictate your flow rate.

For us, what made sense was around 500 m of total poly pipe exposed to the sun, and we also had a roof that was readily accessible right next to the pool pump. That number is somewhat arbitrary and has more to do with how we went about building the thing, but it ended up working out in the end.

Designing the panels

To make panels that would actually work, we set the following criteria:

  • The panels must be small and light enough to be lifted to the roof by a single person.
  • The panels must be able to be replaced if necessary.
  • The panels must be arranged in such a way as to have the smallest possible impact in flow rates and pressures.

Because we went with half-inch poly pipe, putting panels in parallel was pretty much mandatory, so we decided to make lots of small panels we could haul up onto the roof and then connect into an array. After some quick calculations, we realized that a flat spiral a meter in diameter would hold roughly 50 m of pipe, which meant we could build 10 lightweight spirals: the pipe would be tied with steel wire to a wooden cross every four turns, and after many, many hours of rolling, we had our panels.

10 panels also turned out to be a bit of a magic number, as it meant that running 5 sets of two panels would roughly match the cross-sectional area of the 50 mm pipe coming to and from the pool, which meant pressure loss would not be that bad. The total internal volume of the panels was around 350 L, which meant the waterline would recede by around a centimeter when they filled. This was the winning combination.

Connecting it to the pool filter

There are three key features regarding the connection to the pool: first, the water circulating through the panels must have already passed through the filter, so as to prevent blockages. Second, the user must be able to control not only whether the panels get water, but how much water gets up to them, so the temperature can be controlled without sacrificing too much flow and pressure. Third, care must be taken to keep pipe runs as short as possible; every fitting and every jump in height reduces flow and pressure.

With all of this in mind, and blessed with a roof right next to the pump house, the output of the filter was teed off in two places, with a ball valve installed between the tees: this is our mixing valve, which lets us blend cold water from the pool with warm water from the panels to control the temperature. The first tee in the chain connects to the panel valve and then up to the roof, where five manifolds feed the poly pipe spirals, with a matching set of return manifolds downstream of the panels. The return from the panels enters the second tee and heads back to the pool.

There are some considerations here: a ball valve is not the most precise way of controlling temperature: something like a gate valve gives you finer control, but they are a lot more expensive, and you can still dial in temperatures just fine with a little finesse on the ball valve's handle. Also, when the pump turns off, a vacuum forms inside the panels, as the water drains down under gravity and nothing replaces it. For these panels, I found that the back pressure from the return lines was enough to break the vacuum and prevent an implosion, but for taller roofs I would recommend adding a vacuum breaker (essentially a valve that opens when the pressure inside the panels drops below atmospheric and lets air in), just in case.

And, well, that’s it! By opening the panel valve and slowly closing the mixing valve, water will start to go up the panels, and heat capture will commence.

Using the system in practice

The hydrostatic form of Bernoulli's equation tells us that raising a fluid by some height requires a corresponding increase in pressure. For us, this means there is a minimum operating pressure below which the panels won't actually get water: instead, a column of water will peacefully reside in your pipes without ever overtopping the highest point in your system. The same equation gives us the answer:

P_critical [Pa] = ρ [kg/m³] · g [m/s²] · h [m]

Where P_critical is the minimum pressure you need, ρ is the density of the water, g is gravitational acceleration, and h is the height difference between your pump and the tallest point of your panels. You can also kinda eyeball this: close the mixing valve bit by bit until you start hearing water coming back through the return pipe, then back off until you can't hear it anymore; the point where the sound just stops is your minimum operating pressure.
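As a worked example (the 4m head here is just an assumed roof height; swap in your own):

```python
# Minimum pressure needed before water makes it over the top of the panels,
# straight from the hydrostatic relation above.
rho = 1000.0   # kg/m^3, density of water
g = 9.81       # m/s^2, gravitational acceleration
h = 4.0        # m, assumed height from pump to the highest point of the panels

p_critical = rho * g * h   # Pa
print(f"{p_critical / 1000:.1f} kPa ≈ {p_critical / 1e5:.2f} bar ≈ {p_critical / 6895:.1f} psi")
# -> about 39 kPa ≈ 0.39 bar ≈ 5.7 psi, well within reach of a typical pool pump
```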

With pipes this thin, passing the entire flow of water through the panels is somewhat unnecessary: there are diminishing returns once the water starts heating up, so you only want to send as much flow up the panels as it takes to keep them from running above ambient temperature (so you don't lose heat to the air); that gets the best performance out of both the panels and your pump. If the pool gets too hot, then losing heat is exactly what you need, so just open the mixing valve a little more and in a day or two you will have a cooler pool.

Unforeseen circumstances

Great, your pool is now warm! Unfortunately, this is not without consequences. For one, warmer pools lose water to evaporation a lot faster than cooler ones, so expect to top it up more often and be mindful of your water bill. Also, warmer pools are much more attractive to algae, which grow a lot faster in these waters: maintaining good chlorine levels, incorporating some sort of electrolysis cell for adding copper ions, and cleaning the pool regularly are a must, unless green and turbid water is what you want.

After much experimentation, I have found the winning combination: one and a half tablets of TCCA per week, copper ions added via electrolytic cells, and weekly vacuuming and sweeping are enough to keep algae at bay. Remember that the actual quantities will depend on your pool's temperature and volume.

Although this isn't my case, if you happen to live in a place where freezing temperatures are common, it's very important that the panels are drained for the winter: usually popping open the filter's lid and drain plug for a couple of hours is enough; otherwise, prepare for burst pipes and cracked joints. In the same vein, remember to paint your PVC pipes every so often; UV light is not nice to polymers, so avoid exposure where possible.

On a more humorous note: my panels usually drain almost completely overnight, which means the pump pushes all of the air out of the pipes first thing in the morning, resulting in a very unique noise: my pool is farting!

A five-year retrospective: closing thoughts

This project turned out to be a huge success, not only for my household but for the many useful skills it taught me, both in building it and in designing it: the value of the educated guess cannot be overstated, and sometimes the only thing you need to succeed is a few ballpark back-of-the-envelope calculations. By applying some high school physics and a bit of blood, sweat, and tears, we ended up with a pool that regularly hits 28 °C and beyond, and it became the centerpiece of our beautiful garden. If you want to get into some low-stakes plumbing, the low pressures and big pipes are a great way to start, and even a large pool can be done relatively cheaply, certainly more cheaply than hiring someone to do it. Best of all, you'll be doing it in an environmentally friendly way.

Categories
Tech Explorations

Building a lab-grade power supply from old computer parts

A bench power supply is a fundamental tool for testing electronics, allowing flexible power delivery to the range of different devices that might make their way to your bench. As electronics became ubiquitous, DC power supplies became easy to find, and building a capable unit from scrap electronics is a very budget-friendly way to expand your setup.

I'm not beating around the bush: this isn't a guide to making a fully-featured power supply for cheap. It's a hacky, cobbled-together device that could be so much more powerful, but I just don't want it to be: it's just so I can charge batteries, power up junk to see if it works, and get some voltages out into the world when I'm too lazy to go find a power brick. It's ugly and profoundly utilitarian, but it works.

I’ve got a ton of ATX power supplies, and you probably do too

I'm willing to bet that when IBM launched the PC AT in 1984, they didn't expect its overall layout and design to become the de facto standard for computers, especially forty years later. One would be forgiven for questioning how we got into this predicament: there are many things to hate about the AT standard: the card risers are barely adequate for holding modern GPUs, the power supplies are way too bulky and come with a rat's nest of wires you may not need, the connectors suck, and so, so much more. However, it's what stuck, so we're stuck with it too.

This means that pretty much every desktop computer with a tower form factor has an ATX (AT eXtended, basically a beefed-up AT standard for slightly less crap and more modern applications) compatible power supply, and pretty much everything is in more or less the same place inside the chassis, which makes it great for finding parts that all more or less work with each other.

If you’ve ever disassembled a desktop computer (and let’s face it, if you’re reading this you probably have), you probably ended up throwing the PSU into a pile of them that you look at every so often thinking “I should probably do something with them”; well, here we are.

Contemporary power supplies usually have a few connectors in common:

  • A 24-pin motherboard connector. (+3.3V, +5V, +12V, -12V, 5Vsb)
  • A 4 or 8-pin processor connector. (+12V)
  • A PCIe power connector with 6 or 8 pins, with higher-power models having multiple connectors. (+12V)
  • Accessory connectors, usually SATA and/or Molex connectors, for stuff like storage drives, optical drives, fans, etc. (+5V, +12V)

These devices are extraordinarily dumb: while the motherboard does have some control over their operation, the protocol is extremely simple: a +5V standby rail powers the control circuitry, which turns the supply on by pulling the PWR_ON line to ground, and it is notified that the PSU is ready to go when the PG (power good) line goes to +5V. That's it. The wide array of voltages and dead-simple signalling make these supplies an exceptional way of powering almost everything. Almost.
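For quick reference, the whole handshake boils down to three wires. These are the standard ATX wire colours, which match the wiring described below, but manufacturers do occasionally deviate, so check the label on your own unit:

```python
# The three control wires that matter when running an ATX supply standalone.
ATX_CONTROL_SIGNALS = {
    "purple": "+5Vsb - live whenever AC is present; powers the standby/control side",
    "green":  "power on (PS_ON / PWR_ON) - pull to ground, e.g. with a toggle switch, to turn the main rails on",
    "gray":   "power good (PG / PWR_OK) - driven to +5V by the PSU once the main rails are stable",
}

for colour, meaning in ATX_CONTROL_SIGNALS.items():
    print(f"{colour:>6}: {meaning}")
```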

Most bench power supplies are adjustable, having both voltage and current control over a wide range of supply conditions, which is very handy for getting that pesky device that uses a weird voltage to power up, or for running tests under different conditions. There might be ways of modifying the feedback circuitry of the switchmode supply inside to make it adjustable, but I'm not knowledgeable enough in electronics to know how, and from what I've seen, it might not even be possible.

Some jellybean parts from AliExpress, a box, and some soldering later

With all these factors taken into account, the requirements are as follows:

  • I want to use an ATX power supply from an old computer.
  • I want all the voltages from the ATX standard available for use.
  • I want an adjustable regulator that can do both buck and boost, so I can get a wide range of voltages.
  • The regulator must have both constant voltage (CV) and constant current (CC) capabilities.
  • Having two regulators would be nice.
  • The power supply must be at least 150W total.

From my pile of scrap I fished out an FSP 250-60HEN 250W ATX power supply. It's fairly old, but it has a couple of features I like:

  • It has a big fan on the top, which makes it quieter.
  • It has two 12V rails: one for the processor connector, another for everything else.
  • The wire gauges are all fairly similar, which makes them easier to bundle afterwards.

With this, I cut off all the connectors and separated the rails: orange is +3.3V, red is +5V, yellow is +12V, the lonely blue wire is -12V, and black is ground. The status cables (green for power on, gray for power good, purple for +5Vsb, plus a ground to make it all work) were separated by color, and everything was soldered to ring terminals for connecting to banana plugs on the front. The +12V rail from the processor connector was also kept apart. Some cheap binding post/banana plug combos from AliExpress and a heinous 3D print job that peeled off the print bed halfway through later, and I had some voltages to work with. The power-on signal went to a toggle switch that connects it to ground (this is my main power switch), the +5Vsb went to an indicator showing the device has AC power, and the power good line lights up another indicator showing the supply is ready to be used.

For the regulators, I went with some nifty panel-mount units I found on AliExpress for cheap: they can handle a decent amount of power, they have a usable interface, and they have an extensive range: they can do 0-36V at 0-5A, all from the second +12V rail. Pretty cool! Add some banana plug cables, alligator clips, a few other accessories, a couple of zip ties, some aluminum tape, and a bit of swearing, and we have a supply!

Ups and downs

I'm not going to sugarcoat this: this is a quick and dirty project. The thing is ugly, it looks like it's going to kill you, and it very much gives off a "rough around the edges" vibe, but it works exactly as I had hoped: the regulators work great, the fixed voltages are no problem, and all the control devices work as they should. There are a few things worth noting, though:

  • The regulators have an interesting way of performing constant-current duties: instead of a proper control loop holding the current at the desired value, the device just shoves out whatever voltage you set and then observes; it steps the output voltage down until the current falls below your target, measures again, and converges on your desired current in steps (there's a toy sketch of this behaviour right after this list). This perturb-and-observe approach is perfectly usable for steady-state applications, but if you have sensitive loads like LEDs or integrated circuits, be mindful to set your voltage to a safe level before activating CC mode; failing to do so could put an unsafe voltage on your terminals.
  • The measurements from the regulators are accurate but not perfect; if you need precision, use a multimeter and short leads.
  • The fixed outputs have no onboard measurement other than what is needed for protection, so be careful about shorting these out.
  • I messed up the settings on my print and it came out really deformed. If I weren't lazy, I'd redo it with better bed adhesion, but I'm not. Nothing that some tape won't solve. I might change it later, but the thought of undoing all the binding posts makes me queasy.
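Here's the toy sketch of that perturb-and-observe behaviour mentioned in the first bullet: this is my reading of how the module behaves, not its actual firmware, and it assumes a purely resistive load for simplicity:

```python
# Toy model of perturb-and-observe constant-current limiting into a resistive load.
def settle_cc(v_set, i_target, load_resistance, v_step=0.1):
    """Step the output voltage down until the measured current is at or below target."""
    v = v_set
    while v > 0 and v / load_resistance > i_target:
        v -= v_step          # perturb: drop the output a notch
    return v                 # observe: current is now at or below the target

# Example: a 12 V preset into a 2-ohm load with a 1 A limit.
# The load briefly sees the full 12 V (6 A!) before the loop walks down to ~2 V,
# which is exactly why you set a safe voltage before enabling CC mode.
print(f"settled at ~{settle_cc(v_set=12.0, i_target=1.0, load_resistance=2.0):.1f} V")
```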

Overall, it's like having a cheap AliExpress bench supply, but for about a quarter of the price. Pretty good, I'd say.

The tools you have are better than the tools you don’t

I've been working with this for about a month now, and I wonder how I made it this far without a bench power supply. Building my own tools gives me a ton of satisfaction, and I hope to keep using and improving this device in the future. Sometimes the tool you can build with what you have is the best tool you can possibly get, and it will probably take you farther than waiting for the shiniest gadget.

So yeah, if you have a pile of junk computer parts, build a power supply! You'll get lots of mileage out of it and it will open plenty of doors in your electronics adventures, not to mention the money it'll save you.

Get building!

Categories
Tech Explorations

Building a better Elgato Game Capture HD

Back in 2015 I got myself a brand new Elgato Game Capture HD. At the time, it was one of the best capture cards on the consumer market: it has HDMI passthrough, standard-definition inputs with very reasonable analog-to-digital converters, and decent enough support for a range of different setups.

Despite its age, I still find it very handy, especially for non-HDMI inputs, but the original design is saddled with flaws that keep it from reaching its full potential. This is how I built a better one.

Using this card in the field

After a few months of using it to capture PS3 footage and even building some crude streaming setups for small events using a camera with a clean HDMI output, two very big flaws quickly became apparent. First, the plastic case's hermetic design and lack of any thermal management made it run really hot, which after prolonged operation resulted in dropouts that sometimes required disconnecting and reconnecting the device and/or its inputs. Second, the SD inputs are very frustrating: the connectors are non-standard, and the provided dongles are iffy and don't even let you take full advantage of the card's capabilities without tracking down some long-discontinued accessories.

My first modification was rather crude: after it failed during a livestream, I took the Dremel to it and cut a couple of ventilation holes, paired with an old PC fan running off USB power (undervolting the fan provided enough cooling without being deafening). This obviously worked, but it introduced new problems: the card now made noise that could be picked up by microphones, and it had a big gaping hole with rotating blades just waiting to snatch a fingernail. This wouldn't do.

Solving thermal issues

It quickly became clear that the original case for the Elgato Game Capture HD was a thermal design nightmare: it provided no passive cooling whatsoever, neither heatsinks nor vents. The outer case design was sleek, but it sacrificed stability along the way.

This device is packed with chips, all providing different functions: HDMI receivers and transmitters, ADCs, RAM, and plenty of glue logic, which means power consumption was always going to be high. A custom LSI solution or even an FPGA could have been better in terms of power consumption, but that is often far more expensive. Among all of the ICs, one stood out in terms of heat generation: a Fujitsu MB86H58 H.264 Full HD transcoder. This chip does all the legwork of picking up a video stream, packaging it into a compressed stream, and piping it through a USB 2.0 connection. It's pretty advanced stuff for the time, and the datasheet even boasts about its low power consumption. I don't know exactly why it runs so hot, but it does, and past a certain threshold it struggles and stutters to keep a video signal moving.

There was nothing worth saving in the original enclosure, so I whipped up a new one in Fusion 360 with plenty of ventilation holes and enough clearance above the chip to add a chipset heatsink from an old motherboard. I stuck it down with double-sided tape, which is not particularly thermally conductive, but together with the improved ventilation it's enough to keep the chip from frying itself into oblivion. I ran another protracted test: none of the chips got hot enough to raise suspicion, and even after three hours of continuous video, the image was still coming through fine. I initially thought other chips might need heatsinks too, but it appears the heat from this transcoder was what pushed the card over the edge; without it, the other ICs barely got warm.

Since we made a new enclosure, let’s do something about that SD input.

Redesigning the SD video inputs

This card hosts a very healthy non-HDMI feature set: it supports composite video, S-Video, and Y/Pb/Pr component video, along with stereo audio. The signal is clean and the deinterlacing is perfectly serviceable, which makes it a good candidate for recording old gaming consoles and analog media like VHS or Video8/Hi8. However, Elgato condensed all of these signals into a single non-standard pseudo-miniDIN plug that mated with the included dongles. Along with a PlayStation AV MULTI connector, the card came with a component breakout dongle that allowed any component source to be used, and with the included instructions you could even get composite video that way. S-Video, however, was much more of a pain: while it was possible to plug an S-Video cable straight into the socket, that left you without audio, and the official solution was to purchase an additional dongle which, of course, by the time I got the card, nobody had in stock.

To solve it, I started by simply desoldering the connector off the board. I saw some tutorials on modifying S-Video plugs to fit the Elgato's 7-pin weirdness, and even considered placing a special order for them, but in the end I realized it was moot: the dongles sat very loosely in the connector, and any expansion I wanted to make would be limited by it, so I just removed it.

To the now-exposed pads, I soldered an array of panel-mount RCA and S-Video connectors pulled out of an old projector, so I could use whichever standard I pleased: three jacks for Y/Pb/Pr component video, one for S-Video, one for composite video, and two for stereo audio, complete with their proper colors too. The SD input combines the different standards onto a single three-wire bus: Pb (component blue) doubles as S-Video chroma (C), Pr (component red) doubles as composite video, and Y (component green) doubles as S-Video luma (Y), so the new connectors are electrically tied to each other. Even so, I much prefer this to having to remember which one is which, or keeping track of adapters for S-Video (which I use a lot for old camcorders).
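For reference, here's how the shared lines end up mapped onto the new panel jacks, straight from the signal sharing described above (the jack labels are just my shorthand):

```python
# Mapping of the new panel-mount jacks onto the card's three shared SD video lines.
SD_INPUT_MAP = {
    "Y jack (green RCA)":  "component luma (Y) / S-Video luma",
    "Pb jack (blue RCA)":  "component blue difference / S-Video chroma (C)",
    "Pr jack (red RCA)":   "component red difference / composite video",
    "S-Video mini-DIN":    "its Y and C pins share the lines above",
    "Audio L/R (RCA)":     "stereo audio, separate from the video lines",
}

for jack, signal in SD_INPUT_MAP.items():
    print(f"{jack:<20} -> {signal}")
```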

Final assembly and finished product

After printing the new enclosure I slotted in the board (it was designed as a press fit with the case to avoid additional fasteners) and soldered the new plugs to the bare pads of the old connector using thin wire from an old IDE cable. The connectors were attached to the case with small screws, and the design puts all of the connectors on the bottom side of the case, which means no loose wires. The top stays in place with small pieces of double-sided tape and some locating pins, which makes disassembly easy, great for future work or just showing off.

I wish this was the product I had received from Elgato. It allows the hardware to work to its true potential and makes it infinitely more useful in daily use: no more faffing around with dongles, no more moving parts, no more dropouts on a hot day. It feels like this is what the engineers at Elgato envisioned when they came up with the thing. The Elgato Game Capture HD is now my main non-HD capture device, and it even gets some use for HDMI when I can't be bothered to set up the ATEM switcher.

Finishing thoughts

I love the Elgato Game Capture HD, both for what it is capable of and for what it did for the nascent streaming and video creation scene back in its day. I love its feature set and I'm even fond of its quirks, but with this mod I feel like I finally have its true potential available without compromises. It went from a thing I kinda knew how to use that stayed at the bottom of a drawer to a proven and reliable piece of equipment in my toolkit. If you have one of these devices and feel unsatisfied with its performance, I urge you to give this a try; you will no doubt notice the difference, and maybe you'll keep it from going into the bin.