
A house run by computers: making all of your IoT devices play nice with each other

The current state of the Internet-of-Things scene can sometimes be mind-boggling: incompatible ecosystems, an endless reliance on cloud services (which have been shut down before and will be again), and an uncomfortable feeling that you’re not quite in control of what your devices are doing. Then I got Home Assistant, and everything changed. This is a story about how the smart home should and shouldn’t be, along with a few tricks that will hopefully save you some blood, sweat, and tears.

The E in IoT stands for easy

If you’re into home automation, then you are probably aware of the absolute mess that the market has become: everything has its own ecosystem, there’s poor integration between services, and everything relies on the cloud all of the time: the command to turn on a switch 5 meters away usually has to pass through a server hundreds or thousands of kilometers away.

If you want to make an Ecobee thermostat work with a Sonoff temperature sensor, you’re all out of luck. Local network control? Nope. Custom hardware? Not really. While most of the internet has coalesced thanks to open and flexible standards like IP, DNS, and TCP, the smart home situation has been plagued by the xkcd standards problem: incompatible ecosystems, walled gardens, proprietary protocols, and an overall sensation that companies are prioritizing their own profit margins over trying to create a sustainable market.

Speaking of sustainable, why the hell does everything have to talk to the cloud? There is very little computing power involved in turning on a relay, so why do I have to tell Tuya or Google Nest or eWeLink or whoever that I’m turning on my heating, or my room lamps? What will happen when, not if, these companies decide to retire these services? Are we all doomed to be left with useless pumpkins the second this market stops making money?

If you think I’m exaggerating, this has already happened before. Samsung went all in with an entire smart home ecosystem called SmartThings, which claimed, like all of them do, to be your one-stop-shop for all of your home automation needs. This was scaled back after they realized how much of a pain in the ass all of this is to maintain, breaking compatibility with many devices. I still have a v1 SmartThings Zigbee/Z-Wave hub that I cannot use because it’s not supported anymore: the hardware is perfectly fine, but that’s what Samsung decided, so we’re all fucked.

Even well-intentioned endeavors to standardize, like Matter, Thread, and Zigbee, have all become their own little niches because none of them are actual all-in-one solutions; they are just puzzle pieces: transport protocols, physical networking standards, computing services, whatever. They all have to talk to each other to work, and that is usually left to the end user.

Home Assistant

In comes Home Assistant: an open source project that aims to put an end to this madness. It’s a whole operating system built on Linux that acts as a truly complete smart home hub, one that will communicate with everything. By creating a standard interface through which all of your devices talk to HA, you essentially get a single interface for all of your smart-home stuff.

We could spend days talking about all of the ins-and-outs of Home Assistant, but this is the gist of it:

  • Home Assistant runs on a device with access to your local network: it can be a Raspberry Pi, a virtual machine, or even the custom hardware solutions Home Assistant has to offer.
  • Each ecosystem connects to HA via an integration: these blocks connect to APIs or local devices in order to talk to a specific service. There are lots of officially supported ones, as well as some very good community implementations.
  • Integrations usually generate devices, which in turn generate entities. An entity is the basic building block of Home Assistant: every sensor, switch, light, or whatever you want is represented as an entity. Entity data is stored in a local database and can be shown on custom dashboards.
  • Integrations also generate actions, which involve executing procedures on command, like refreshing data, actuating a switch, turning on a lamp, etc. There are standard actions for the more common entity types, but there are endless possibilities.
  • You can also create helpers: custom entities that further process information from your other entities. Since helpers are entities themselves, they can be used anywhere a regular entity can.
  • Entities can be analyzed and processed using automations, which run instructions based on the state of different entities. These instructions are actions executed on devices or entities.
My currently installed integrations.

Overall, it’s a fairly simple system, but it’s also highly scalable: you can make this as complicated as you want, as long as you follow these basic rules. Do you want to turn on lights with the sun? Make an automation that triggers on the sun entity at sunrise and sunset and actuates the light switch. Do you want an automated sprinkler system? Sure, make helpers for all the parameters and have an automation switch the relays for the sprinkler valves in order. Do you want to fire a notification to your phone when a temperature sensor is under 19.2 degrees, but only during the evening and when there are no dishes in your dishwasher? Sure, but I don’t know why you would, although I’m not here to judge. As long as it has some sort of connectivity, chances are you can make it work with Home Assistant.
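That first example, lights following the sun, looks roughly like this in YAML. This is just a sketch: the entity ID is a placeholder, and the same automation can be built entirely from the UI.

```yaml
# Rough sketch of a sunset automation; the light entity ID is a placeholder.
automation:
  - alias: "Lights on at sunset"
    trigger:
      - platform: sun
        event: sunset
        offset: "-00:15:00"   # fire 15 minutes before actual sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.living_room
```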

A sample trigger setup for an automation.
A sample action setup for an automation.

Why isn’t this the norm?

Well, there are complications to this, mostly stemming from the fact that Home Assistant isn’t exactly maintenance-free: you need a device running the OS locally, which will require some tinkering. Also, as your instance becomes more complicated, your points of failure multiply, and while HA is overall fairly stable, it does throw the occasional tantrum.

There are also some companies that have locked down open access to their APIs in the name of “customer safety”. This is usually a measure to make their ecosystems even more of a walled garden, so I would recommend just avoiding these products: they are not in favor of right-to-repair, and I frankly have no sympathy for them. I do, however, recommend looking around for custom integrations that give increased functionality in case you’re stuck with such a device.

There are also many finicky steps to get some integrations to work: handing over login credentials is one thing, but that’s sometimes not enough; sometimes you need API keys, sometimes OAuth tokens, sometimes other stuff. These things are usually well-documented, but the settings are often buried under layers of menus and interfaces that feel like afterthoughts. The worst one for me is LocalTuya, a custom integration for Tuya devices that allows for, admittedly, a very useful increase in functionality over the official Tuya integration, but it requires many steps to get the API to work, and the entities have to be set up by hand, without much in the way of help. I only have a single device at the moment, but configuring 12 entities blind was an absolute nightmare, and my stomach turns a little when I think of adding more devices.

I also have some issues with using SQLite as the main database. Sure, it’s easy and fairly uninvolved to get working, but my Home Assistant database has corrupted itself one too many times, seemingly without me doing anything. I switched to my local MySQL server about a month ago and the database has been much more reliable since.
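For reference, pointing the recorder at an external database is a small change in configuration.yaml; the host, database name, and credentials below are placeholders.

```yaml
# Hypothetical example: send Home Assistant's recorder to an external MySQL/MariaDB server.
recorder:
  db_url: mysql://homeassistant:SOME_PASSWORD@192.168.1.10/homeassistant?charset=utf8mb4
```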

One of my custom dashboards for monitoring devices.

Making custom devices: ESPHome

If you’re weird like me (and let’s face it, if you’re reading this you already are), you probably want to plug everything into Home Assistant, even those things that weren’t really meant to be connected to a network. For me, it was a pool, but it can be anything that has sensors and/or actuators of some kind.

I have a pool with a solar thermal collector, a lamp, and a copper electrolytic cell. This means we have quite a few variables we would like to integrate into HA:

  • Temperature values for the input, panel return, and output pipes. This not only allows me to get the current pool temperature, but also to get rough estimates of panel power capture (see the sketch after this list).
  • Actuating relays for turning on the pool pump and lamp.
  • Controlling an H-bridge connected to the copper cell, allowing for control of voltage and polarity of the cell.
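That power estimate is exactly the kind of thing helpers are for. Here is a rough sketch of a template sensor that computes it, assuming the temperature entities exist and using a manually set input_number as a stand-in for the flow rate; every entity ID here is hypothetical.

```yaml
# Rough panel power estimate: flow (kg/s) x 4.186 kJ/(kg*K) x temperature rise across the panels = kW.
# All entity IDs are hypothetical; the flow rate comes from an input_number helper set by hand.
template:
  - sensor:
      - name: "Panel Power Estimate"
        unit_of_measurement: "kW"
        state: >
          {% set flow = states('input_number.panel_flow_kg_s') | float(0) %}
          {% set dt = states('sensor.panel_return_temp') | float(0)
                    - states('sensor.panel_input_temp') | float(0) %}
          {{ (flow * 4.186 * dt) | round(2) }}
```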

Obviously there isn’t an off-the-shelf device that can do exactly this, and while you could cobble it together from various devices, it would be a janky mess, so building a custom IoT device actually makes sense here.

In comes ESPHome: using the Espressif ESP32 or ESP8266 families of microcontrollers, Home Assistant can create custom devices using simple YAML config files and a huge library of supported components. Just connect the sensors and devices you want to the ESP, set everything up in the config file, and that’s it! You have your own IoT device, and a surprisingly flexible one at that: sensors can be filtered, automations can be configured directly on board the controller, and so much more. I plan on doing a detailed review of the ESPHome suite at another time, but suffice it to say that it allows you to make absolutely anything that can reasonably be connected to a microcontroller HA-capable.
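To give a flavor of it, a stripped-down config for something like my pool controller might look like this. Treat it as a sketch: the board, pins, and names are placeholders, the real file has more sensors plus the H-bridge outputs, and exact component names shift a bit between ESPHome releases.

```yaml
# Minimal ESPHome sketch: one DS18B20 temperature sensor and one relay exposed to Home Assistant.
# Board, GPIO pins, and names are placeholders.
esphome:
  name: pool-controller

esp32:
  board: esp32dev

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

api:    # native Home Assistant connection

dallas:
  - pin: GPIO4

sensor:
  - platform: dallas
    index: 0
    name: "Pool Inlet Temperature"
    filters:
      - sliding_window_moving_average:
          window_size: 5    # smooth out noisy readings

switch:
  - platform: gpio
    pin: GPIO16
    name: "Pool Pump Relay"
```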

Both of these devices are custom ESP32 implementations running complex tasks of data acquisition and actuation.

Why hasn’t anyone made this simpler?

It boggles my mind a little bit that no one has come up with a more hands-off approach to this. For most purposes, a hardened Home Assistant brick that can just sit in a corner and get online via a cloud subscription is enough for your usual tech enthusiast.

Nabu Casa, Home Assistant’s main developers, have already started offering plug-and-play hardware in the Home Assistant Green and Yellow, but in my opinion there are way too many growing pains in HA for it to be a truly fire-and-forget solution: there is still way too much tinkering involved if you want to do everything HA is capable of.

So I wonder, why hasn’t there been a push to standardize IoT ecosystem interactions between different brands? Why have pretty much all IoT brands forgone any sort of interoperability? Well, money is the reason, but I wonder if this is a losing strategy: what’s the point of having three thousand apps to turn on an A/C unit and a couple of lamps? How does this usability nightmare not impact sales of these systems?

While it probably does, the answer is that they don’t really care: they sell enough to be a revenue stream, and the extra work that actually making a sustainable product would require is just too much upfront cost to justify. In the end, the sustainable IoT ecosystems are passion projects: open-source and free software that challenges market incentives. There is an undercurrent of skepticism in my writing about tech, and it comes down to this: the market supports whatever is cheaper, not whatever is best, and there will be consequences if the tech sector keeps chasing cheap at all costs.

Closing thoughts

If you’re the tinkering type like me and you haven’t set up something like Home Assistant, please do! It’s genuinely useful and quite fun, but be prepared for a bit of head-banging: it doesn’t come for free. It’s now an essential part of my home, and it provides amazing data collection and intelligent operation, allowing for increased efficiency and automation, even if it came at the price of many hours of staring at YAML config files and corrupted logs.

So please, if you work in this sector, remember what has made the Internet work: open, flexible standards that work everywhere and with everything. If those principles can be applied to IoT, I am confident that IoT can become a mass-appeal product.


Business in the front, party in the back: optimizing desktop setups for multi-PC applications

I recently started a new job as a network engineer, and with it came my first work laptop: a fairly decent Lenovo ThinkPad T14. While I am a fan of portability and uncluttered workspaces, I much prefer to use external input devices along with a second monitor, especially at the desk where I usually work.

Luckily I do have all of these things: a nice keyboard, a big 4K monitor, and a very smooth trackball (come on, you’ve read my articles already, you know I’m a weirdo like that). They are, however, connected to my personal laptop, and I don’t have a big enough desk (or wallet, for that matter) to duplicate it all. Some sharing is in order.

In my desperation, I reorganized my desk with the help of a few gizmos, which let me quickly switch my input devices and monitor between laptops while keeping both systems independent, and in a way that doesn’t drive me absolutely crazy. This is how I did it.

Problems, half measures, and a sore back

My job is essentially entirely remote: I’m basically half front-end developer, half tech support; I answer emails, read and compose docs, stare at code, and participate in meetings. Since I didn’t have much experience with juggling two computers, I just plopped the new laptop next to my big rig and went to work like that for a couple of days. Immediately, many problems appeared:

  • My back was sore: laptops on desks usually have you looking down at the screen, and since it’s a small screen and a big desk, my back was certainly feeling it.
  • Laptop keyboards and trackpads are a pain in the long run: they are small, key travel is tiny, and they usually don’t have a numpad. The T14 certainly has one of the better keyboards on the market right now, but real key switches would be much better. The trackpad is certainly good (especially with its hardware buttons on top), but it’s fairly small and cramped (and don’t even get me started on the ThinkPad Nipple™). Also, raising your computer up to eye level makes both even harder to use.
  • Limited screen real estate: the screen is a 1920×1080 IPS 14-inch display, which is great, but it’s small: the scaling has to be big in order for text to be legible, and being accustomed to a dual-monitor setup just made it a pain overall.

Because my entire current setup works over a single USB-C port (more on that later), I just put my work laptop on my stand and used it like that for a while, but that quickly made it evident that swapping the cable between devices all day was going to be a messy and unproductive solution. What were my choices here?

Well, I could just use my personal laptop for work, but that is a recipe for disaster: mixing business and pleasure is in general a bad idea for privacy and security reasons, but there are also other security measures on my work device that would make it difficult, if not impossible, to get everything running as it should.

I then turned to the idea of using the work laptop as a pivot computer: it just sits in a corner chugging away and I just have to open a Remote Desktop Connection to it. RDP has a sophisticated feature set including device and clipboard sharing, bidirectional audio for calls, the works. This seemed like a great idea: I could share all of my devices from my personal computer to my work computer and everything would be sorted, right?

Not so fast. My work device runs Windows and you can go to Settings and enable RDP, but the real problem was Active Directory: all of my login data is on a company server to which I have no access, and the Remote Desktop server on my work laptop just refused to play ball with it: I got certificate issues, authentication issues, connection issues, and I just couldn’t get it to work. If this were Windows Server, I could probably massage the service enough to make it work, but it isn’t, and it’s probably for the better: if a remote desktop is compromised, you can cause catastrophic damage to everything you have access to, as the device doesn’t really distinguish between a remote session and a local one. So back to the drawing board it was.

I tried other solutions, but they all failed in one way or another. Switching inputs on my monitor? Doesn’t solve the input device problem. Other remote desktop tools like VNC or AnyDesk? Either they didn’t have device sharing or I had to pay subscriptions, along with having to install unauthorized software on my work laptop, a big no-no.

My only recourse was hardware: a dedicated device handles sharing and switching devices between computers, while the target computers are none the wiser. But how was I to implement this and have it play ball with my current setup?

My previous setup

My personal laptop is an HP Victus 16, sporting quite a few peripherals:

  • Logi G513 Carbon with MX Brown switches and custom keycaps. (over USB)
  • Kensington Trackball Expert wireless pointer device. (over USB)
  • Dell S2721QS 4K 27-inch monitor running at 1440p for better performance.
  • Behringer XENYX 302USB audio interface (over USB)
  • Gigabit Ethernet over USB from my dock.

This setup has my laptop screen as the main display with the monitor over to the side, and all devices connected via a USB-C dock. This lets me have everything hooked up with just two cables (the other one being the power supply, since this is a “gamer” model with high power consumption). I really like docks for their flexibility, and with USB-C native video transport and the high-speed USB 3 data link, I can switch from on-the-go to stationary and vice versa in mere seconds, all while having hidden cables and reduced clutter.

This is very much a tangent, but I’ve always found docks the coolest thing ever. Ever since I saw an OG IBM ThinkPad rocking Windows XP and a massive dock on my dad’s office desk back in like 2006, I’ve appreciated the massive advantages in convenience and portability. My first laptop had a massive dock connector on the side, and USB-C has finally given me the possibility of running power, video, and data over a single cable. If you have a laptop sitting semi-permanently on your desk, I highly recommend you get one. Sure, laptops are loud and underpowered compared to equivalent desktop PCs, but if you need portability, it doesn’t really get much better than this.

I’ve been using a Baseus Metal Gleam 6-in-1 USB-C dock: it has 3 USB 3.0 ports, USB-PD passthrough, an HDMI output, and a Gigabit Ethernet port. It’s enough for my needs and it’s also small, which meant I could mount it directly to the stand the laptop sits on top of.

Now I had to decide on a new layout: how exactly was I going to place two laptops and a monitor on my desk without losing all my space?

Introducing the KVM

With all of this in mind, these were my objectives:

  • The monitor will now become the primary screen, switching between devices as needed.
  • The keyboard and mouse must switch between laptops in sync with the screen.
  • I need to hear the audio of both computers simultaneously, although the main one would be the personal one.
  • Whatever device does the switching must have some sort of remote, in order to hide it under the desk for better cable management.

For my work computer, I just duplicated the setup I had for my personal computer: a laptop stand and another of those USB-C dock things. The audio situation was also simple, as the Behringer audio interface I’m using has a secondary stereo output called 2-TRACK. Using a simple USB sound card, a ground loop isolator (to prevent buzzing sounds), and some random 3.5mm-to-RCA audio cable, I had both devices in my headphones without issue.

For the screen and the USB devices, I needed a KVM switch: a clunky acronym standing for Keyboard-Video-Mouse, it’s exactly what it sounds like: you press a button, and your keyboard, mouse, and monitor are now connected to another machine. These are fairly niche devices mostly relegated to server racks and other specialized applications, but they can still be found for cheap in the power user electronics market.

I got a UGREEN CM664 HDMI-USB KVM switch from AliExpress for cheap, and despite its low price it has everything I need: HDMI video switching, USB 3.0 switching, and a cute little wired remote perfect for attaching to my keyboard. It’s also fairly small, only big enough to fit all the large connectors, and requires no software: it’s just an HDMI pass-through and a USB hub that can switch between hosts.

Not to get too deep into the weeds here, but this device physically disconnects all the interfaces during switching. This means devices have to be recognized and initialized again, a second screen must be instantiated, and all windows reordered, something that takes a couple of seconds in total. This is not a problem for me, but there are KVM switches that emulate the presence of the devices while another computer is active in order to make the transition almost seamless; that seemed a bit excessive for this application, especially given the considerable price hike.

Now it’s just a matter of hooking everything up, and we’re done, right?

A cable management nightmare

Well, not so fast. You may have noticed there are a lot of cables in the mix: tons of USB cables, network cables, audio cables, power bricks, the whole shebang. If not kept in check, this could quickly become a giant octopus of messy cables that eats up desk space and just flat out looks ugly.

My desk also has some storage underneath that must be able to slide out, so having cables dangling behind it is flat out not an option. To solve this I just used zip ties and a clever twist on the usual mounting clips. I really wish those plastic mounts with adhesive backing worked: I really like them, but having cables pulling permanently on a piece of double-sided tape just guarantees they’ll pop off at some point.

A better solution for me was a box of short washer-head screws: the wider head makes it easy to trap a zip tie under it while staying discreet, and it can hold a bunch of cables without pulling out. Granted, you’re putting holes in your furniture, but I have found time and again that it is a worthwhile sacrifice to get the cables to stay put for long periods of time. The screws are also reusable: just back them out a turn or two and the zip tie will come right out.

Once I got my enormous bundle of cables under control, it was time to test it out.

Performance and quirks

Overall, the whole thing works great: I can quickly switch between both laptops, sharing devices without an issue. I attached the remote to a corner of my laptop, which gives me a clean look and easy access to it. The switching is fairly quick, and all apps rearrange as soon as the second display is detected, which is very useful when returning to a computer after a switch. Also, having the laptop screen still showing is great for situational awareness when you’re working with both laptops at the same time. The entire setup uses slightly more space than it used to, but it’s a marginal difference in comparison to all of the advantages it has brought.

I thought having shared audio for both devices would be a bit of a mess, but surprisingly, no: hearing notifications from the other computer while playing music, or keeping a call going while switching computers, is extremely useful, and the expected overlap of sounds has turned out to not really be a problem.

The KVM switching process, with its rediscovery and rearrangement of devices and applications, takes a couple of seconds, but it isn’t really a problem, at least for my sensibilities. I do wish the KVM had some sort of optimization to reduce the lag on the USB devices, which I feel is slightly too long.

There is also the problem of sleep: you have to tweak your settings to prevent a computer from going to sleep while you’re still looking at it: since it’s very much possible that I’m not interacting with the device for a while, it’s not an unreasonable assumption on its part that it’s time to sleep, even if it isn’t.

Closing thoughts

Overall, this KVM solution has pretty much solved all my problems of parallel laptops: the devices are shared without a problem, and my desk has not been entirely consumed in the process. There are some quirks, but overall the device does exactly what it should.

I do feel, however, that it’s a very involved process: as work-from-home turns into a ubiquitous form of labor, I feel that a hardware solution that just does this for you, with some degree of customization, could be a real game changer for all of us in this situation. This is a thing that should be so much easier, but it just isn’t, and there aren’t many approaches on the market that don’t require this kind of tinkering; but if you are so inclined, you can make it work.

I just hope I never see the day when a third computer has to be integrated.


It’s Free Real Estate: DIY Solar Pool Heating System

More than five years ago, I set out to solve one of the biggest grievances with my home: I had a very very cold pool. Even in the summer it was unbearable to bathe in for more than a few minutes; we even considered filling it up with dirt.

Here’s how I fixed it on a tight budget, and what I learned doing it.

The issue

Our house came with a very nice 32.6 m³ freshwater pool; it was a big selling point and one of the main reasons we bought it. We imagined it would be great for the hot summers of central Chile, and a centerpiece of household social activities. It soon became clear that it would not be so.

That pool would, on the hottest of summer days, never really get past 21°C. Getting into that might be refreshing for a while, but it soon chilled you to your bones. Most sources on the Internet indicate that a reasonable temperature for a freshwater pool is at least 24°C, and those three degrees made a huge difference. Remember, 21° is the best case; in practice the actual temperature was quite a few degrees lower.

For one, the pool is lightly colored, and painting it anything short of pitch black wouldn’t have really made any difference, because there is a large tree that gives it shade most of the day. Fixing any of these problems was out of the question, as it would not pass aesthetic inspection (my mom). For a while, we even considered filling in the pool to get some extra garden space, but it always felt like a waste. The hunt was on, then: a new way of heating the pool was needed.

Choosing the right way

The first question was which energy source I was going to use: it had to be cheap both upfront and over time, and already available at my house. This basically meant (at least in principle) either gas or electricity. For gas-powered systems, you can install what is essentially an oversized boiler, while electric solutions involve resistive heating (like a hot water tank) or heat pumps. All of these systems quickly made no sense for my budget; both installation and running costs would have been massive, as energy is expensive here.

In comes solar heating. This boils down to circulating water through a black pipe placed in the sun; the pipe heats up and transfers its heat to the water. The advantages were clear: no energy costs and very basic infrastructure. Next to our pool filter lies our roofed driveway, which despite being on the south side (the worst side in the southern hemisphere) of a tall house, had enough space to clear its shadow for most of the day. This was the way to go.

Designing solar water heaters from scratch

You can buy ready-made solar pool heaters, which are essentially a flat panel of small tubes (about 8mm in diameter) that can be laid on a roof and piped to and from the pool filter, but these are expensive and usually hard to get if you’re not an installer (at least over here). Also, you read the title: you know where we’re going with this.

To make low-temperature solar thermal collectors, we need something that can withstand UV light, be somewhat flexible for ease of installation, and, ideally, be black: in comes polyethylene pipe, a flexible-ish black pipe meant for irrigation. Smaller pipe gives you more surface area per unit of water volume, so the smallest size easily available, half-inch, was used.

Then came the question of area: how much roof space do you need to fill with panels to get good temperatures? My reflex answer is as much as you can, but there are some difficulties with this approach:

  • The more panels you put up, the bigger the pump you will need to push water through them, and the higher the operating pressure you will need.
  • Water is heavy and your panels will be full of it; be careful how much weight you place on your roof.
  • For this application having panels in the shade is not really harmful, but it will be wasted space and pressure; try to put only as many panels as you actually need.

Figuring out how many panels you need to heat up a pool is rather difficult: you will most certainly end up partially eyeballing it. However, there are some important facts you need to consider:

  • How big your pool is and how much of a temperature difference you actually want.
  • The angle of your roof and the direction it faces.
  • The height of your roof and the power of your pump, as these will dictate your flow rate.

For us, what made sense was around 500m of total poly pipe exposed to the sun; we also had a roof that was readily accessible right next to the pool pump. That number is somewhat arbitrary and has more to do with how we went about building it, but it ended up working out in the end.
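For the curious, the back-of-the-envelope version of that eyeballing goes something like this (assuming water at about 4.19 kJ/kg·K, roughly 20 mm outside diameter for the pipe, and peak sun of around 1 kW/m²; the capture efficiency is a plain guess):

```latex
% Energy needed to lift the whole pool by 3 degrees:
Q = m \, c_p \, \Delta T
  \approx 32{,}600\ \mathrm{kg} \times 4.19\ \mathrm{kJ/(kg \cdot K)} \times 3\ \mathrm{K}
  \approx 4.1 \times 10^{5}\ \mathrm{kJ} \approx 114\ \mathrm{kWh}

% Rough absorber area and capture rate for 500 m of pipe:
A \approx 500\ \mathrm{m} \times 0.02\ \mathrm{m} = 10\ \mathrm{m^2},
\qquad
P \approx 10\ \mathrm{m^2} \times 1\ \mathrm{kW/m^2} \times (0.3\text{--}0.5) \approx 3\text{--}5\ \mathrm{kW}
```

At a few kilowatts over a handful of sunny hours per day, it takes a few days to step the pool up by a few degrees, after which the panels mostly just have to cover the ongoing losses.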

Designing the panels

To make panels that would actually work, we set the following criteria:

  • The panels must be small and light enough to be lifted to the roof by a single person.
  • The panels must be able to be replaced if necessary.
  • The panels must be arranged in such a way as to have the smallest possible impact on flow rates and pressures.

Because we went with half-inch poly pipe, putting panels in parallel was pretty much mandatory, so we decided to make lots of small panels we could haul up onto the roof and then connect into an array. After some quick calculations we realized that a flat spiral a meter in diameter would hold roughly 50m of pipe, which meant we could build 10 lightweight spirals: the pipe would be tied with steel wire to a wooden cross every four turns, and after many, many hours of rolling, we had our panels.

Ten panels also turned out to be a bit of a magic number, as it meant that running 5 parallel sets of two panels would roughly match the cross-sectional area of the 50mm pipe coming to and from the pool, which meant pressure loss would not be that bad. The total internal volume of the panels was around 350L, which meant the waterline would recede by around a centimeter. This was the winning combination.

Connecting it to the pool filter

There are three key features regarding the connection to the pool: first, the water circulating through the panels must have already passed through the filter, so as to prevent blockages. Second, the user must be able to control not only whether the panels get water or not, but how much water gets up to them, to be able to control the temperature without sacrificing too much flow and pressure. Third, care must be taken to get the shortest runs of pipe possible; every fitting and every jump in height reduces flow and pressure.

With all of this in mind, and blessed with a roof just next to the pump house, the output of the filter was teed off in two places, with a ball valve installed in the middle: this would be our mixing valve, allowing us to mix cold water from the pool with warm water from the panels in order to control the temperature. The first tee in the chain was connected to the panel valve and then up to the panels, where five manifolds hook up the poly pipe spirals, with a matching set collecting the returns downstream of the panels. The return from the panels enters at the second tee, and then goes back to the pool.

There are some considerations here: a ball valve for mixing is not the most precise way of controlling temperature: something like a gate valve gives you more control, but they are a lot more expensive, and you can still adjust the temperature just fine with a little finesse on the valve handle. Also, when the pump turns off, a vacuum forms inside the panels, as the water drains down under gravity and nothing replaces it. For these panels, I found that the back pressure from the return lines was enough to break the vacuum and prevent an implosion, but for taller roofs I would recommend adding a vacuum breaker (essentially a valve that opens when the pressure inside the panels goes below atmospheric and lets air in), just in case.

And, well, that’s it! By opening the panel valve and slowly closing the mixing valve, water will start to go up the panels, and heat capture will commence.

Using the system in practice

Bernoulli’s equation of hydrostatics tells us that if we increase the height of a fluid, its pressure must go up. For us, this means that there is a minimum operating pressure at which the panels will actually get water; otherwise, a column of water will peacefully reside in your pipes without overtopping the highest point in your system. The same equation gives us the answer:

P_critical [Pa] = ρ [kg/m³] · g [m/s²] · h [m]

where P_critical is the minimum pressure you need, ρ is the density of the water, g is gravitational acceleration, and h is the difference in height between your pump and the tallest point of your panels. You can also kinda eyeball this: close the mixing valve bit by bit until you start hearing water coming back through the return pipe, and then back off until you can’t hear it anymore: that’s your minimum pressure.
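To put a made-up but representative number on it: if the highest point of the panels sits about 3 m above the pump, the minimum pressure works out to roughly:

```latex
P_{\text{critical}} = \rho \, g \, h
  \approx 1000\ \mathrm{kg/m^3} \times 9.81\ \mathrm{m/s^2} \times 3\ \mathrm{m}
  \approx 29{,}400\ \mathrm{Pa} \approx 0.3\ \mathrm{bar} \approx 4.3\ \mathrm{psi}
```

Whatever the filter’s pressure gauge reads above that (minus friction losses) is what is actually pushing water over the top of the panels.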

With this much pipe, passing the entire flow of water through the panels is somewhat unnecessary: there are diminishing returns once the water starts heating up, so for maximum performance of both the panels and your pump, you want to close the heating valve just enough that the panels don’t get above ambient temperature (so you don’t lose heat). If the pool gets too hot, then losing heat is what you need, so just open the mixing valve a little bit more and in a day or two you will have a cooler pool.

Unforeseen circumstances

Great, your pool is now warm! Unfortunately, this is not without consequences. For one, warmer pools lose water to evaporation a lot faster than cooler ones, so expect to fill it up more often, and be mindful of your water bill. Also, warmer pools are much more attractive to algae, which grow a lot faster in these waters: maintaining good chlorine levels, incorporating some sort of electrolysis cell for adding copper ions, and cleaning the pool regularly are a must, unless green and turbid water is what you want.

After much experimentation, I have found the winning combination: one and a half tablets of TCCA per week, copper ions added via the electrolytic cell, and weekly vacuuming and sweeping are enough to keep algae at bay. Remember that the actual quantities will depend on your temperatures and the volume of the pool.

Although it’s not my case, if you happen to live in a place where freezing temperatures are common, it’s very important that the panels are drained during the winter season: usually popping open the cap and the drain plug on the filter for a couple of hours is enough; otherwise, prepare for burst pipes and cracked joints. In the same vein, remember to paint your PVC pipes every so often: UV light is not nice to polymers, so try to avoid exposure if possible.

On a more humorous note: my panels usually drain almost completely overnight, which means every morning the pump pushes all of the air out of the pipes, resulting in a very unique noise: my pool is farting!

A five-year retrospective: closing thoughts

This project turned out to be a huge success, not only for my household, but because it taught me many useful skills, both in building it and in designing it: the art of the educated guess cannot be overstated, and sometimes the only thing you need to succeed is a ballpark back-of-the-envelope calculation. By applying some high school physics and a bit of blood, sweat, and tears, we ended up with a pool that regularly hits 28°C and beyond, and it became a centerpiece of our beautiful garden. If you want to get into some low-stakes plumbing, the low pressures and big pipes are a great way to get started, and even a large pool can be done relatively cheaply, definitely more so than hiring someone to do it. Best of all, you’ll be doing it in an environmentally friendly way.


Building a lab-grade power supply from old computer parts

A bench power supply is a fundamental tool for testing electronics, allowing for flexible power delivery to the range of different devices that could make their way to your bench. As electronics have become ubiquitous, DC power supplies have become easy to find, and building capable devices from scrap electronics is a very budget-friendly way to expand the capabilities of your setup.

I’m not beating around the bush: this isn’t how to make a fully-featured power supply for cheap. It’s a hacky, cobbled-together device that could be so much more powerful, but I just don’t want it to be: it’s just so I can charge batteries, power up junk to see if it works, and get some voltages out into the world when I’m too lazy to go get a power brick. It’s ugly and profoundly utilitarian, but it works.

I’ve got a ton of ATX power supplies, and you probably do too

I’m willing to bet that when IBM launched the PC AT in 1984, they didn’t expect that its overall layout and design would become the de facto standard for computers, especially forty years later. One would be forgiven for questioning how we came to this predicament: there are many things to hate about the AT standard: the card risers are barely adequate for holding modern GPUs, the power supplies are way too bulky and have a rat’s nest of wires that you may not need, the connectors suck, and so, so much more. However, it is what stuck, so we’re stuck with it too.

This means that pretty much every desktop computer with a tower form factor has an ATX-compatible power supply (AT eXtended, basically a beefed-up AT standard with slightly less crap for more modern applications), and pretty much everything is more or less in the same place inside the chassis, which makes it great for finding parts that more or less all work with each other.

If you’ve ever disassembled a desktop computer (and let’s face it, if you’re reading this you probably have), you probably ended up throwing the PSU into a pile of them that you look at every so often thinking “I should probably do something with them”; well, here we are.

Contemporary power supplies usually have a few components in common:

  • A 24-pin motherboard connector. (+3.3V, +5V, +12V, -12V, 5Vsb)
  • A 4 or 8-pin processor connector. (+12V)
  • A PCIe power connector, either 6-8 pins, with higher power models having multiple connectors. (+12V)
  • Accessory connectors, usually SATA and/or Molex connectors, for stuff like storage drives, optical drives, fans, etc. (+5V, +12V)

These devices are extraordinarily dumb: while the motherboard does have some control over their operation, the protocol is extremely simple. A +5V standby signal powers the control circuitry, the supply is turned on by pulling the PWR_ON line to ground, and the motherboard is notified that the PSU is ready to go when the PG line is pulled to +5V. That’s it. The wide array of voltages and simple communications make these supplies an exceptional way of powering almost everything. Almost.

Most bench power supplies are adjustable, having both voltage and current control over a wide range of supply conditions, which is very handy for getting that pesky device that uses a weird voltage to power up, or for running tests under different conditions. There could be ways of modifying the feedback circuitry of the switchmode power supply inside, but I’m not knowledgeable enough in electronics to know how to do so, and from what I’ve seen, it might not even be possible.

Some jellybean parts from AliExpress, a box, and some soldering later

With all these factors taken into account, the requirements are as follows:

  • I want to use an ATX power supply from an old computer.
  • I want all the voltages from the ATX standard available for use.
  • I want an adjustable regulator that can do both Buck and Boost, so I can get a wide range of voltages.
  • The regulator must have both constant voltage (CV) and constant current (CC) capabilities.
  • Having two regulators would be nice.
  • The power supply must be at least 150W total.

From my pile of scrap I fished out an FSP 250-60HEN 250W ATX power supply. It’s fairly old, but it has a couple of features I like:

  • It has a big fan on the top, which makes it quieter.
  • It has two 12V rails: one for the processor connector, another for everything else.
  • The wire gauges are all fairly similar, which makes it easier to bundle everything afterwards.

With this, I cut off all the connectors and separated the rails: orange is +3.3V, red is +5V, yellow is +12V, the lonely blue wire is -12V, and black is ground. All the status cables (green for power on, gray for power good, purple for +5Vsb, and a ground to make it all work) were separated by color and soldered to ring terminals for connecting to banana plugs on the front. The +12V rail from the processor connector was also kept apart. Some cheap binding post/banana plug combos from AliExpress and a heinous 3D print job that peeled off the print bed halfway through later, and I had some voltages to work with. The power-on signal went to a toggle switch that connects it to ground (this is my main power switch), the 5Vsb line went to an indicator to show the device has AC power, and the power-good line lights up another indicator to show the device is ready to be used.

For the regulators, I went for some nifty panel-mount regulators I found on AliExpress for cheap: they can handle a decent amount of power, they have a usable interface, and they have an extensive range: they can do 0-36V at 0-5A, and all from the second +12V rail. Pretty cool! Add some banana plug cables, alligator clips, some other accessories, a couple of zip ties, aluminum tape, and some swearing, and we have a supply!

Ups and downs

I’m not going to sugarcoat this: it is a quick and dirty project. The thing is ugly, it looks like it’s going to kill you, and it very much gives off a “rough around the edges” vibe, but it works exactly as I had hoped: the regulators work great, the fixed voltages are no problem, and all the control devices work as they should. There are a few things worth noting though:

  • The regulators have an interesting way of performing constant-current duties: instead of some sort of control loop keeping the current stable at the desired value, the device just shoves out whatever voltage you gave it and then observes; it changes the output voltage to give a current lower than your target and then measures again, reaching your desired current in steps. This perturb-and-observe model is very much useful for steady-state applications, but if you have sensitive electronics like LEDs or integrated circuits, be mindful to set your voltage to a safe level before activating CC mode; failing to do so could result in an unsafe voltage at your terminals.
  • The measurements from the regulators are accurate, but not perfect; if you need precision, use a multimeter and short leads.
  • The fixed outputs have no onboard measurement other than what is needed for protection, so be careful about shorting these out.
  • I messed up the settings on my print and it came out really deformed. If I weren’t lazy, I’d redo it with better adhesion to the bed, but I am. Nothing that some tape won’t solve. I might change it later, but the thought of undoing all the binding posts makes me queasy.

Overall, it’s like having a cheap AliExpress bench supply for about a quarter of the price. Pretty good deal, I’d say.

The tools you have are better than the tools you don’t

I’ve been working with this for about a month now, and I wonder how I made it this far without a bench power supply. Building my own tools gives me tons of satisfaction, and I hope to keep using and improving this device in the future. Sometimes the tool you can build with what you have is the best tool you can possibly get, and it will probably take you farther than waiting for the shiniest gadget.

So yeah, if you have a pile of junk computer parts, build a power supply! You’ll get lots of mileage from it and it will open lots of doors in your electronics adventures, not to mention the money it’ll save you.

Get building!


Dirt-Cheap Livestreaming: How to do professional quality streaming on a budget

A couple of years ago I wrote an article on how I cobbled together livestreaming hardware at the very beginning of the pandemic. Finding AV equipment was very difficult, so I did what I could with what I had. Almost three years have passed since then, and in the meantime I built a multi-camera, simulcasting-capable, live-event-oriented livestreaming solution on a shoestring budget. Many compromises were made, many frustrations were had, but it worked.

This is how I built it and made it work.

Why bother changing it?

After my initial stint doing a couple of livestreams for small events, the requests kept popping up. For simple one-camera setups my equipment would do, but it quickly started falling short, and as venues slowly started filling back up with people, I just couldn’t rely on building the event around the camera. I had to find a way to do this without being intrusive, and to be able to fulfill my clients’ needs.

Lessons from the previous setup

The setup I described in the previous writeup was not free of complications. On the video side, I was limited by a single HDMI input, and any switching solutions I had were too rough for my geriatric capture card; it would take way too long to reacquire a picture. On the audio side, my trusty USB interface was good, but way too finicky and unreliable (the drivers were crap, the knobs were crap, and while I got it for free it just wasn’t worth the hassle) for me to be comfortable using it for actual work. I also had a single camera, very short cable runs, no support for mobile cameras, and an overall jankiness that just would not cut it for bigger events.

A new centerpiece: ATEM Mini

My first gripe was my capture card. I was using an Elgato Game Capture HD, a capture device I bought back in 2014 which could barely do 1080p30. I still like it for its non-HD inputs (which I have extensively modified; a story for another time), but the single HDMI input, the long acquisition times, and the near three-second delay in the video stream made it super janky to use in practice.

After a month and a half on a waiting list, I managed to get my hands on a Blackmagic ATEM Mini, the basic model, and it changed everything: it has four HDMI inputs, an HDMI mix output, a USB-C interface, and two stereo audio inputs, along with fully fledged live video and audio processing: transitions between sources, automatic audio switching, picture-in-picture, chroma and luma keying, still image display, audio dynamics and EQ control, and so much more. Its rugged buttons and companion app make operating the ATEM Mini an absolute breeze, and its extensive functionality and integration make it the closest thing to an all-in-one solution. Many things that I used to do with lots of devices and maximum jankiness were consolidated into this one device. Anyone who is getting into livestreaming should get one of these.

Dejankifying my audio

Having good audio is almost more important than having good video; a stream with mediocre video but good audio is serviceable, while one with good video and bad audio is almost unbearable. Because most events I work on have live audiences on-site, there is no need for me to handle microphones or other audio equipment directly: most mixers have a secondary output I can tap into to get a stereo mix that I can pipe into the ATEM Mini. Line-level unbalanced signals I can connect straight into the ATEM, and if I need something more involved, like multiple audio sources, preamplification, or any sort of basic analog processing, I keep a small Phonic AM440D mixer in my equipment case, which gives me endless flexibility for audio inputs.

One of the advantages of using common hardware for audio and video is that both streams are synchronized by default, which removes the need to delay the audio stream at all and once again reduces the complexity of setting up livestreams in the field.

New cameras and solving the HDMI distance limitation

For a while, a single Panasonic HDC-TM700 was my only video source, with an additional Sony camera on loan for some events. This was one of my biggest limitations, which I set out to fix.

Most semi-pro/pro cameras are way too expensive for my needs: even standard consumer cameras are way out of my budget, and a single camera like the one I already have would cost a couple of months’ worth of revenue, which, given that I’m still at university, I couldn’t ramp up. There are ways out, though.

For one, I thought about USB webcams. There are some good ones on the market right now that are more than enough for livestreaming, but they are very much on the expensive side and I have never liked them for something like this: poor performance in low light; small, low-quality lenses with fixed apertures; and low bitrates are just a few of my gripes. Also, I now had a better capture device that could take advantage of HDMI cameras. So I looked around AliExpress, and found exactly what I was looking for: process cameras.

A process camera is essentially a security camera with an HDMI output. They have no screen, fixed (although decent quality and reasonably large) lenses, and a perfectly usable dynamic range. Since they have no screen or autofocus capabilities they are best used for fixed shots, but most of my streams rarely require movement (and for that I have the other camera). Best of all: they were very cheap, at around $100 apiece if you include a tripod.

Now, we need to talk about HDMI. It’s a perfectly good standard for home use, but it has some problems in this use case (which we can forgive; this is very much an edge case), the biggest one being maximum distance. HDMI rarely works above 10m, and even 5m is challenging without active cables and devices that can actually drive cables that long. There are optical cables which can take it over the 10m mark, but these are expensive, bulky, and stiff, which complicates using them in events where they could end up in the way. The solution is somewhat counter-intuitive: just don’t use HDMI cables. But aren’t HDMI signals exactly what we need to carry? Yes!

See, just because we’re using HDMI signals doesn’t mean we need to adhere strictly to the electrical specification, as long as we can get the message across by converting it to a physical medium better suited for long distances. There are many ways of doing this: some use coaxial cable and HD-SDI, others use simple fiber optic patch cables, but I went for old twisted-pair Cat5e. It’s cheap, it’s available, and there are ready-made converters with an HDMI connector on one side and an 8P8C plug on the other. Add a 3D-printed bracket for mounting them on the side of the camera and some small HDMI patch cables, and we’re off. With these converters I can get 25m runs no problem, and even 75m in extreme cases, which is enough for most venues.

This was not the only use for a 3D printer: I made custom power bars which hang from a tripod’s center hook, for powering cameras and converters.

Better networking and server equipment

In my previous article I used a Raspberry Pi 3B+ to run an NGINX server with the appropriate RTMP module and some extra software to make a simulcasting server, where a single stream can feed multiple endpoints. This worked great, but Raspberry Pis are a bit anemic and I wanted something with a bit more oomph in case I wanted to do more with it. The idea of a portable server is useful to me beyond streaming, so I grabbed a 2011 Mac Mini on Facebook Marketplace, swapped the hard drive for an SSD, and off I went. The additional RAM (4GB instead of just 1GB) allows me to have more services set up without worrying about resources, and the beefier Intel processor gives me more freedom to run concurrent tasks. There is even some QSV work I could do to use hardware encoding and decoding, but that’s a story for another time.

I also ditched my 16-port rackmount switch in exchange for a cheap Netgear WNDR3400v2 wireless router, which gives me a nice hotspot for connecting my phone or in case someone else needs it; the new router is much lighter too.

A portable camera jank-o-rama

For a couple of scenarios, I really needed a portable camera that was fully untethered; maybe for showcasing something, or to keep an eye on the action while on the move. There are some wireless HDMI solutions, but it always felt like losing a good camera for an entire shoot (I usually run a one-man operation, so it was pretty much always a short run for the portable camera), and the cost argument kept popping up.

The way I solved it is to me as janky as it is genius: just use your phone. Most modern phones have excellent cameras, decent audio, and even optical stabilization. I used Larix, a streaming app, to broadcast the phone’s camera over WiFi (see why I needed a wireless router?) and pick it up in OBS. Unreliable? A little bit. Has it ever mattered? Not really; this capability is more of a novelty and a fun thing to add to my repertoire, not meant as a centerpiece. I have even toyed with a GoPro Hero 7 Black streaming to my RTMP server and picking it up from there, which works, albeit with lots of lag. It’s a bit of a pain to not have these feeds on my ATEM switchboard and having to switch them in OBS, but, you know, it’ll do.

Miscellaneous

Until now I carried everything in a duffel bag, which just wasn’t going to work anymore: the weight killed my back anytime I went near the thing and there just wasn’t enough space. I needed something like the big wooden cases that the pro audio industry uses, without breaking the bank, so I just took an old hard-side suitcase and crammed everything in it. It’s big enough to house most of my stuff but not so big as to be bulky, and it allows me to keep everything tidy without wasting space.

Because my new cameras don’t have a screen, setting up the shot and focusing can be a challenge. I usually resorted to using my second monitor to do so, but it was always janky and time-consuming. To solve this, I bought a CCTV camera tester with an HDMI input. This is essentially a small monitor with a battery, for way less than a professional field monitor.

I needed lots of cables, some of them really long. I ended up buying rolls of power and Cat5e cable and making them myself. My standard kit includes four 25m Cat5e runs and a 75m one in case the network jack is far away, plus three 20m extension cords so I can place the cameras wherever I want. This is not counting the three power bars for the cameras and a fourth one for my computer.

So what comes next?

To be absolutely honest, I think this is as far as this setup goes. Livestreaming jobs have dried up now that the pandemic has quieted down, and pursuing more stable ventures would require lots of investment, which I’m not really in a position to make. I found a niche during the pandemic, and I milked it as much as I could; the equipment has paid for itself two or three times over, so I’m not complaining. But until I find the time to do that YouTube channel I’ve always wanted to do, I don’t think this setup is going to see the light of day for a while.

Closing thoughts

I’ve had some tremendous fun building up this setup, and for my uses it has proven itself time and again as dependable, if basic. Many of the lessons learned here are very much applicable to other streaming opportunities, and who knows, maybe you’ll get some ideas to get creative with this medium yourself.


Building a better Elgato Game Capture HD

Back in 2015 I got myself a brand new Elgato Game Capture HD. At the time, it was one of the best capture cards on the consumer market; it has HDMI passthrough, standard-definition inputs with very reasonable analog-to-digital converters, and decent enough support for a range of different setups.

Despite its age, I still find it very handy, especially for non-HDMI inputs, but the original design is saddled with flaws which prevent it from taking advantage of its entire potential. This is how I built a better one.

Using this card in the field

After a few months of using it to capture PS3 footage and even building some crude streaming setups for small events using a camera with a clean HDMI output, two very big flaws became apparent. First, the plastic case's hermetic design and lack of any thermal management made it run really hot, which after prolonged operation resulted in dropouts that sometimes required disconnecting and reconnecting the device and/or its inputs. Second, the SD inputs are very frustrating: the connectors are non-standard, and the provided dongles are iffy and don't even let you take full advantage of the card's capabilities without tracking down some long discontinued accessories.

My first modification to it was rather crude: after it failed on a livestream, I took a Dremel to it and cut a couple of ventilation holes, coupled with an old PC fan running off USB power (the undervolted fan provided enough cooling without being deafening). This worked, but it introduced more problems: the card now made noise, which could be picked up by microphones, and it had a big gaping hole with rotating blades just waiting to snatch a fingernail. This wouldn't do.

Solving thermal issues

It quickly became clear that the original case for the Elgato Game Capture HD was a thermal design nightmare: it provided no passive cooling whatsoever, neither heatsinks nor vents. The outer case design was sleek, but it sacrificed reliability along the way.

This device is packed with chips, all providing different functions: HDMI receivers and transmitters, ADCs, RAM, and plenty of glue logic, which meant power consumption was always going to be high. A custom LSI solution or even an FPGA could have been better in terms of power consumption, but that is often far more expensive. Among all of the ICs, one stood out in terms of heat generation: a Fujitsu MB86H58 H.264 Full HD transcoder. This chip does all the legwork of picking up a video stream, packaging it into a compressed stream, and piping it through a USB 2.0 connection. It's pretty advanced stuff for the time, and the datasheet even boasts about its low power consumption. I don't know exactly why it runs so hot, but it does, and past a certain threshold it struggles and stutters to keep a video signal moving.

There was nothing worth saving in the original enclosure, so I whipped up a new one in Fusion 360 with plenty of ventilation holes and enough space above the chip to add a chipset heatsink from an old motherboard. I stuck it down with double-sided tape, which is not particularly thermally conductive, but along with the improved ventilation it is enough to keep the chip from frying itself into oblivion. I ran another protracted test: none of the chips got hot enough to raise suspicion, and even after three hours of continuous video the image was still coming through cleanly. I initially thought other chips might need heatsinks too, but it appears the heat from this transcoder was what pushed it over the edge; with it dealt with, the other ICs barely got warm.

Since we made a new enclosure, let’s do something about that SD input.

Redesigning the SD video inputs

This card hosts a very healthy non-HDMI feature set: it supports composite video, S-Video, and Y/Pb/Pr component video, along with stereo audio. The signal is clean and the deinterlacing is perfectly serviceable, which makes it a good candidate for recording old gaming consoles and old analog media like VHS or Video8/Hi8. However, Elgato condensed all of these signals into a single non-standard pseudo-miniDIN plug, which mated with the included dongles. Along with a PlayStation AV MULTI connector, the card came with a component breakout dongle that allowed any source to be used; with the included instructions you could even get composite video in this way. S-Video, however, was much more of a pain: while it was possible to connect an S-Video signal straight into the plug, it left you without audio, and the official solution was to purchase an additional dongle which, of course, no one stocked by the time I got the card.

To solve it, I started by simply desoldering the connector off the board. I saw some tutorials on how to modify S-Video plugs for the 7-pin weirdness of the Elgato, and even considered placing a special order for them, but in the end I realized that it was moot. The dongles sat very loosely on the connector, and any expansion I wished to make on it was going to be limited by that connector, so I just removed it.

To the now-exposed pads, I soldered an array of panel-mount RCA and S-Video connectors pulled out of an old projector, so I could use whichever standard I pleased: three jacks for Y/Pb/Pr component video, one for S-Video, one for composite video, and two for stereo audio, complete with their proper colors too. The SD input combines the different standards into a single three-wire bus: Pb (component blue) doubles as S-Video chroma (C), Pr (component red) doubles as composite video, and Y (component green) doubles as S-Video luma (Y), so the new connectors are electrically tied to each other. Still, I much prefer this to having to remember which one is which, or keeping track of adapters for S-Video (which I use a lot for old camcorders).

Final assembly and finished product

After printing the new enclosure I slotted in the board (it was designed as a press fit with the case, to avoid additional fasteners) and soldered the new plugs to the bare pads of the connector using thin wire from an old IDE cable. The connectors were attached to the case with small screws, and the design puts all of them on the bottom side of the case, which means no loose wires. The top stays in place with small pieces of double-sided tape and some locating pins, which makes disassembly easy, great for future work or just showing off.

I wish this was the product I received from Elgato. It allows the hardware to work to its true potential, and it makes it infinitely more useful in daily usage. No more faffing around with dongles, no more moving parts, or dropouts on a hot day. It feels like this was what the engineers at Elgato envisioned when they came out with this thing. The Elgato Game Capture HD is now my main non-HD capture device and even for HDMI stuff it still gets some usage, when I can’t be bothered to set up the ATEM switcher.

Finishing thoughts

I love the Elgato Game Capture HD, both for what it is capable of doing and for what it did for the nascent streaming and video creation scene back in its day. I love its feature set and I'm even fond of its quirks, but with this mod I feel like I have its true potential available without compromises. It went from a thing I kind of knew how to use that lived at the bottom of a drawer to a proven and reliable piece of equipment. If you have one of these devices and feel unsatisfied with its performance, I urge you to give this a try; you will no doubt notice the difference, and maybe you'll keep it from going into the bin.

Categories
Tech Explorations

A server at home: childhood fantasy or genuinely useful?

Ever since I was a child, I dreamed of having the sort of high-speed, low-drag, enterprise-grade equipment the pros use in my home network. For me it was like getting the best toy in the store, or having the meanest roller shoes in class (a reference which I guess dates me). It was as if getting these devices would open my world (and my internet connection) to something known only to IT professionals and system administrators; some hidden knowledge (or class cred, maybe) that would bring my computing experience to the next level.

Anyone who has ever worked in IT knows the reality of my dreams: while the sense of wonder is not entirely erased, the reality of maintaining such systems is at best dull. But, you know, I'm weird like that. If I weren't, you wouldn't be here.

Almost a decade ago, I embarked on the journey of building and maintaining a home server. It has solved many of my problems, challenged me to solve new ones, and taught me immensely about the nuances of running networks, maintaining machines, building solutions, and creating new stuff with them. This is my love letter to the home server.

This article is somewhat different from most of my other content; it's more of a story than a tutorial. It's not really meant to be a guide on building or maintaining servers at home; it's more about understanding the rationale behind having one.

Baby steps

As the designated “IT guy” in my friend group, I often found myself helping friends and family with their computing needs: installing software, customizing their computers, teaching basic programs like Office, that sort of thing. We also gamed some, whatever we could pirate and get to run on crappy laptops. Our friend group was big into Minecraft at the time, as most kids were, and we loved to show off our worlds, approaches, and exploits to each other, sharing our experiences in the game. One day, the inevitable question came: what if we made a Minecraft server for all of us to play on?

The writing was on the wall, so I set out to make one. At the time I was rocking the family computer, a respectable 21.5″ 2013 iMac. It was beefy enough to run the server and a client at the same time, and paired with LogMeIn's Hamachi (which I hated, but I didn't know any better), a highlight of my childhood was born. It barely worked, many ticks were skipped and many crashes were had, but it was enough for me and my group of friends to bond over.

Around the same time my parents bought a NAS, an Iomega StorCenter sporting a pair of striped 500GB hard drives for a whopping 1TB of total storage. Today that sounds quaint, but at the time it was a huge amount of space. For many years we kept the family photos, our music library, and limited backups on it. It opened my eyes to the possibility of networked storage, and after an experiment with a USB printer and the NAS's onboard ports, I even toyed with offering basic services. An idea was forming in my head, but I was just starting high school and there was pretty much no budget to go around, so I kept working around the limitations.

That worked, up to a point. Within a couple of months, both the RAID array in the Iomega NAS and my iMac's hard drive failed, with no backups to recover from. It was my first experience with real data loss, and many memories were wiped forever, including that very first Minecraft server world. Most of our family stuff was backed up, but not my files. It sucked. It was time for something new.

Building reliable servers from scrap

I was still in high school; there was some money to go around, but nowhere near enough for me to get a second computer to keep online forever. So I went looking for scraps, picking up whatever people were willing to give me and building whatever I could with it. My first experiments were carried out on a geriatric early-2000s AMD Athlon from a dump behind my school, which wasn't really up to the task of doing anything, but it taught me valuable lessons about building computers and what that entailed. My first real breakthrough came around 2015, when I got my hands on a five-year-old, underpowered Core i3 tower with 4GB of RAM sitting in a dumpster outside an office block near my house. After a clean install of Windows and some minor cleaning I had, at last, a second computer I could use as a server.

I didn't know much about servers at the time, which meant that my first attempt was basically an extension of what I'd seen before: using SMB to share a 1TB drive I'd added to it by removing the optical drive and rescuing a hard drive from a dead first-gen Apple TV. I added a VPN (ZeroTier One, it's like Hamachi but good), printer sharing, and VNC access, and pretty soon I was running a decent NAS.

A few months later I added a second 1TB drive (which involved modding the PSU for more SATA power connectors) and some extra software: qBittorrent's web interface for downloading and managing torrents from restricted networks (like my school's), automatic backups using FreeFileSync, and a few extra tidbits. I even managed to configure SMB so I could play PS2 games directly from it, using the console's NIC and some homebrew software.

This was my setup for around four years, and it did its job beautifully. Over time I added a Plex server to keep tabs on my media, and I even played around with Minecraft and Unturned servers to play with my friends. Around 2019, though, I was starting to hit a bottleneck. Using a hard drive as the boot media was dog slow, and I had run out of SATA ports for expanding my drive roster. I had been toying with the idea of RAID arrays for a while, especially after losing data to faulty drives. Mirroring was too expensive for me, so my method of choice was RAID 5: single parity distributed across all drives, tolerating a single drive failure. I just needed a machine capable of doing it. For a while I wondered about buying HBAs, tacking the drives onto the old hardware, and calling it a day. I ended up doing something completely different.

At last, something that looks like a server

In the end I decided that a better idea was to upgrade the motherboard, processor, power supply, and a few other things. I added a USB 3.0 card for external drive access, upgraded the processor from a Core i3 240 to a Core i5 650, and got a motherboard similar to the old one but with six SATA ports, four 2TB video surveillance drives for dirt cheap, and a beefier power supply to tie it all together. Around this time I also got a Gigabit Ethernet switch, which vastly increased throughput for backups. It was mostly used, bottom-of-the-barrel stuff, but it allowed me to create a RAID-5 array with 6TB of usable storage and gave me slightly more room to expand my activities. Lastly, I replaced the boot drive with a cheap SATA SSD.

With it came an actual Plex library, a deep-storage repository for old video project files, daily backups, a software library for repairs, MySQL for remote database work, and even a Home Assistant VM for home automation. I kept spinning up servers and running experiments on it. This second iteration lasted me around three more years, which was more than I expected from what was essentially decade-old hardware running in a dusty cupboard next to my desk.

Soon enough, however, new bottlenecks started appearing. I was getting more and more into video work, and I needed a server that could transcode video in at least real time. Most CPUs cannot do that even today, so I started looking into GPU acceleration. I was also starting to suffer with Windows: it works fine for beginners, and I even toyed with Windows Server for a while, but it's just way behind Linux distros for server work. It took up lots of resources doing essentially nothing, the server software I needed was clunky, and I missed the truly network-oriented logic of a UNIX-like OS.

Enterprise-grade hardware

Once again, I looked to the used market. Businesses replace hardware all the time, and it's not difficult to find amazing deals on used server gear. You're not getting the absolute latest and greatest on the market, but most devices are not really that old and are more than capable for most home uses.

From the second-generation server I could still salvage the USB 3.0 card, the power supply, the boot drive, and the four RAID drives. All of those were bought new and were in excellent condition, so there was no need to replace them. I wanted something that would last me at least the next five years, could accommodate all my existing hardware, and had plenty of room for expansion: PCIe slots for GPUs and other devices, proper mounting hardware for everything, and a case to keep it all neat.

I went for a tower server instead of a rackmount, mainly because I don't have a place for a rack in my home and the long-and-thin package of most racked servers made no sense in my case. After a bit of searching I came upon the HP ProLiant ML310e Gen8 v2: an ATX-like server with a normal power supply, four drive bays with caddies, an integrated LOM, and even an optical drive (which, given that my server now hosts the last optical drive in the house, was a must). It was perfect. I also managed to score an NVIDIA GTX 1060 6GB for cheap, which is more than a couple of generations behind at this point, but most importantly for me it has NVENC support, which means transcoding HD video at a few hundred FPS with ease.
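To give a rough idea of what NVENC brings to the table, an ffmpeg one-liner along these lines offloads the H.264 encode to the GPU; the file names are placeholders, and it assumes an ffmpeg build with NVENC enabled plus the NVIDIA drivers installed. On this class of card it comfortably outruns real time for 1080p material:

    # Re-encode a 1080p file on the GPU instead of the CPU (file names are made up for the example)
    ffmpeg -i sample-1080p.mkv -c:v h264_nvenc -preset fast -b:v 8M -c:a copy out.mp4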

Building the current server

My third-generation server was built around the aforementioned HP tower, but many modifications had to be made in order to achieve the desired functionality. After receiving it, I swapped the PSU and modified one of its accessory headers to mate with the server's proprietary drive backplane connector, so I could power the drives from the new power supply. Apart from increasing the maximum load from 350W to 500W, it also gave me the PCIe power connectors needed to drive my GPU, which the previous PSU lacked.

Then I installed the GPU and ran into my first problem: servers like these use only a couple of fans and a big plastic baffle to make sure the air reaches every component on the board. This is a great idea in the server world, since it reduces power consumption, decreases noise, and allows for better cooling, but it also interferes with the GPU: mine is not a server model, so it's taller than what a 2U chassis allows and the baffle cannot close. Not to worry though: a bit of Dremel work later I had a nice GPU-shaped hole, which I made as small as possible so as not to disturb the airflow too much. The GPU's shape worked in my favor too, as it redirects air perfectly onto the fanless CPU cooler.

Other than the four drive bays (which got the 2TB drives), there isn't much room for a boot drive, so I used the empty second optical drive bay to screw in the boot SSD. A lot of cable management and some testing later, I was ready to go.

For the OS, I went with Ubuntu Server 20.04 LTS. I'm very familiar with this family of Linux distros, so it made sense to use it here; my servers at work also run it, so I had some server experience with it as well. Once the OS was in, I installed the drivers for the LOM and the GPU, and rebuilt the RAID-5 array using mdadm. The previous generation used Microsoft Storage Spaces for the array, so it had to be rebuilt cleanly in Linux: after dumping the data onto some spare drives and creating the array on Linux, I was back in business.
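For reference, creating and persisting an array like this with mdadm on Ubuntu boils down to a handful of commands; the device names and mount point below are placeholders for illustration, not my exact layout:

    # Build a 4-disk RAID-5 array out of the data drives (device names are examples)
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Format and mount the new array
    sudo mkfs.ext4 /dev/md0
    sudo mount /dev/md0 /srv/storage
    # Save the array definition so it assembles automatically on boot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u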

For the software I installed Jellyfin (I was getting sick of the pay-to-win model of Plex and I wanted something with IPTV support), Samba for the shared folders, VirtualBox with the latest Home Assistant VM (don't @ me about Docker, that version is crap and the supervised install is a pain in the ass, I'm done with Docker for now), qBittorrent, MySQL, ZeroTier One, OctoPrint, and of course, a Minecraft server. I also installed a few quality-of-life extras like btop, and with that my server was complete, at least for now.
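In case it's useful, a basic Samba share for a setup like this is only a few lines in /etc/samba/smb.conf; the path and user below are made up for the example:

    [storage]
        path = /srv/storage
        browseable = yes
        read only = no
        valid users = benjamin

After that it's just a matter of adding the user with smbpasswd -a and restarting the smbd service.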

The realities of living with a server at home

Despite my childhood aspirations, enterprise-grade hardware has a major flaw compared to home equipment: noise. I can't blame the manufacturers too much; after all, these devices are made for production environments where no one really minds, or vast datacenters where noise is just part of the deal. I, on the other hand, like sleeping. The first time I turned the machine on I was greeted with the cacophonous roar of two high-RPM server fans. It dawned on me pretty quickly that this noise simply would not fly in my house, so I set about fixing it.

Unlike with desktop boards, the OS has no control over the fans; to the sensor packages in Linux, it's like they don't even exist. I did get some temperature readings, and there might be a way to address the fans via IPMI, but it just didn't work right for me. The job of handling them falls to the iLO, HP's name for a LOM: a tiny computer inside the server that allows for low-level remote management, including remote consoles and power cycling. The homelabbing community figured out years ago how to tune the fans down, and a bit of unofficial firmware later, mine calmed down to a reasonable 20% and I could sleep again. I took the opportunity to put a piece of tape over the POST buzzer, which had no right to be as loud as it was.

Closing thoughts

This has been a wild ride of old hardware, nasty hacks, ugly solutions, and wasted time, but in the end the outcome is so much more than what I ever envisioned: a machine that automates the boring bits of my daily routine, keeps backups and storage of all my stuff safe, and gives me access to all my services from wherever I happen to be. If you're a nerd like me and willing to spend some time faffing around in the Linux CLI, I highly recommend you build yourself a home server, no matter the budget, device, or services.

Categories
Tech Explorations

Fast Track C600: Faults and Fixes

A few years ago, one of my high school music teachers came to me with a deal that was too good to pass up. He had just replaced his audio interface and wanted to get rid of the old one, which was, of course, faulty. Having known each other for a while, he knew that I was into that sort of thing and had a decent chance of making it work. The device in question was an M-Audio Fast Track C600, a fantastic USB audio interface featuring four mic or line inputs with gain control, six balanced audio outputs, crystal-clear 24-bit/96kHz audio, low latency, and S/PDIF and MIDI I/O, along with many other tidbits and little details that make it a joy to use. It was way out of my price range, there was no way I could afford such a high-end device, and yet it was now mine, provided, of course, that I could make it work in the first place. Today, we'll delve into the adventure that was fixing it. Unfortunately, I didn't take any pictures of the process, so you'll have to take my word for it.

When I got home, I decided to plug it in and give it a shot, not expecting much. Instead of the usual greeting of flashing lights I was met with darkness: it was completely dead. My computer didn't detect anything either, so clearly there was a hardware issue lurking inside. After opening it up, I was greeted by a myriad of cables routing lines back and forth between the two printed circuit boards inside, which looked pristine. No charring, no blown capacitors, no components rattling around the case. The C600, or GOLDFINGER, as the PCB ominously shouted at me in all caps (what I can only assume was the internal project name), looked as neat and tidy as the day it left the factory. A bummer, it seemed: this wasn't going to be an easy fix.

A breakthrough

And so it sat on my desk, half disassembled, for months. For one, I was still learning the basics of electronics, so there wasn't much for me to do at that point. On the other hand, I was just getting into the world of digital audio, and my little Behringer Xenyx 302USB was more than enough for what I was playing with back then.

Then one day, I decided to remove the lower board entirely (this is the one that holds all the important electronics; the upper one just has the display elements, knobs, and buttons, along with the preamps for the inputs, none of which were really necessary at this point), plugged in the AC adapter (which I didn't have, but an old 5V wall wart from an old Iomega Zip drive matched the jack and voltage perfectly) and the USB cable, and started looking around the board.

At first, nothing really seemed to stand out, until after a while a smell of flux and solder caught my nose. For those who have never worked on electronics, it's a very pungent and characteristic smell, usually indicating a component that is running way too hot. I started feeling around with my finger until I found the culprit: a tiny 10-lead MSOP package only slightly bigger than a grain of rice. I didn't know what it was at first, but it had some big capacitors around it, so I assumed it was some sort of voltage regulator. The writing was tiny, though, and I could barely read the markings on the chip. After much squinting, I came to the conclusion that they read “LTABA”, which didn't sound like a part name to me. A preliminary Google search came up inconclusive, as expected, even after adding keywords and switching things around.

But then it dawned on me. A few weeks earlier, while hunting for components on AliExpress, I had noticed that most sellers write the complete chip markings in the listing, unlike vendors like Mouser, who just stick to the official part name. So I searched for our magic word and, lo and behold, there was my answer. The mystery chip was, as expected, a regulator: the LTC3407, a 600mA dual synchronous switching voltage regulator from Analog Devices. The mystery wasn't over, however, as the regulator is of the adjustable type, so I had absolutely no idea what output voltages I was looking for.

But Goldfinger had me covered. Etched on the silkscreen just a few millimeters from the regulator were three test pads, labeled “5V, 3V3, 1V8”. I assumed that the 5V was coming from either the USB socket or the AC adapter, while the 3.3V and 1.8V (voltages very common for powering digital microelectronics) were being handled by the dual-output regulator, stepped down from the 5V rail. A quick continuity check confirmed my assumptions. The pieces were starting to come together.

A (not so) temporary fix

For a regulator to get that hot, usually one of two things has to happen: either there's a short circuit on the output rail, or there's an internal fault that requires replacing the chip. I discarded the short theory fairly quickly just by measuring the voltages: when a short occurs, the regulator usually switches off the output automatically and gives a voltage at or very close to 0V. In our case the output voltages were jumping around erratically, nowhere near the values stated on the board. While this was a relief in the sense that the rest of the board seemed fine, it posed an even tougher question: what was causing this?

For a while I pored over the datasheet looking for an answer. At first I suspected a problem in the feedback circuitry (the design of this circuit is what sets the output voltage and allows the regulator to correct it as the load changes), but that would only affect one of the regulator's outputs, as each leg has its own feedback network. I also suspected the external components around the regulator (capacitors and inductors, mostly), but again, that didn't explain why both rails were bust.

So I decided to quit. I'm not an electrical engineer (yet), and without a proper schematic there was no way I could troubleshoot this PCB with the tools available in my house's laundry room. So I ripped the regulator out (slightly brutal, as this package has a massive solder pad underneath to dissipate heat, which is pretty much impossible to desolder amicably without a hot air rework station, which I don't have), went to my local electronics store, and bought a 10-pack of LM317 adjustable linear voltage regulators. This million-year-old component is trivially simple to install, but being a linear regulator it has a massive disadvantage: unlike the original part, which switches the input voltage on and off really quickly, the LM317 lowers the voltage by straight-up dissipating the excess power as heat, which means greater power consumption. That meant both hoping the USB port wouldn't trip its overcurrent protection and duct-taping heat sinks (salvaged from an old TV) inside the case, then wishing for the best. At least in my mind, this was all temporary. After soldering wires onto the board, adding the passives that set the output voltages, and admiring my horrendous creation, we were ready for a test run.
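For anyone attempting something similar, the passives are picked with the LM317's standard output-voltage relation; the resistor values below are just one plausible combination, not necessarily what ended up on my board:

    V_{\text{out}} = 1.25\,\text{V}\left(1 + \frac{R_2}{R_1}\right)

With R1 = 240 Ω, an R2 of roughly 390 Ω lands you near 3.3 V, and roughly 110 Ω lands you near 1.8 V, which are exactly the two rails Goldfinger asks for.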

First light, second problem

As I plugged it in, I saw das blinkenlights flashing at me for the first time, and I was overjoyed when my computer recognized a new USB device. It was alive at last, but the battle was only half over.

For one, it turns out that Goldfinger doesn't look kindly on USB hubs or USB 3.0 ports. Both official and unofficial documentation warn the user to stay away from these apparent evils and stick strictly to USB 2.0. Luckily, my workhorse laptop still includes a USB 2.0 port, which has given me no issues so far.

I had installed the “latest” drivers (version 1.17, dated mid-2014) available officially from the manufacturer's website, which gave me issues from the start: instability with the Windows sound APIs, clicks and cutouts on ASIO, bluescreens if unplugged, bluescreens for no reason at all, poor hardware detection, you name it. After digging through what's left of the M-Audio forums, I found a post suggesting a rollback to a previous driver version, which unfortunately went unanswered. I gave it a try anyway, downloading version 1.15 (also available on the drivers site) and installing that instead. And at last, it worked.

A quick review, finally

So I’ve been using this interface for about a year now, give or take, and it has been a dream to work with. I’ve used it to record both live gigs and snippets and experiments of my own creation, and even used it a few times for livestreaming.

For me, it's a perfectly adequate device for the kind of work I do, especially for free ninety-nine. The user experience could use a tweak or two, especially the squishy knobs and the weirdly sensitive gain pots, but the build quality is solid, the connectors are a joy to use, and the included software is finicky but powerful, if you're willing to respect its quirks.

Closing thoughts

While this turned out to be a massive project, both in time and scope, I learned some important things. First and foremost, never turn down free stuff, even if it's broken: it turns out most people throw things out even when the fix is simple, and repairing things is good for the environment and usually cheaper than buying new. Second, just because the device you're trying to fix uses some high-speed component doesn't mean a 50-year-old part can't replace it.

Categories
Tech Explorations

RTMP systems for multistreaming

Given current events, livestreaming has become an enticing way to give some continuity to many activities that otherwise would not be possible during these difficult times. However, creating a professional-quality setup for broadcasting live events is quite a challenge, especially because until just about a decade ago the idea of creating a live TV studio in your bedroom was ludicrous, to say the least. If you're working with large groups of people, consider it a powerful tool to reach out to your audience.

This post walks through the hoops I jumped through in the process of creating such broadcasts with professional quality and a relatively friendly budget in mind, explains a bit of how livestreaming works, and shows how everything ties together to create livestreams that are both creative and functional.

I: The gist of it all

Pretty much every social network worth its salt has some sort of livestreaming capability built in, ranging from the gimmicky to studio quality. For the purposes of this article, we’re going to be focusing on three: Facebook, Instagram, and YouTube, as they’re the ones I usually broadcast in.

  • YouTube is pretty much a pioneer on the subject, introducing one of the first livestreaming services available for everyone to use, and has pretty much everything nailed down at this point, providing a stable and reliable platform for pretty much every scenario.
  • Facebook also has a pretty solid platform for livestreaming, and it’s a personal favorite of mine because it will process and transmit pretty much everything you throw at it.
  • Instagram is the most frustrating one to work with, as it does not officially support broadcasting outside of its mobile app, and as such provides plenty of head scratching and hair pulling during the setup process.

Regardless of platform, pretty much every livestreaming service out there works using the Real-Time Messaging Protocol (RTMP), originally developed by Macromedia and now maintained by Adobe. The standard is (mostly) open and it works great, so it's no surprise that everyone uses it. However, it does not specify a video format per se, so you can send pretty much anything you want (this will be a problem later). RTMP works by using a streaming server address and a stream key to connect and begin the transmission; both are different for each platform and are provided when you begin a livestream. Depending on the site, the stream key changes for each stream or after a set time, so be mindful to keep them up to date.

For any livestream, there are four basic links in the chain that make it work. The chain is only as strong as its weakest link, so all of them must be reasonably balanced to get good results. These are capture hardware, video processing, restreaming, and the internet connection.

II: Capture hardware

This is the first link of our broadcast chain, and it involves everything needed to capture the video and audio that will go into your livestream. Integrated webcams are the default (and sometimes only) option for many people, but I decided against them mainly because of their poor quality: low framerates, exposure issues, noisy output, and so on. Modern external cameras like the ones offered by Logitech are a good alternative (ignoring the cheaper ones, which often suffer from the same issues); you can easily get 1080p at 60fps with them. But since I don't own one, I went with my preferred method: a video camera and a capture card.

For this method you can use any camera, as long as it has a clean video output (avoid the ones that show all the controls and menus through the video output), and any capture card capable of processing the desired resolution. In my case I used a Panasonic HDC-TM700 portable camcorder and an Elgato Game Capture HD USB capture card (a beaten old warhorse, and a story for another time). Both are over five years old at this point, but I can still get a clean and reliable 1080p/30fps video feed into my broadcasting software with no issues. Both companies offer newer products that can be used instead, and as long as you have a compatible setup, you'll be fine with whatever you can find.

For audio capture, you can use the audio straight from the camera's microphone or, as I did, use an external audio interface to plug in dedicated microphones. I used an M-Audio Fast Track C600 audio interface (which I actually got for free, I'll write about that adventure at some point), which works great but requires a bit of a workaround, as most multichannel audio interfaces probably will. This gave me clean sound and four mic channels for plugging in whatever I want.

III: Video processing

Now that we have all the AV equipment set up, it's time to set up the software. I use good old OBS for this, because it's light on system resources (a must for me, running it on a laptop) and pretty much endlessly customizable. This is not a tutorial on how to use OBS (there are many out there that could explain it way better than I ever could), but it's basically a live video editor: you can have different scenes and switch between them, mix different audio sources, capture windows or screens, add text and graphics, and much, much more. In my case, I also had to install the Elgato Game Capture app, which includes the drivers for my capture card and works right out of the box with OBS as a video source. I also installed the OBS ASIO plugin to connect to my audio interface, as it provides low-latency and stable performance (drivers for the C600 are a bit janky, and ASIO gave me the fewest glitches). For multichannel interfaces this is a must, because while OBS supports 2+ channel audio, it will not give you independent control of each channel without it. I also added a window capture, which worked perfectly too (although beware of switchable graphics when capturing screens or windows; they are known to be a pain to set up).

Before we move on, we need to talk about video resolution and aspect ratio. Because RTMP does not specify these parameters, each platform works in its own way and has its own set of rules. Facebook is the most forgiving of all, allowing pretty much any resolution at whatever aspect ratio, vertical or horizontal. YouTube, on the other hand, will only work with horizontal 16:9 video and will complain about any resolution other than 1920×1080 (although it will still work), filling the screen by scaling up the stream if it doesn't match the widescreen aspect ratio. Instagram does the same, only it exclusively supports vertical video and will scale up anything that isn't 9:16, leaving you with just a sliver of the frame. For my setup horizontal video made the most sense, so we were careful to frame the most important action inside that sliver, but we still lost parts of the picture. It might be possible to reprocess the video to remedy some of this, but not having the entire frame on Instagram was a reasonable compromise for me.

IV: Restreaming

As the title implies, we want to do multistreaming (also known as simulcasting), which means transmitting to multiple platforms at the same time, a task OBS does not handle natively (you can run multiple instances of OBS with the -multi argument, but video capture devices and webcams will not allow you to use more than one stream at a time). A few commercial alternatives exist to solve this, such as Restream.io, which is free for some platforms (but not for Instagram, which was a dealbreaker for me) and a limited number of streams. This left me with the predicament of having to set up my own restreaming server.

For this task I used NGINX, a ubiquitous, open source web server which is, depending on who you ask, the most used web server on the Internet. Fortunately for me, someone had already made an RTMP module for NGINX, which worked perfectly (all credit goes to this amazing post on the OBS forums detailing how to make it work). I ran the server on a Raspberry Pi 3B+, which was more than enough (the restreaming server doesn't do any processing, it just forwards the incoming stream to the desired servers). In theory you could run it on the same machine used for video processing, but I found NGINX for Windows very confusing and difficult to install, whereas the Linux (Raspbian Lite) install and configuration was simple and straightforward.
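To give an idea of the shape of it, the heart of such a setup is an rtmp block in nginx.conf along these lines; the stream keys are placeholders, the YouTube ingest URL is the commonly published one, and the local ports assume the stunnel arrangement described a bit further down:

    rtmp {
        server {
            listen 1935;
            application live {
                live on;
                record off;
                # YouTube accepts plain RTMP directly
                push rtmp://a.rtmp.youtube.com/live2/YOUR-YOUTUBE-KEY;
                # Facebook needs RTMPS, so push to a local stunnel endpoint instead
                push rtmp://127.0.0.1:19350/rtmp/YOUR-FACEBOOK-KEY;
                # Instagram: point this at whatever URL and key your workaround tool provides
                push rtmp://127.0.0.1:19351/rtmp/YOUR-INSTAGRAM-KEY;
            }
        }
    }

OBS then streams to rtmp://<pi-address>/live, and NGINX fans the feed out to every push target.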

Remember when I said that every streaming service uses the same protocol? Well, that's not entirely correct. While it's true that all three platforms use RTMP for moving the stream around, only YouTube uses the standard protocol on port 1935 (the default port for RTMP), while Facebook and Instagram use RTMPS, which is just standard RTMP over an SSL/TLS tunnel on port 443. This complicated matters a bit, because the NGINX RTMP module does not support RTMPS natively, so we now need to wrap the Instagram and Facebook feeds in a secure tunnel.

This is where stunnel comes in. It works as an SSL/TLS proxy, grabbing whatever data we throw at it and encrypting it before sending it on its way to the target servers. This page helped me get it up and running with NGINX. Now that all the protocols are in order, we can actually attempt a connection to the servers.
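My stunnel configuration ended up being little more than a section per RTMPS platform, roughly like the sketch below; the Facebook ingest hostname shown is the one commonly documented at the time, so double-check the current endpoints (and whatever your Instagram workaround hands you) before going live:

    ; /etc/stunnel/stunnel.conf - wrap local RTMP pushes in TLS
    [facebook-live]
    client = yes
    accept = 127.0.0.1:19350
    connect = live-api-s.facebook.com:443

A matching section on another local port covers Instagram if its endpoint also requires RTMPS.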

V: Internet connection

Now that everything is set up on our side, we need to connect to the streaming platforms. Before that, though, make sure you have a stable internet connection (Ethernet is your best choice here; Wi-Fi can be unstable and slow in certain scenarios), and that your upload speed is greater than the OBS transmit bitrate (2500 kb/s by default) times the number of platforms you are going to stream to simultaneously. For example, pushing 2500 kb/s to three platforms needs at least 7.5 Mb/s of upload, and you'll want some headroom on top of that.

For exact instructions, follow the tutorials listed above or check the OBS forums for help. For Facebook and YouTube it's just a matter of copying and pasting the stream keys from their respective streaming dashboards and you're on your way, but once again Instagram is going to put up a fight. As we said earlier, Instagram does not officially support RTMP streaming, so we need software like YellowDuck to trick Instagram into believing it's receiving a transmission from a phone, the actually supported and official way of doing it. A quick note on any software or website designed to do this: logging in is very finicky, so test the stream beforehand with the actual account you want to use to avoid screw-ups when the real time comes.

VI: Closing thoughts

After reading all of this, you might feel that livestreaming is a near-impossible task, but remember: most of the setup has to be done only once, weeks in advance. After that it's just a matter of keeping the stream keys up to date and making sure everything is working.

In dire times like these, streaming often brings a sense of community that not many other technologies can provide. It allows us to see a face, to feel like we're part of something bigger. If you are in charge of anything that deals with people, please try to introduce it as a new way of reaching out, even after all of this is over. Good luck and happy streaming!

Categories
Tech Explorations

An Introduction

Hi! My name is Benjamin, and I’m an engineering student and overall tinkerer, maker, and explorer from Chile. This blog is going to be a place for me to present to everyone my projects, workarounds, and other interesting tidbits of my journey of making things work.

It's probably not going to be updated very frequently; I don't get to do projects worthy of a write-up very often, as I'm still pretty much in the meat grinder at university, so free time is scarce. I may also write about other subjects that interest me, as long as they are at least tangentially related to the topic of the blog.

I'm also very open to chat, so drop me a line on my contact page if you want to get in touch and/or ask questions about my projects. Expect the occasional typo and odd sentence, as English is not my first language, although I'm somewhat fluent.

Cheers!