Getting into Self-Hosting: Infrastructure
So I’ve got my photos and music syncing and streaming, but there are a few holes to patch before I can actually rely on these services:
- How safe is my data now? Not very. Regular backups will fix that.
- How do I access my services when I’m not on the local network? I don’t. A VPN would help.
- Right now, the first I know a service is down is when I try to use it and can’t. Container and connectivity monitoring will catch that.
- And how will I know if I’m over-extending my little N150? Real-time system resource monitoring.
It’s time for some Infra!
Backups#
I’m running a daily backup of everything on my server that’s annoying, difficult, or impossible to replace, via Duplicati over SSH to a 1TB Hetzner Storage Box1.
With this I don’t have to think about backups, manual or automated, since I have Duplicati alert me by email on failure. Everything I’ve added or modified during the day is backed up overnight, and if it isn’t, I’ll get a report in my inbox.
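Duplicati handles all of this in-app, but the shape of the nightly job is simple enough to sketch in Python. Here rsync stands in for Duplicati’s actual uploader, and every hostname, path, and address is a placeholder rather than my real setup:

```python
import subprocess
import smtplib
from email.message import EmailMessage


def run_backup(src: str, dest: str) -> bool:
    """Mirror src to a remote dest over SSH; True on success.

    rsync is a stand-in here for Duplicati's block-based upload.
    """
    result = subprocess.run(["rsync", "-az", "--delete", src, dest])
    return result.returncode == 0


def failure_report(job: str) -> EmailMessage:
    """Build the alert email that is only sent when a backup fails."""
    msg = EmailMessage()
    msg["Subject"] = f"Backup failed: {job}"
    msg["From"] = "server@mydomain.tld"  # placeholder sender
    msg["To"] = "me@mydomain.tld"        # placeholder recipient
    msg.set_content(f"The nightly '{job}' backup did not complete.")
    return msg


# A nightly cron entry would then amount to something like:
#   if not run_backup("/srv/data/", "user@storagebox:/backups/"):
#       with smtplib.SMTP("smtp.mydomain.tld") as s:  # placeholder SMTP host
#           s.send_message(failure_report("srv-data"))
```

The key property is the alert-on-failure default: silence means success, and anything else lands in my inbox.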
VPN Access#
It’s pretty unanimously agreed that a VPN is the safest way to access self-hosted services while out and about. This is where my FOSS philosophy falls apart a bit, as Tailscale is just an insanely easy-to-use freemium option. Free for up to 3 users and 100 devices, with broad client support, it’s a no-brainer, at least until I commit to rolling my own WireGuard.
TLS Certificates#
Tailscale makes it easy to connect to my services, but accessing them still feels a bit second-rate because they’re all on plain HTTP, which means setting a browser exception the first time each one is visited, and remembering forgettable port numbers. All in all, having to visit something like 192.168.1.123:12345 isn’t a great user experience, especially when regularly met with “Your connection is insecure” messages.
What fixes all this is a reverse proxy with signed certificates and some nice clean subdomains for each service. I’ve found Nginx Proxy Manager to be of great utility here: adding new ‘proxy hosts’ is a breeze, and it looks after my Let’s Encrypt certificate. All I had to do was point the base domain at my server’s local IP and expose my server’s subnet to my tailnet (Tailscale’s name for a group of devices interconnected on their platform).
Now I can visit, for example, https://immich.mydomain.tld with zero friction.
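There’s nothing magic under the hood: each of those ‘proxy hosts’ is effectively a mapping from a clean hostname to an internal IP and port. A minimal sketch in Python, with hypothetical hostnames, IPs, and ports rather than my real setup:

```python
# Hypothetical proxy-host table, mirroring what a reverse proxy like
# Nginx Proxy Manager stores. All entries are placeholders.
PROXY_HOSTS: dict[str, tuple[str, int]] = {
    "immich.mydomain.tld": ("192.168.1.123", 2283),
    "navidrome.mydomain.tld": ("192.168.1.123", 4533),
}


def upstream(hostname: str) -> tuple[str, int]:
    """Resolve a public-facing hostname to its internal (ip, port) upstream."""
    return PROXY_HOSTS[hostname]
```

The proxy terminates TLS with the Let’s Encrypt certificate, then forwards each request to whichever `upstream` its hostname maps to, which is exactly why the forgettable port numbers disappear from the browser.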
Monitoring#
But what if any of these important containers crash? I’ll know about it, since I’ve got Uptime Kuma monitoring all my containers via the Docker socket. Checks run every 5 minutes2, and if one fails and then 3 consecutive 1-minute retries also fail, I get an email. This is all shockingly easy to configure in-app.
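Sketched in Python, that retry logic amounts to something like the following; the retry count and interval mirror my settings, and the `check` callable is a stand-in for Uptime Kuma’s Docker-socket query:

```python
import time
from typing import Callable


def is_down(check: Callable[[], bool], retries: int = 3,
            retry_interval: float = 60.0) -> bool:
    """Mirror Uptime Kuma's retry behaviour: a failed check only counts
    as 'down' once `retries` consecutive retries have also failed."""
    if check():
        return False  # healthy on the scheduled check, nothing to do
    for _ in range(retries):
        time.sleep(retry_interval)
        if check():
            return False  # recovered during a retry, no alert
    return True  # still failing after every retry: time to email
```

The point of the retries is debouncing: a container that hiccups for thirty seconds never generates an alert, while one that stays down for the full retry window does.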
What if my broadband goes down? I’m monitoring that too: reachability to both google.com and bbc.co.uk should be a reliable signal of whether I can reach the open internet. To keep a further eye on my ISP, I’ve got Speedtest Tracker running (hourly, on an odd minute) to check what speeds and ping I’m actually getting.
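The connectivity probe itself is simple. Here’s a sketch of the “up if either probe host answers” logic in Python; the probe hosts match the ones above, while the port and timeout are assumptions of mine:

```python
import socket


def internet_up(hosts: tuple[str, ...] = ("google.com", "bbc.co.uk"),
                port: int = 443, timeout: float = 3.0) -> bool:
    """Treat the internet as up if any probe host accepts a TCP connection."""
    for host in hosts:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True  # one successful handshake is enough
        except OSError:
            continue  # DNS failure, refusal, or timeout: try the next host
    return False
```

Probing two independent hosts means a single site’s outage doesn’t masquerade as my broadband being down.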
With all these containers, it’s nice to have one dashboard where I can actually see what system resources are in use. So far Glances has been a pretty slick one-page app for this.
This is somewhat overkill since I’m only using about 10% of the total storage, but it’s fast and relatively cheap as remote storage goes. I had tried Backblaze and similar competitors, since they have a shockingly low price per terabyte, but I had repeated issues backing up, especially to Backblaze, due to their connectivity provider. I also wanted to find a reliable and trustworthy European host, and Hetzner has ticked those boxes so far. ↩︎
IMHO there’s no need to check more often than this; if something stays down, that’s when I need to care about it. This isn’t a commercial application, the main user is me. And I’d rather not add more traffic, resource utilisation, and data from monitoring that I arguably don’t need. ↩︎