A 'power outage' incident doesn't seem to have been mitigated. My homelab has had evolving mitigations: I cut a hole in the side of a small UPS so I could connect it to a larger (car) battery for longer uptime. That got replaced by a dedicated inverter/charger/transfer-switch attached to a big-ass AGM caravan battery (which on a couple of occasions powered through two-to-three-hour power outages), and that has now been replaced with one of these recent LiFePO4 battery power station thingies.
Of course, it's only a homelab and nothing I'm hosting is critically important, but that's not the dang point: I want to beat most of "the things", and I don't like having to check that everything has rebooted properly after a minor power fluctuation. (I have a few things that mount remote file stores, and these mounts usually fail on boot because of how slowly certain devices come up - I've decided not to solve that yet.)
Can you share more about this? I have an APC Back UPS PRO USV 1500VA (BR1500G-GR) and it would be nice to know if this is possible with that one as well.
It was a crude mod. Take the cover off and remove the existing little security-alarm battery, use tin snips to cut a hole in the side of the metal UPS cover (this was challenging - the metal was relatively thick, so I'd recommend an angle grinder instead, in an appropriately safe environment far away from the internals of the UPS), and feed the battery cables out through the hole. I probably got some additional cables with appropriately sized terminations to extend the short existing ones (since they were only designed to be used within the device). Then connect it up to a car battery.
Cover any exposed metal on the connectors with heat-shrink tubing or electrical tape. Be very careful about exposed metal anywhere near it, especially anything touching the RED POSITIVE pole of the battery. Get a battery box - I got one for the big-ass AGM battery.
Test it out on a laptop that's had its battery removed or disconnected - one that, just in case, you don't care too much about losing.
Get a battery charger that can revive a flat battery, and do a full refresh/renew charge on the car battery once a year or after it's had to push through a power outage that may have used more than a few percent of its capacity.
Personally, I think it's safer and less hassle to go for a LiFePO4 (LFP) power-station-style device that has UPS capabilities. LFP batteries have roughly 3,000-cycle lifetimes, which could be nearly ten years of daily use.
If your OS is using systemd, you can fix that pretty easily by adding an After=network-online.target (so the mount isn't even attempted before networking is up) and an ExecCondition shell script [1] that actually checks whether NFS/SMB on the target host is alive, as an override to the fs mounts.
Add a bunch of BindsTo overrides to the mounts and the services that need the data, and you have yourself a way to stop the services automatically when the filesystem goes away.
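As a sketch of what that looks like (unit names and paths here are hypothetical - substitute your own mount and service), drop-in overrides along these lines tie a mount and the service that depends on it together:

```ini
# /etc/systemd/system/mnt-nas.mount.d/override.conf  (hypothetical mount unit)
[Unit]
# Don't attempt the mount until the network is actually up.
After=network-online.target
Wants=network-online.target

# /etc/systemd/system/myapp.service.d/override.conf  (hypothetical service)
[Unit]
# Start only after the mount is up, and stop automatically
# if the mount unit goes away.
After=mnt-nas.mount
BindsTo=mnt-nas.mount
```

Run `systemctl daemon-reload` after adding the overrides; the liveness check script itself is the one described in the linked forum post.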
I've long been in the systemd hater camp, but honestly, not having to wrangle with once-a-minute cronjobs to check for issues is actually worth it.
[1] https://forum.manjaro.org/t/for-those-who-use-systemd-servic...
Do you have power outages often? Even if I have one, my services can come up automatically without doing anything, when the power is restored.
You will end up paying much more for your services, along with spending a ton of time maintaining them (and if you don't, you will probably find yourself on the receiving end of a 0-day hack sometime).
In Northern/Western Europe, where power costs around €0.3/kWh on average, just the power consumption of a simple 4 bay NAS will cost you almost as much as buying Google Drive / OneDrive / iCloud / Dropbox / Jottacloud / Whatever.
A simple Synology 4 bay NAS like a DS923+ with 4 x 4TB Seagate Ironwolf drives will use between 150 kWh and 300 kWh per year (100% idle vs 100% active, so somewhere in between), which will cost you between €45 and €90 per year, and that's power alone. Factoring in the cost of the hardware will probably double that (over a 5 year period).
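The arithmetic, using the comment's own figures (assuming a flat €0.30/kWh tariff):

```python
# Rough annual power cost of a 4-bay NAS in Western Europe.
# 150-300 kWh/year (100% idle vs 100% active) and EUR 0.30/kWh
# are the figures from the comment above; adjust for your tariff.
PRICE_PER_KWH = 0.30  # EUR

def annual_cost(kwh_per_year: float) -> float:
    """Yearly electricity cost in EUR for a given consumption."""
    return kwh_per_year * PRICE_PER_KWH

idle_cost = annual_cost(150)    # mostly idle
active_cost = annual_cost(300)  # mostly active

print(f"EUR {idle_cost:.0f} - EUR {active_cost:.0f} per year")  # EUR 45 - EUR 90 per year
```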
It's cheaper (and easier) to use public cloud, and then use something like Cryptomator (https://cryptomator.org/) to encrypt data before uploading it. That way you get the best of both worlds, privacy without any of the sysadm tasks.
Edit: I'll just add, as you grow older, you come to realize that time is a finite resource, and while money may seem like it is finite, you can always make more money.
Don't spend your time hunched over servers. Spend it doing things you love with people that matter to you. Eventually those people won't be there anymore, and the memories you make with those people will matter far more to you in 20 years, than the €20/month you paid for cloud services.
As for electric heating, that is true in 1:1 (resistive) heating scenarios, but I assume you guys are also using heat pumps these days, and while you still get the heat "for free", it will not be anywhere near as efficient as your heat pump.
Yes, it's probably peanuts in the grand scheme of things; I know our air-to-water heat pump in Denmark uses around 4500-5500 kWh per year, so adding another 100 kWh probably won't mean much.
VPS are very expensive for what you get. If you have the capital, doing it yourself saves you money very quickly. It's not rare to pay $50 for a semi-decent VPS, but for $2000 you would get an absolute beast that would last 10 years at the very least.
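Using the numbers above (and ignoring electricity, maintenance time, and the VPS's bundled bandwidth, so this is only a back-of-the-envelope sketch):

```python
# Payback period for the figures in the comment:
# a $50/month VPS vs. a $2000 one-off server purchase.
vps_monthly = 50     # USD/month for a semi-decent VPS
server_cost = 2000   # USD one-off for "an absolute beast"

months_to_break_even = server_cost / vps_monthly
print(f"{months_to_break_even:.0f} months")  # 40 months, i.e. ~3.3 years
```

Over the claimed 10-year lifetime, that's roughly three times the hardware cost saved - before power and your own time are factored in.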
With Docker, maintenance is basically zero and unused services are stopped or restarted with 1 command.
I've also self-hosted for decades, but it turns out I don't really need that much, at least not publicly.
I basically just need mail, calendar, file storage and DNS ad blocking. I can get mail/calendar/file storage from pretty much any cloud provider (and no, there is no privacy when it comes to mail; there is always another participant in the conversation), and for €18/year I can get something like NextDNS, Control D, or similar.
For reference, a Raspberry Pi 4 or 5 will use around 50 kWh per year, which (again, in Europe) translates to €15/year. For just €3 more per year I get redundancy and support.
I still run a bunch of stuff at home, but nothing is open to the public internet. Everything runs behind a Wireguard VPN, and I have zero redundant storage. My home storage is used for backing up cloud storage, as well as storing media and client backups. And yes, I also have another cloud that I back up to.
My total cloud bill is around €20-€25/month, with 8TB of storage, ad blocking DNS, mail/calendar/office apps and even a small VPS.
Not to mention that I love them.
On my homelab, I update everything every quarter and it takes about 1 hour, so 4 hours a year is pretty reasonable. Docker helps a lot with this.
And I’ve almost never run into trouble in years, so I have very few unexpected maintenance tasks.
EDIT: I am referring to a homelab that is only accessible privately, through a VPN.
If you only access your homelab over VPN or similar, then by all means, update whenever you feel like it, but if you expose your services to the internet, you want to be damned sure there are no vulnerabilities in them.
The internet of today is not like it was 20 years ago. Today you're constantly being hammered by bots that scan every single IPv4 address for open ports, and when they find something, they record it in a database along with information about what's running on that port (provided that information is available).
When (not if) a vulnerability for a given service is discovered, an attacker doesn't need to "hunt & peck" for vulnerable hosts, they already have that information in a database, and all they need to do is start shooting at their list of hosts.
You can use something like shodan.io to see what a would-be attacker might see (you can check your own IP with "ip:xxx.xxx.xxx.xx").
Try entering something like Synology, Proxmox, Truenas, Unraid, Jellyfin, Plex, Emby, or any of the other popular home services.
Even at the high end estimate the homelab is giving you several times the storage for the same cost.
Very few people I know have use for that much storage. Yes, you can download the entire Netflix catalog, and that will of course require more storage, and no, you probably shouldn't put it in the cloud (or even back it up, or use redundant storage for it).
Setting up your own homelab to be your own Netflix, but using pirated content, is not really a use case I would consider. I'm aware people are doing it, and I still think it's stupid. They're saving money by "stealing" (or at least breaking laws), which is hardly a lifehack.
Google One for 10TB is €274.99/mo (at least in my country), so you'd make back the entire NAS price plus running costs within a few months, let alone years.
There just aren't compelling public cloud offerings for large sizes (my NAS has 30TB capacity and I'm using 18TB right now), and even if you jump through more complex hoops like S3 and whatnot, you still get billed more than it's worth. Public cloud is meant for public files; you're paying for a lot of things you don't need, like fast access from everywhere.
It’s also an excuse for me to stay in most summer days.
For my personal use case, that involves photos and documents - things I cannot easily recreate (photos especially). Those are what matter to me, and storing them in the cloud means I not only get redundancy within a single data center, but also geographical redundancy, as many cloud providers use erasure coding to make your data available across multiple data centers.
Everything else will be just fine on a single drive, even a USB drive, as everything that originated on the internet can most likely be found there again. This is especially true for media (purchased, or naval acquisition). Media is probably the most replicated data on the planet, possibly only behind the Bible and the IKEA catalog.
So, back to the important data: I can easily fit an entire family of 4 into a single 2TB data plan. That costs me somewhere around €85-€100 per year, for 4 people, and it works no matter what I do. I no longer need to drag a laptop with me on vacation, and I can basically just say "fuck it" and go on vacation for 2 weeks.
All this assuming that you even need that much storage, which most people definitely do not.
I, for one, don't want to have Google, etc. as a dependency[1], so I will pay some energy cost to do that.
1: see: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Who are you to tell people how to spend their time? Let people have hobbies ffs
But not all those minis are the same. G4 (Intel 8th gen) and G5 (Intel 9th gen) HPs are horrendous. The fan makes an extremely aggravating noise, and I haven't found a way to fix it. Bonus points for both the fan and heatsink having custom mounts, so even if you wanted an ugly but quiet machine by slapping on a standard cooler, you couldn't.
G6 versions (Intel 10th gen) seem to have fixed this, and they're mostly inaudible on a desk, unless you're compiling something for half an hour.
No idea what happened, but Raspberry Pis have been super expensive for the last couple of years, which is why I decided to just go with used Intel NUCs instead. They cost around 80-150 EUR and use more electricity, but they are quite good bang for the buck, and some variants also have 3x HDMI, Gbit/s Ethernet, or M.2 slots you can use to put a SATA RAID in them.
With an N100, you get a better, more upgradable system for around the same price and power usage. On top of that, you also get an x64 system that isn't limited by ARM quirks. I made the switch to N100s over a year ago and have had no issues with them so far.
It has an i5-6500, 32 GB RAM (16 + 2x8 DIMMs), 2 SATA SSDs and a 2x10Gb ConnectX-3. It runs 24/7 hosting OPNsense and Home Assistant on top of KVM (Arch Linux Hardened - didn't do anything specific to lower the power draw). Sometimes other stuff, but not right now.
I haven't measured it with this specific nic, but before it had a 4x1Gb i350. With all ports up, all VMs running but not doing much, some power meter I got off Amazon said it pulled a little over 14W. The peak was around 40 when booting up.
Electricity costs 0.22 €/kWh here. The machine itself cost me 0 (they were going to throw it out at work), 35 for the NIC and maybe 50 for the RAM. It would take multiple years to break even by buying one of those small machines. My plan is to wait it out until they start having 10 Gb NICs and this machine can't keep up anymore.
(clarification: that's euro cent, so 0.0635€ etc)
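For reference, converting that ~14 W idle draw into a yearly bill at the 0.22 €/kWh mentioned above:

```python
# Annual cost of a machine idling at ~14 W, at 0.22 EUR/kWh
# (both figures from the comment above).
idle_watts = 14
price_per_kwh = 0.22  # EUR

kwh_per_year = idle_watts * 24 * 365 / 1000    # watt-hours -> kWh
cost_per_year = kwh_per_year * price_per_kwh   # ~27 EUR/year

print(f"{kwh_per_year:.1f} kWh/year, EUR {cost_per_year:.2f}/year")
```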
Those little thin clients aren't gonna be fast doing "big" things, but serving up a few dns packets or whatever to your local network is easy work and pretty useful.
I also used to over-engineer my homelab, but I recently took a more simplistic approach (https://www.cyprien.io/posts/homelab/), even though it’s probably still over-engineered for most people.
I realized that I already do too much of this in my day job, so I don’t want to do it at home anymore.
As an example, I use cloudflare tunnel to point to an nginx that reverse proxies all the services, but I could just as well point DNS to that nginx and it would still work. I had to rebuild the entire thing on my home server when I found that the cheap VPS I was using was super over-provisioned ($2/mo for 2 Ryzen 7950 cores? Of course it was) and I had this thing at home anyway, and this served me well for that use-case.
When I rebuilt it, I was able to get it running pretty quickly, and each piece could be done incrementally: I could run without cloudflare tunnel and then add it to the mix; I could run without R2 and then switch file storage to R2, because I used FUSE s3fs to mount R2; and so on.
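For reference, an s3fs mount of an R2 bucket is essentially an S3 mount pointed at the R2 endpoint. A hypothetical fstab entry (bucket name, account ID, and paths are placeholders) might look like:

```
# /etc/fstab -- s3fs mount of a Cloudflare R2 bucket
# _netdev delays the mount until networking is up;
# credentials live in the passwd_file in "KEY:SECRET" form.
mybucket  /mnt/r2  fuse.s3fs  _netdev,url=https://<account-id>.r2.cloudflarestorage.com,passwd_file=/etc/passwd-s3fs,allow_other  0  0
```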
Also Proxmox was called out as the only choice when that is very much not the case. It is a good choice for sure, but there are others.
How to actually reliably expose a homelab to the broader internet is a little tricky; cloudflare tunnels mostly do the trick but can only expose one port at a time, so the setup is somewhat annoying.
Some family members are behind CGNAT, and I'm not sure if their ISP has the option to move out from behind that, but since they don't self-host it's probably slightly more secure from outside probes. We're still able to privately share communications via my VPN hub to which they connect, which allows me to remotely troubleshoot minor issues.
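For anyone in the same CGNAT boat: the trick that makes this work is WireGuard's PersistentKeepalive, which has the peer behind CGNAT dial out to the hub and keep the NAT mapping open, so the hub can always reach it. A minimal sketch (keys, hostnames and addresses are placeholders):

```ini
# Peer behind CGNAT: /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.2/32
PrivateKey = <peer-private-key>

[Peer]
PublicKey = <hub-public-key>
# The hub must have a publicly reachable endpoint.
Endpoint = hub.example.com:51820
AllowedIPs = 10.8.0.0/24
# Outbound keepalive every 25s holds the CGNAT mapping open.
PersistentKeepalive = 25
```

The hub side just lists the peer's public key with `AllowedIPs = 10.8.0.2/32` and needs no Endpoint for it.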
I haven't looked into cloudflare tunnels, but haven't felt the need.
I run cloudflared on one machine, and it proxies one subdomain to one port, and another to a unix socket (could have been a second port, no problem).
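For reference, that multi-hostname setup is a single cloudflared config using ingress rules; a hypothetical example (tunnel ID, hostnames and paths are placeholders):

```yaml
# /etc/cloudflared/config.yml
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json

ingress:
  - hostname: app.example.com
    service: http://localhost:8080      # first subdomain -> local port
  - hostname: other.example.com
    service: unix:/run/other/app.sock   # second subdomain -> unix socket
  - service: http_status:404            # required catch-all rule
```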
I ran a home lab for a number of years. This was a fairly extensive set up - 4 rack mount servers, UPS, ethernet switch etc with LTO backups. Did streaming, email and file storage for the whole family as well as my own experiments.
One morning I woke up to a dead node. The DMZ service node. I found this out because my wife had no internet. It was running the NAT and email too. Some swapping of power supplies later and I found the whole thing was a complete brick. Board gone. It's 07:45 and my wife can't check her schedule and I'm still trying to get 3 kids out of the door.
At that point I realised I'd fucked up by running a home lab. I didn't have the time or redundancy to have anyone rely on me.
I dug the ISP's provided WiFi router out, plugged it in and configured it quickly and got her laptop and phone working on it. Her email was down but she could check calendar etc (on icloud). By the end of the day I'd moved all the family email off to fastmail and fixed everything to talk to the ISP router properly. I spent the next year cleaning up the shit that was on those servers and found out that between us we only had about 300 gig of data worth keeping which was distributed out to individual macbooks and everyone is responsible for backing their own stuff up (time machine makes this easy). Eventually email was moved to icloud as well when domains came along.
I burned 7TB of crap, sold all the kit and never ran a home lab again. Then I realised I didn't have to pay for the energy, the hardware or expend the time running it. There are no total outages and no problems if there's a power failure. The backups are simple, cheap and reliable. I don't even have a NAS now - I just bought everyone some Samsung T7 shield disks.
I have a huge weight off my shoulders and more free time and money. I didn't learn anything I wouldn't have learned at work anyway.
I need to update it and patch it, hoping nothing goes wrong in the process. If something breaks I'm the only one that can repair it, and I really don't want to hear my wife screaming at me at 7am when I wake up.
Eventually I came to your same conclusion, but I still run a hybrid setup that allows me to keep the router (for now), and a NAS for backup (3-2-1) and some local services. I run a dedicated server from Hetzner for "always on" services, so that the hardware, power redundancy and operational toil are offloaded. I gave up long ago on email: any hosting service will be way better than me doing it - I know I can do it, but is it worth my sanity? Nope.