this post was submitted on 29 Jan 2025
27 points (88.6% liked)

Selfhosted


I'm thinking of expanding my homelab to support running some paid SaaS projects out of my house, and so I need to start thinking about uptime guarantees.

I want to set up a cluster where every service lives on at least two machines, so that no single machine dying can take a service down. The problem is the reverse proxy: the router still has to point port 443 at a single fixed IP address running Caddy, and that machine will always be a single point of failure. How would I go about running two or more reverse proxy servers with failover?

I'm guessing the answer has something to do with the router, and possibly getting a more advanced router or running an actual OS on the router that can handle failover. But at that point the router is a single point of failure! And yes, that's unavoidable... but I'm reasonably confident that the unmodified commodity router I've used for years is unlikely to spontaneously die, whereas I've had very bad luck with cheap fanless and single-board computers, so anything I buy to use as an advanced router is just a new SPOF and I might as well have used it for the reverse proxy.

31 comments
[–] Netrunner@programming.dev 1 points 1 hour ago* (last edited 1 hour ago)

Disappointed to see the cloud people preaching uptime when most cloud offerings have severe downtime issues weekly.

Stop living in a bubble.

GitHub was down yesterday, and that wasn't fun.

Stuff still goes down all the time on the cloud. More than on prem in my experience.

They don't even properly track their downtime, and they lie about their 99.9% figures.

[–] WagyuSneakers@lemmy.world 3 points 20 hours ago

SLAs?

You're going to need a redundant ISP and a generator. If that's what you're looking at, you've left the territory where self-hosting is economical. You still have several other single points of failure.

And I'll be honest, your setup isn't ready for an SLA either. Having a second machine is such a small part of what you need to do before making any guarantees. Are you using a dynamic DNS service? What does the networking setup look like? Router to compute?

From the sounds of it, you're not a professional. It might be time to engage an expert if you want to grow this.

[–] possiblylinux127@lemmy.zip 1 points 18 hours ago

This is a rabbit hole that's going to be very expensive. Caddy isn't going to do what you want here. You likely need enterprise-style systems, which are complex and require at least three machines.

I would use AWS or Azure instead

[–] gray@pawb.social 48 points 2 days ago (2 children)

My personal opinion: as soon as you're charging and providing SLAs, you've exceeded what you should be doing on a residential ISP.

I’d really recommend putting your app in a real cloud solution, which can provide actual load balancing via DNS natively for regional failover if you desire.

[–] False@lemmy.world 14 points 2 days ago

I feel like OP is about to find out why businesses pay for cloud services.

[–] victory@lemmy.sdf.org 7 points 2 days ago* (last edited 2 days ago) (3 children)

I get it, and I've seen this response other places I've asked about this too. But a license agreement can just offer refunds for downtime; it doesn't have to promise any specific amount of availability. For small, cheap, experimental subscription apps, that should be enough; it's not like I'm planning on selling software to businesses or hosting anything that users would store critically important data in. The difference in cost between home servers and cloud hosting is MASSIVE. It's the difference between being able to make a small profit on small monthly subscriptions, versus losing hundreds or thousands per month until subscriber numbers go up.

(also fwiw this entire plan is dependent on getting fiber internet, which should be available in my area soon; without fiber it would be impractical to run something like this from home)

[–] possiblylinux127@lemmy.zip 1 points 18 hours ago

You aren't going to get high reliability unless you spend big time. Instead, could you just offer uptime during business hours? Maybe give yourself a window to do planned changes.

[–] possiblylinux127@lemmy.zip 0 points 18 hours ago

This will blow up in your face. You know enough to be dangerous but not enough to know that uptime is very hard.

AWS or Azure really isn't that expensive if you are just running a VM with some containers. You don't need to overthink it. Create a VM and spin up some docker containers.
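To give a rough idea of scale (image names and paths are placeholders, not a recommendation for any specific stack), the whole "VM with some containers" setup can be as small as:

```yaml
# docker-compose.yml — a single VM running a reverse proxy and one app
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile   # routes hostnames to the app
  app:
    image: example/my-saas-app:latest      # placeholder application image
    expose:
      - "8080"
```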

[–] WagyuSneakers@lemmy.world -1 points 20 hours ago (1 children)

That's not the point. It's unprofessional. Someone is going to smash-and-grab OP's idea and actually have the skills to host it properly. Probably at a fraction of the cost, because OP doesn't understand that hosting SaaS products out of his house isn't professional or effective.

Also: the cloud is cheaper than self-hosting at any small amount of scale. This wouldn't cost much to run in AWS if built properly. The people who struggle with AWS costs are not professionals and have no business hosting anything.

[–] possiblylinux127@lemmy.zip 2 points 18 hours ago (1 children)

This is so true. You can't expect your home server to ever be comparable to enterprise setups. Companies that keep stuff on prem are still paying for redundant hardware and software, which requires money and skill to maintain.

[–] WagyuSneakers@lemmy.world 1 points 13 hours ago (1 children)

I've done on-prem design. I've migrated people entirely to the cloud. I specialize in everything in between.

Without a shred of doubt, the cloud is going to be more cost-effective than self-hosting for 99% of all use cases. They're priced that way intentionally. You cannot compete with Cloudflare/AWS/GCP/Vultr/Akamai/Digital Ocean/etc.

My homelab isn't about scaling or production workloads, and it definitely isn't accessible to anyone but me. I'd argue using it any other way defeats the purpose and shows a lack of understanding.

[–] possiblylinux127@lemmy.zip 1 points 12 hours ago

The cloud is cheaper for hosting things like websites that need HA. However, if you are doing big compute or storing lots of data, it will not be cheaper.

[–] tvcvt@lemmy.ml 4 points 1 day ago

I do this with HAProxy and keepalived. My DNS servers resolve my domains to a single virtual IP that keepalived manages. If one HAProxy node goes down, the other picks right up.

And this is one of the few things I've got set up with Ansible, so deploying and making changes is pretty easy.
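For anyone wanting to replicate this, a minimal keepalived.conf sketch for the primary node might look like the following (the interface name, router ID, and VIP are placeholders; the second node uses `state BACKUP` and a lower priority):

```
# /etc/keepalived/keepalived.conf on the primary HAProxy node
vrrp_instance proxy_vip {
    state MASTER              # the standby node uses BACKUP
    interface eth0            # LAN-facing interface (placeholder name)
    virtual_router_id 51      # must match on both nodes
    priority 150              # standby gets a lower value, e.g. 100
    advert_int 1              # VRRP advertisement interval in seconds
    virtual_ipaddress {
        192.168.1.100/24      # the floating VIP your DNS points at
    }
}
```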

[–] Decipher0771@lemmy.ca 11 points 2 days ago (1 children)

Keepalived to set up a floating IP between two proxy hosts. The VIP is where the traffic points; the two hosts act as an active/passive HA pair.

[–] victory@lemmy.sdf.org 1 points 2 days ago (1 children)

Looking into this a little, it might be what I need. The documentation I've found on this says it uses VRRP, which creates a "virtual" IP address; will that be different from the machine's own IP address? And will an ordinary router be able to forward a port to this kind of virtual IP address without any special configuration?

[–] Decipher0771@lemmy.ca 3 points 2 days ago (1 children)

Yes. Your machines would have one main IP address, and one virtual IP address that would be assigned to either machine depending on the priority or health check status. That IP can be on the same physical interface, or a separate one. It’s very flexible, pretty standard config for high availability setups.

[–] Andres4NY@social.ridetrans.it 1 points 2 days ago

@Decipher0771 @victory Neat, I didn't know keepalived was still active and popular. https://bugs.debian.org/144100

[–] jbloggs777@discuss.tchncs.de 8 points 2 days ago

Additional SPOFs: your upstream internet connection, your modem/router, your electricity supply, your home (fire, flood, collapse, etc.). And you.

[–] scrubbles@poptalk.scrubbles.tech 8 points 2 days ago (1 children)

Congrats, you're officially at the point where you should probably be looking at Kubernetes: high availability, failover, and load balancing. It's a steep learning curve, but if you're looking for this level of availability, you're probably ready for it.

[–] victory@lemmy.sdf.org 1 points 2 days ago (2 children)

Already considering using Kube, though I haven't read much about it yet. Does it support this specific use case (making multiple servers share a single LAN IP with failover), in a way that an ordinary router can use that IP without special configuration?

[–] possiblylinux127@lemmy.zip 1 points 18 hours ago* (last edited 18 hours ago)

You want proper Kubernetes. Kube is for learning and testing purposes only. In Kubernetes there are plenty of different Ingress services available depending on your provider. I would look into something like Traefik or MetalLB (rough sketch below).

I use k3s as my base with Istio to handle routing, so each node has the same ports open and Istio is the proxy. Internally there's a load balancer to distribute traffic to whatever pod it needs to go to. Outside the cluster, DNS is my only single point of failure, but it routes to multiple hosts; I doubt you'd have trouble finding a DNS setup that can do that. I don't think you can get much more separated from single points of failure.
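To make the single-shared-LAN-IP part concrete: on bare metal, MetalLB in layer-2 mode does roughly what keepalived does, announcing a pool address via ARP from whichever node is healthy, so an ordinary router can simply forward port 443 to that address. A sketch, with the address range and names as assumptions:

```yaml
# MetalLB (v0.13+ CRDs): hand the cluster a pool of LAN addresses
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool
---
# A Service of type LoadBalancer then gets one of those IPs; MetalLB
# moves the ARP announcement to another node if the current one dies.
apiVersion: v1
kind: Service
metadata:
  name: ingress
spec:
  type: LoadBalancer
  selector:
    app: ingress-controller   # placeholder selector for your proxy pods
  ports:
    - port: 443
      targetPort: 8443        # placeholder container port
```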

[–] Shimitar@downonthestreet.eu 4 points 2 days ago* (last edited 2 days ago)

No, the router being the SPOF (single point of failure) is totally avoidable.

At my home (no SaaS services offered, but services critical "enough" for my life) I have two different ISPs on two different technologies: one is FTTC via copper cable (aka the good old ADSL successor), plus an FWA 5G link (much faster, but with a data cap). Those two are connected to one OPNsense router (which, indeed, is a SPOF at this time). But you can remove this SPOF too by adding a second OPNsense and tying the two together in failover.

So the setup would be:

  • FTTC -> ISP1 router -> LAN cable 1 to port 1 of OPNsense no. 1
  • FTTC -> ISP1 router -> LAN cable 2 to port 1 of OPNsense no. 2
  • FWA -> ISP2 router -> LAN cable 1 to port 2 of OPNsense no. 1
  • FWA -> ISP2 router -> LAN cable 2 to port 2 of OPNsense no. 2

Then on both OPNsense boxes I would set up failover multi-WAN and tie them together so that one dying triggers the second one.

edit: fixed small errors

[–] slazer2au@lemmy.world 3 points 2 days ago

So you have two or three SPOFs: your home internet, your home router, and your reverse proxy container.

You can solve most of that with a second internet connection on its own router and some k3s/k8s.

Your current router points to one container, and your second router points to the other. You can use DNS load balancing to share connections across your two internet connections.

Depending on your monitoring system, if a connection goes down you could trigger a DNS update to remove the offline connection from DNS. You will have to set a low TTL on the record so the change propagates quickly.
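For illustration, the DNS side of that is just two A records with a short TTL (the example.com name and addresses are placeholders); the monitoring hook would delete whichever record points at the dead connection:

```
; zone-file sketch: one A record per internet connection, 60s TTL
app.example.com.  60  IN  A  203.0.113.10   ; connection/router 1
app.example.com.  60  IN  A  198.51.100.20  ; connection/router 2
```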

[–] ikidd@lemmy.world 2 points 2 days ago* (last edited 2 days ago)

OPNsense and HAProxy might be a place to start; they work well together. You can define a backend pool of servers for round-robining, and if you buy a block of IPs you can round-robin the incoming requests as well. I run OPNsense as a VM so that I can use Proxmox's high-availability service for the router, and it'll fail over or manually live-migrate if I'm doing maintenance. You can also VLAN the servers off from the rest of the network with OPNsense and set up VPNs there for clients if needed, or use the SDN functions in the hypervisor to segregate the servers if you're running them on it.
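A minimal haproxy.cfg sketch of that backend pool (server names and addresses are made up; TCP mode passes TLS through untouched, so the backends keep handling their own certificates):

```
frontend https_in
    bind *:443
    mode tcp                        # pass TLS through to the backends
    default_backend web_pool

backend web_pool
    mode tcp
    balance roundrobin              # rotate across healthy servers
    server app1 192.168.1.11:443 check   # 'check' enables health probes
    server app2 192.168.1.12:443 check
```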

[–] just_another_person@lemmy.world -2 points 2 days ago* (last edited 2 days ago) (2 children)

The term you're looking for is load balancing. DNS load balancing will work fine for your purposes. Use a DNS host that supports health checks on the endpoints, and you're all set. If one goes down, its record will no longer be returned when the name is queried.

[–] gray@pawb.social 4 points 2 days ago

For what OP is asking, DNS plays no part in DNAT; they need a load balancer.

Personally, I think asking about high uptime on a residential ISP is the larger issue here, but alas.

[–] victory@lemmy.sdf.org 2 points 2 days ago (3 children)

I don't think this is it. The router doesn't know anything about DNS; it only knows "this port goes to this IP address". It seems like I either need multiple devices sharing an IP address or router software that inherently supports load balancing.

[–] possiblylinux127@lemmy.zip 0 points 18 hours ago* (last edited 18 hours ago)

You need something like HAProxy or Traefik.

[–] False@lemmy.world 2 points 2 days ago

You just described a load balancer. The router doesn't know about DNS, but clients using your service do. You can do some simple load balancing behind DNS. If you want to do it by IP address, though, you want a load balancer.

If your current router doesn't support static DNS entries or advanced management of them, you could run a DNS service, or just get a router that runs OpenWRT. GL.iNet makes solid devices for relatively cheap.