
Hi all. I was curious about some of the pros and cons of using Proxmox in a home lab setup. It seems like in most home lab setups it's overkill, but I feel like there may be something I'm missing. Let's say I run my home lab on two or three different SBCs. The main server is an x86 i5 machine with 16 gigs of memory and the others are ARM devices with 8 gigs of memory, with ample storage on all of them. Wouldn't Proxmox be overkill here and eat up more system resources than just running base Ubuntu, Debian or another server distro on them all and running the services I need either from binaries or in Docker? It seems like the extra memory needed to run the Proxmox software and then the containers would just eat into the available memory and CPU. Am I wrong in thinking that Proxmox is better suited for when you have a machine with 32 gigs or more of memory and some sort of reasonably powerful baseline CPU?

TCB13@lemmy.world 4 points 4 months ago (last edited 4 months ago)

> LXC is worse than virtualization as it pins to a single core instead of getting scheduled by the kernel scheduler. It also is quite slow and dated. Either run Podman, Docker or full VMs.

First, what you're saying about the scheduler isn't even what happens by default; that was some crap that Proxmox pulled when they migrated from OpenVZ to LXC. To be fair, they had a bunch of more or less valid reasons to force that configuration, but again, it was due to kernel-related issues that were affecting Proxmox more than regular Ubuntu, and those issues were solved around the end of 2021.
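If you want to see what Proxmox actually does on your box, the `pct` CLI shows (and sets) the per-container core allotment. Here's a minimal sketch driving it from Python; the container ID 100 and the core count are made-up assumptions, and it obviously has to run on the Proxmox host itself:

```python
#!/usr/bin/env python3
"""Minimal sketch: inspect and adjust a Proxmox LXC container's CPU allotment
via the `pct` CLI. Container ID and core count are illustrative assumptions."""
import subprocess

VMID = "100"   # hypothetical container ID
CORES = "4"    # cores to hand to the container

def pct(*args: str) -> str:
    """Run a pct subcommand and return its stdout."""
    return subprocess.run(["pct", *args], capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(pct("config", VMID))            # current config, including any 'cores' key
    pct("set", VMID, "--cores", CORES)    # set the limit; with no 'cores' key the container sees all host cores
    print(pct("config", VMID))
```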

Now, Docker and LXC serve different purposes and aren't a replacement for each other. Docker is a stateless application container solution, while LXC is a persistent system container solution aimed at running full operating systems...
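To make that distinction concrete, here's a rough sketch of both workflows driven from Python; the container name, distro and release are just assumptions for illustration:

```python
#!/usr/bin/env python3
"""Minimal sketch of the two models: a throwaway (stateless) Docker run vs. a
persistent LXC system container. Name, distro and release are assumptions."""
import subprocess

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Docker: the container exists only for the lifetime of this one command.
run("docker", "run", "--rm", "alpine", "echo", "stateless, gone after this line")

# LXC: create a full Debian userland once, then keep starting/attaching to it.
run("lxc-create", "-n", "web", "-t", "download", "--",
    "-d", "debian", "-r", "bookworm", "-a", "amd64")
run("lxc-start", "-n", "web")                      # boots the container's own init
run("lxc-wait", "-n", "web", "-s", "RUNNING")      # wait until it's actually up
run("lxc-attach", "-n", "web", "--", "hostname")   # state persists across restarts
```

The Docker container evaporates as soon as the command finishes; the LXC one keeps its whole Debian userland on disk and can be stopped and started like a tiny machine.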

Docker and LXC share a bunch of underlying technologies, and in the beginning Docker even used LXC as its backend. They later moved to their own execution environment, libcontainer, because they weren't using all the features that LXC provided and wanted more control over the implementation.
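You can actually see those shared kernel primitives from userspace. This little sketch just reads the namespace and cgroup info the kernel exposes under /proc; run it on the host and then inside a Docker or LXC container and compare the output:

```python
#!/usr/bin/env python3
"""Minimal sketch: Docker and LXC both build on the same kernel primitives
(namespaces and cgroups); this just prints what the current process sees."""
import os

# Each entry in /proc/self/ns is a namespace this process is a member of.
for ns in sorted(os.listdir("/proc/self/ns")):
    print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))

# The cgroup path shows which hierarchy is accounting for / limiting this process.
with open("/proc/self/cgroup") as f:
    print(f.read().strip())
```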

For those who really need full systems, LXC is definitely faster than a VM. Your argument assumes everything can and should be done inside Docker/Podman, and that's very far from reality. The Docker guys have written a very good article showcasing the differences and optimal use cases for both.

Here are two quotes for you:

> LXC is especially beneficial for users who need granular control over their environments and applications that require near-native performance. As an open source project, LXC continues to evolve, shaped by a community of developers committed to enhancing its capabilities and integration with the Linux kernel. LXC remains a powerful tool for developers looking for efficient, scalable, and secure containerization solutions. Efficient access to hardware resources (...) Virtual Desktop Infrastructure (VDI) (...) Close to native performance, suitable for intensive computational tasks.

> Docker excels in environments where deployment speed and configuration simplicity are paramount, making it an ideal choice for modern software development. Streamlined deployment (...) Microservices architecture (...) CI/CD pipelines.

Anyways...

> It also ships with a newer kernel than Debian although it shouldn't matter as you are using it for virtualization.

It matters, trust me. Once you start requiring modules it will suddenly matter. Either way, even if they ship a kernel that is newer than Debian's, it is so fucked at that point that you'll be better off with whatever Debian provides out of the box.
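If you want to sanity-check that yourself, it's easy to ask the kernel you're actually booted into whether a module you care about is there at all. Quick sketch; the wireguard module name is just an example assumption:

```python
#!/usr/bin/env python3
"""Minimal sketch: check whether the running kernel has a given module loaded,
and whether it ships it at all. The module name is an assumption."""
import subprocess
from pathlib import Path

MODULE = "wireguard"  # hypothetical module your workload depends on

# /proc/modules lists currently loaded modules, one per line, name first.
loaded = any(line.split()[0] == MODULE
             for line in Path("/proc/modules").read_text().splitlines())
print(f"{MODULE} loaded: {loaded}")

# modinfo succeeds only if the running kernel actually provides the module.
info = subprocess.run(["modinfo", MODULE], capture_output=True, text=True)
print(f"{MODULE} shipped by this kernel: {info.returncode == 0}")
```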