Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

I'm a retired Unix admin. It was my job from the early '90s until the mid '10s. I've kept somewhat current ever since by running various machines at home. So far I've managed to avoid using Docker at home even though I have a decent understanding of how it works: even after I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of "interesting" reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I'm thinking it's no longer a fad and I should invest some time getting comfortable with it?

[–] themurphy@lemmy.world 8 points 11 months ago (3 children)

As a guy who's basically you, but from before summer:

Can you explain why you think it's better now that you have 'contained' all your services? What advantages are there that I can't seem to figure out?

Please teach me, Mr. OriginalLucifer from the land of MoistCatSweat.Com

[–] BeefPiano@lemmy.world 23 points 11 months ago

No more dependency hell from one package needing libsomething.so 5.3.1 while another service absolutely can only run with libsomething.so 4.2.0.

That, and knowing that when I remove a container, it's not leaving a bunch of cruft behind.
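For a concrete (if hypothetical) picture of that, here's roughly what tearing a service down looks like; the image tag and names are just examples:

```sh
# Stop and remove the stack's containers and network
docker compose down

# Drop the image layers too if you're done with it (example tag)
docker image rm nextcloud:28

# The only things left behind are the named volumes you created on purpose
docker volume ls
```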

[–] constantokra@lemmy.one 12 points 11 months ago

You can also back up your compose file and data directories, pull the backup onto another computer, and as long as the architecture is compatible you can just restore it with no problem. So basically, your services are a whole lot more portable. I recently did this when DediPath went under: pulled my latest backup down to a new server at VirMach, and I was up and running as soon as the DNS propagated.
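For anyone curious what that move looks like in practice, here's a rough sketch; the file names, paths, and host are made up:

```sh
# On the old server: stop the stack and archive the compose file plus data dirs
docker compose down
tar czf myservice-backup.tar.gz docker-compose.yml data/

# Copy the archive to the new box (any transfer method works)
scp myservice-backup.tar.gz newserver:~/

# On the new server (same CPU architecture): unpack and bring it all back up
tar xzf myservice-backup.tar.gz
docker compose up -d
```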

[–] theterrasque@infosec.pub 5 points 11 months ago* (last edited 11 months ago) (1 children)

Modularity, compartmentalization, reliability, predictability.

One piece of software needs MySQL 5, another needs MariaDB 7. A third service needs PHP 7 while the distro-supported version is 8. A fourth service uses CUDA 11.7 - not 11.8, which is what everything in your package manager uses. A fifth service's install was only tested on the latest Ubuntu, and now you need to figure out which RPM gives the exact library it expects. A sixth service expects ODBC to be set up in a very specific way, but handwaves it in the installation docs. A seventh program expects a symlink at a specific place that exists on the desktop version of the distro, but not the server version. And then you've got that weird program that insists on admin access to the database so it can create its own user. Since I don't trust it with that, it can just have its own database server running in Docker and good riddance.

And so on and so forth... With Docker, not only is all this specified in excruciating detail, it's also the exact same setup on every install.
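As a rough illustration (services, tags, and volume names here are made up), a compose file pins all of that explicitly, so every install gets the same thing:

```yaml
# Hypothetical stack: each service gets exactly the runtime and database
# version it needs, regardless of what the host distro ships.
services:
  legacy-app:
    image: php:7.4-apache        # needs PHP 7 even if the distro ships 8
    depends_on: [old-db]
  old-db:
    image: mysql:5.7             # one app insists on MySQL 5
    volumes: [old-db-data:/var/lib/mysql]
  new-db:
    image: mariadb:10.11         # another wants a newer MariaDB
    volumes: [new-db-data:/var/lib/mysql]
volumes:
  old-db-data:
  new-db-data:
```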

You don't have it not working on Arch because the maintainer of a library there decided to inline a patch that supposedly doesn't change anything, but somehow causes the program to segfault.

I can develop a service on Windows, test it, deploy it to my Kubernetes cluster, and I don't even have to worry about which machine it gets deployed on; it just runs on some machine. Probably an Ubuntu machine, but maybe on that Gentoo node instead. And if my macOS friend wants to try it out, no problem. I can just give him a command, and it's running on his laptop. No worries about the right runtime or setting up the environment or libraries and all that.
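That "just give him a command" part really is a one-liner; something like this (the image name is hypothetical):

```sh
# Pulls the image, maps the service's port, and cleans up when he's done
docker run --rm -p 8080:8080 ghcr.io/example/myservice:latest
```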

If you're an old Linux admin... This is what utopia looks like.

Edit: And recreating a container is almost like reinstalling the OS and the program. Since the image is static, throwing the old container away and starting a fresh one from the image removes all the filesystem cruft too and gives you a pristine new copy (except, of course, the specific files and folders you have chosen to persist between recreations).
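In compose terms, that pristine-copy reset is roughly this (the service name is an example):

```sh
# Throw away the old container and start a fresh one from the static image;
# only data in the mounted volumes survives
docker compose up -d --force-recreate myservice
```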

[–] themurphy@lemmy.world 2 points 11 months ago (1 children)

It sounds very nice and clean to work with!

If I'm lucky enough to get the Raspberry Pi 5 at Christmas, I will try to set it up with Docker for all my services!

Thanks for the explanation.

[–] theterrasque@infosec.pub 1 points 11 months ago

Just remember that the Raspberry Pi has an ARM CPU, which is a different architecture. Docker can cross-build for it and produce multi-platform images automatically. It takes more time and space though, as it runs an ARM emulator (QEMU) to build them.

https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/ has some info about it.
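If you end up building your own images for the Pi, the multi-platform build is a single (slow-ish) command; the tag here is just an example:

```sh
# Build images for both x86-64 and the Pi's 64-bit ARM, then push the
# combined multi-arch manifest to a registry
docker buildx build --platform linux/amd64,linux/arm64 \
  -t example/myservice:latest --push .
```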