this post was submitted on 13 Dec 2023
234 points (98.0% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


I'm a retired Unix admin. It was my job from the early '90s until the mid '10s. I've kept somewhat current ever since by running various machines at home. So far I've managed to avoid using Docker at home, even though I have a decent understanding of how it works - although I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of "interesting" reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I'm thinking it's no longer a fad and I should invest some time getting comfortable with it?

[–] Opeth@lemm.ee 6 points 11 months ago

I've worked in enterprise and government as a software engineer, and Docker has been the de facto standard everywhere for at least five years now. It's not going away soon.

[–] quackers@lemmy.blahaj.zone 6 points 11 months ago

It's quite easy to use once you get the hang of it. In most situations it's the preferred option, because you can take your Docker container and choose where its relevant files live, which lets you properly isolate your applications. And on single-purpose servers, it makes deploying applications and maintaining dependencies significantly easier.
At the very least, it's a great tool to add to your toolbox to use as needed.
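
A minimal sketch of that pattern (the image name and paths are hypothetical): the container is told exactly where its files live on the host, which is what keeps the application isolated.

```
# Hypothetical image and paths, purely for illustration: run a service with
# its persistent files bind-mounted to a host directory of your choosing.
docker run -d \
  --name myapp \
  -p 8080:8080 \
  -v /srv/myapp/data:/data \
  myorg/myapp:latest
```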

[–] hottari@lemmy.ml 6 points 11 months ago

I am running all my software services with docker. It's stupid simple to manage and I have all of my running services in one paradigm.

[–] onlinepersona@programming.dev 6 points 11 months ago (34 children)

Why wouldn't you want to use containers? I'm curious. What do you use now? Ansible? Puppet? Chef?

[–] SeeJayEmm@lemmy.procrastinati.org 5 points 11 months ago

I was like you and avoided it for a long time. Dedicated-use, lean VMs for each thing I was running. I decided to learn it, mostly out of curiosity, and I'll be honest: I like the convenience of it a lot. Containers are easier to deploy and tend to have lower overhead than a single-purpose VM running the same software.

Around the same time I switched my VM server over to Proxmox and learned about LXC containers. Those are also pretty nifty and a nice middle ground between a full VM and a Docker container.

Currently I have a mixed environment because I like to use my homelab to learn, but most new stuff I deploy tends to go in this order: Docker > LXC > full VM.

[–] marzhall@lemmy.world 5 points 11 months ago (1 children)

It's convenient. It can't hurt to get used to it, for sure, in that it's useful not to have to go through dependency hell installing things sometimes. It's based on kernel features I don't see Linus pulling out, so I think you'll only see more of it.

As someone who runs *nix-only at home, I mostly use its underlying tech in the form of snaps/flatpaks, though. I use Docker itself at work constantly, but at home, snaps/flatpaks do the "minimize thinking about dependencies and building" bit in a workflow more convenient for desktop applications.

[–] alphacyberranger@lemmy.world 5 points 11 months ago

Learning Docker is always a big plus. It's not hard. If you are comfortable with CLI commands, it should be a breeze. Even if you are not, you should get used to it very fast.

[–] WindowsEnjoyer@sh.itjust.works 5 points 11 months ago (1 children)

If you have a homelab and you're not using containers, you are missing out A LOT! Docker Compose is a beautiful thing for a homelab. <3
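
A minimal sketch of the kind of Compose setup being praised here, with nginx purely as a stand-in service and hypothetical paths:

```
mkdir -p ./data/web
cat <<'EOF' > docker-compose.yml
services:
  web:
    image: nginx:alpine                    # stand-in for whatever you actually run
    ports:
      - "8080:80"
    volumes:
      - ./data/web:/usr/share/nginx/html   # persistent files stay on the host
    restart: unless-stopped
EOF
docker compose up -d    # bring the whole stack up in the background
```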

[–] x3i@lemmy.x3i.tech 5 points 11 months ago

Yes. Let me give you an example of why it is very nice: I migrated one of my machines at home from an old x86-64 laptop to an arm64 ODROID this week. I had a couple of applications running, 8 or 9 of them, all organized in a Docker Compose file with all persistent storage volumes mapped to plain folders in a directory. All I had to do was stop the Compose setup, copy the folder structure, install Docker on the new machine, and start the Compose setup. There was one minor hiccup because I forgot that one of the containers was built locally, but since all the other software has arm64 images available under the same name, it just worked. Changed the host IP and done.

One of the very nice things is the portability of containers, as well as the reproducibility (within limits) of the applications, since you divide them into stateless parts (the container) and stateful parts (the volumes). Definitely give it a go!
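
A sketch of that migration, assuming the compose file and its volume folders live together under one directory (paths and hostname are hypothetical):

```
# On the old x86-64 machine: stop the stack, then copy the directory that
# holds the compose file and the bind-mounted volume folders.
docker compose down
rsync -a ~/stack/ odroid:~/stack/

# On the new arm64 machine, after installing Docker: pulling the same image
# names fetches the arm64 variants automatically, as long as the images are
# multi-arch.
ssh odroid 'cd ~/stack && docker compose up -d'
```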

[–] irotsoma@lemmy.world 5 points 11 months ago (5 children)

Docker is nice for things that have complex installations, where I want a very specific implementation that I don't plan to tweak very much. Otherwise, it's more hassle than it's worth. There are lots of networking issues, like limited/experimental support for IPv6, and too much is hidden and preconfigured, making it difficult to make adjustments that would otherwise just be a config-file change.

So it is good for products like a mail server, where you want to use the exact software stack they ship, say Postfix + Dovecot + Roundcube + nginx + acme + MySQL + SpamAssassin + amavisd, etc. But if you want to use an existing reverse proxy and cert setup, or want to use a different spam filter or database, it becomes a huge hassle.
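
For reference, the kind of adjustment being alluded to: Docker's IPv6 support is off by default and lives in the daemon configuration rather than in any per-app config file. A sketch, with an example ULA prefix:

```
# Enable IPv6 for the default bridge network in the daemon config, then
# restart the daemon. The prefix below is an example; pick one for your LAN.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:d0c0::/64"
}
EOF
sudo systemctl restart docker
```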

[–] SpaceCadet@feddit.nl 4 points 11 months ago

I think it's a good tool to have on your toolbelt, so it can't hurt to look into it.

Whether you will like it or not, and whether you should move your existing stuff to it is another matter. I know us old Unix folk can be a fussy bunch about new fads (I started as a Unix admin in the late 90s myself).

Personally, I find docker a useful tool for a lot of things, but I also know when to leave the tool in the box.

[–] jrbaconcheese@yall.theatl.social 4 points 11 months ago* (last edited 11 months ago)

As someone who is not a former sysadmin and only vaguely familiar with *nix, I've been able to turn my home NAS (bought strictly to hold photos and videos backed up from our phones) into a home media server by installing Docker, learning how the yml files work, how containers network, etc., and it's been awesome.
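
A sketch of that kind of setup, with Jellyfin standing in as one popular media server and hypothetical NAS paths; a single yml file maps the existing photo/video share into the container:

```
cat <<'EOF' > docker-compose.yml
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"                 # web UI
    volumes:
      - ./config:/config            # server settings and metadata
      - /mnt/nas/media:/media:ro    # the existing photo/video share, read-only
    restart: unless-stopped
EOF
docker compose up -d
```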

[–] Presi300@lemmy.world 4 points 11 months ago* (last edited 11 months ago)

Yeah, I'm not a sysadmin in any meaning of the word, but Docker is so simple that even I got around to figuring it out. To me it just exists to save time and prevent headaches (dependency hell).

[–] Smk@lemmy.ca 3 points 11 months ago (1 children)

I would never go back to installing something without Docker. Never.

[–] floridaman@lemmy.blahaj.zone 3 points 11 months ago

Some people seem to hate on it, but I love Docker: it works well for what it has to do and has relatively low overhead as far as I can tell. I personally virtualize a Debian server on Proxmox for my containers, just to keep everything even more compartmentalized, though that takes more work than it's worth to set up.

And if you don't like Docker for whatever reason, you can also try Podman which is API compatible with Docker for the most part.

[–] TheHolm@aussie.zone 3 points 11 months ago (2 children)

Try other container technologies like LXC, or go all the way and play with FreeBSD jails. The quality of the Docker images you can find around is horrendous, given that Docker itself is built for convenience, not security. It is not something I will trust.

[–] valaramech@kbin.social 4 points 11 months ago

There's nothing wrong with OCI images. If you're concerned about the security of Docker (which, IMO, you should be), there are other container runtimes that don't have its security tradeoffs (e.g. Podman).
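
A sketch of what the swap looks like in practice; Podman accepts the same CLI verbs as Docker and runs rootless by default, which is the security tradeoff referred to here (the image is just an example):

```
# Same verbs as Docker, but no root daemon; Podman prefers fully-qualified
# image names.
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
podman ps
podman stop web && podman rm web

# Many people go as far as aliasing it and carrying on as before:
alias docker=podman
```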

[–] DontNoodles@discuss.tchncs.de 3 points 11 months ago (6 children)

I hate it very much. I am sure it is due to my limited understanding of it, but I've been stuck on some things that were very easy for me using VMs.

We have two networks, one of which has very limited internet connectivity, behind proxy. When using VMs, I used to configure everything: code, files, settings on a machine with no restrictions; shut it down; move the VM files to the restricted network; boot and be happily on my way.

I'm unable to make this work with Docker. Getting my Ubuntu server to fetch its updates behind the proxy is easy enough; setting it up for Python's pip is another level; realising that specific Python libraries need special keys to work around proxies is yet another; figuring out how to get it all done for Docker, and for Python under it, is where I gave up. Why can it not be as simple as the VM!

Maybe I'm not searching with the right terms, or maybe I should go and learn Docker "properly", but there is no doubt that using Docker is much more difficult for my use case than using VMs.

[–] SpaceCadet@feddit.nl 3 points 11 months ago* (last edited 11 months ago) (1 children)

Huh? Your Docker container shouldn't be calling pip for updates at runtime; you should consider a container immutable and ephemeral. Stop thinking about it as a mini VM. Build your container (presumably pip-ing in all the libraries you require) on the machine with full network access, then export or publish the container image and run it on the machine with limited access. If you want updates, you regularly rebuild the container image and repeat.
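
A sketch of that workflow (the image name is hypothetical): build where the network is, then move the image as a plain file, the same way the VM images were moved:

```
# On the machine with full network access: build the image (all pip installs
# happen here, at build time) and serialize it to a tarball.
docker build -t myapp:latest .
docker save myapp:latest -o myapp.tar

# Move myapp.tar to the restricted network the same way you moved VM images,
# then on the target machine:
docker load -i myapp.tar
docker run -d --name myapp myapp:latest
```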

Alternatively, even at build time it's fairly easy to use a proxy with docker, unless you have some weird proxy configuration. I use it here so that updates get pulled from a local caching proxy, reducing my internet traffic and making rebuilds quicker.
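
And a sketch of the build-time proxy variant (the proxy address is hypothetical):

```
# HTTP_PROXY/HTTPS_PROXY are predefined build args, so the Dockerfile does
# not need to declare them for apt/pip traffic to go through the proxy.
docker build \
  --build-arg HTTP_PROXY=http://proxy.internal:3128 \
  --build-arg HTTPS_PROXY=http://proxy.internal:3128 \
  -t myapp:latest .
```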

[–] P1r4nha@feddit.de 3 points 11 months ago

Definitely not a fad. It's used all over the industry. It gives you a lot more control over the environment where your hosted apps run. There may be some overhead, but it's worth it.
