this post was submitted on 19 Aug 2023
22 points (92.3% liked)

Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

TL;DR: I want to keep my containers up to date. Currently: Portainer-based compose files, updated by Renovate. How do you do it?

Status Quo

I'm hosting a few containers on my Unraid home server for personal use, but I don't use the Unraid web interface to control them. Instead I run Portainer CE in a container on the host. Within Portainer I use the "Stacks" feature to define my containers. The stack files (basically docker-compose files) reside in a private GitHub repository. I configured Renovate to open pull requests against that repository whenever new tags are published for the container images.
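For context, a minimal Renovate config for this kind of setup might look like the sketch below; the `stacks/` path pattern is an assumption about the repo layout, not something from the post:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "docker-compose": {
    "fileMatch": ["(^|/)stacks/.*\\.ya?ml$"]
  }
}
```

The `docker-compose` manager tells Renovate to look for image tags inside compose files rather than only in Dockerfiles.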

Issues

Currently I'm not really satisfied with that workflow. These are the issues I have:

  • It's not really automatic: I still have to manually approve the pull requests on GitHub, even though I don't test them before applying.
  • I once updated a container whose application changed its database structure, and I had to manually restore the application data from a backup.
  • Some containers I use don't have proper versioning (e.g. only a "latest" image).
  • For some containers Renovate doesn't open pull requests for updates. I think it's because the images are not on Docker Hub but on GitHub's registry or elsewhere.
  • Adding new stacks to Portainer is cumbersome: I have to specify the Git repository, the path to the docker-compose file, and credentials every time.

Wishlist

What I would like to have:

  • Automatic updates to my containers (bug fixes, new features, security fixes)
    • Updates should apply automatically unless I pin the image tag/version
  • Before a container is updated, it should be shut down and a copy of the application data should be created
  • If the container exits unexpectedly after an update, it should roll back automatically, notify me, and pause further updates for that container until I resume them
  • Container definitions should live in version-controlled text, e.g. docker-compose files in a Git repo
  • The solution should be self-hosted

Questions

I'm aware of Watchtower, but as far as I can tell it only updates the live configuration of the system, so there's no version control and no rollbacks. What do you folks think? Are my requirements stupid overkill for a homeserver? How do you keep your container-based applications up to date?

top 10 comments
[–] psmt@lemmy.pcft.eu 8 points 1 year ago (1 children)

It looks like you are trying to reinvent parts of kubernetes.

I would recommend giving it a try; it's easy to spin up with k3s, even on a single node!

Set imagePullPolicy to Always in your Deployments (more or less the k8s version of compose files) together with the latest tag; then every time you restart a Deployment, you get the latest version, with automatic rollback. Set the tag to a static version and it won't update as long as you don't change it.
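As a sketch, the relevant fields in a Deployment manifest look like this (the name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: ghcr.io/example/myapp:latest  # swap in a fixed tag to pin the version
          imagePullPolicy: Always              # re-pull the image on every pod (re)start
```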

For gitops, add fluxcd.io and you're set, it doesn't even require a CI workflow.

For the data copy, k8s provides Volume Snapshots https://kubernetes.io/docs/concepts/storage/volume-snapshots/
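A VolumeSnapshot is a small object you create before an update; this sketch assumes a CSI storage driver with snapshot support, and the names are placeholders:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: myapp-data-pre-update
spec:
  volumeSnapshotClassName: csi-snapclass      # depends on your CSI driver
  source:
    persistentVolumeClaimName: myapp-data     # the PVC holding the app's data
```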

[–] diecknet@discuss.tchncs.de 2 points 1 year ago* (last edited 1 year ago)

Oh, lol! I mean I was totally aware of Kubernetes existing as an enterprise grade container solution, but didn't really consider that it could fit my needs. Makes so much sense that they have a feature like Volume snapshots. Gonna look into Kubernetes/k3s. Thanks!

[–] Haui@discuss.tchncs.de 4 points 1 year ago (1 children)

I'm using Watchtower. Unless you tell it to delete old containers directly, you always keep the "old version" usable. Depending on what your container runs (e.g. a database that is unusable after a rollback), you could add a manual DB dump (you can probably add that script to Watchtower), which you can then re-import in case of failure.

But honestly, a solid backup of my configs/files combined with watchtower is pretty neat. No need to intervene at all.
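For reference, a minimal Watchtower compose service along these lines might look like this (the poll interval is an example value):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=false        # keep old images around for manual rollback
      - WATCHTOWER_POLL_INTERVAL=86400  # check for new images once a day
```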

Good luck. :)

[–] Bristlerock@kbin.social 2 points 1 year ago (1 children)

This is what I do. I find keeping 20-odd docker-compose files (almost always static content) backed up to be straightforward.

Each is configured to bring up/down the whole stack in the right order, so any Watchtower-triggered update is seamless. My Gotify container sends me an update every time one changes. I use Portainer to manage them across two devices, but that's just about convenience.

I disable Watchtower for twitchy containers, and handle them manually. For the rest, the only issue I've seen is if there's a major change in how the container/stack is built (a change in database, etc), but that's happened twice and I've been able to recover.
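Per-container opt-out is done with a label on the service you want Watchtower to skip; the service name and image here are hypothetical:

```yaml
services:
  twitchy-app:
    image: example/twitchy:latest
    labels:
      - com.centurylinklabs.watchtower.enable=false  # exclude from auto-updates
```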

[–] Haui@discuss.tchncs.de 1 points 1 year ago (1 children)

Nice! For me it's like 10+ stacks and maybe 15 containers, also all managed by changing compose files, which I'm constantly improving: adding .env files, moving databases and heavy-workload paths to SSDs instead of HDDs, and so on. It's insane what you can do.

Backing it up is less easy for me since I have to dump 3 databases and copy lots of config files. But tar.gz is my friend. :)

[–] Bristlerock@kbin.social 1 points 1 year ago (1 children)

Yeah, it makes for a nice workflow, doesn't it? It doesn't give you the "fully automated" achievement, but it's not much of a chore. :)

Have you considered something like borgbackup? It does good deduplication, so you won't have umpteen copies of unchanged files.
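A typical borg workflow is a one-time init plus a create/prune pair per backup run; the repo path and source directories below are examples, not anything from the thread:

```shell
# one-time: create an encrypted repository
borg init --encryption=repokey /mnt/nas/borg-repo

# per backup run: archive configs and volumes, then thin out old archives
borg create --stats --compression zstd \
    /mnt/nas/borg-repo::docker-{now:%Y-%m-%d} \
    /opt/docker/configs /opt/docker/volumes
borg prune --keep-daily 7 --keep-weekly 4 /mnt/nas/borg-repo
```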

I use it mostly to back up my daily-driver laptop to my NAS, and the GitLab CE container running on the NAS acts as the equivalent for its local Git repos, which are then straightforward to copy elsewhere. Though I haven't got it scripting anything like bouncing containers or DB dumps.

[–] Haui@discuss.tchncs.de 1 points 1 year ago

It’s pretty awesome but I think I still need to improve a lot of stuff.

Sadly, deduplication doesn't help me for my Docker configs, as there aren't many files. It would help for my bulk-storage backup, but I think iDrive already has something like this in their scheduling program.

[–] eluvatar@programming.dev 3 points 1 year ago

We use Rancher fleet. It monitors a repo for k8s YAML files and applies them to the cluster automatically. But that doesn't sound like it would work for you.

As for PRs, I'm sure you can set up a GitHub workflow to automatically merge PRs (I'd make sure to filter them by author though).
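One possible shape for such a workflow, filtered by author as suggested; the actor name assumes the hosted Renovate app, and Renovate itself also has a built-in `automerge` config option that avoids the workflow entirely:

```yaml
# .github/workflows/automerge.yml (sketch)
name: Auto-merge Renovate PRs
on: pull_request
permissions:
  contents: write
  pull-requests: write
jobs:
  automerge:
    if: github.actor == 'renovate[bot]'   # only act on PRs opened by Renovate
    runs-on: ubuntu-latest
    steps:
      - run: gh pr merge --auto --squash "$PR_URL"
        env:
          PR_URL: ${{ github.event.pull_request.html_url }}
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```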

For the images without proper versions you can always use the image digest as a pinned reference to a specific image. Though whether that same image is still on Docker Hub is a different story.
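Digest pinning in a compose file looks like this; the image name is hypothetical and the digest is a placeholder, which you can look up for a pulled image with `docker inspect --format '{{index .RepoDigests 0}}' IMAGE`:

```yaml
services:
  app:
    # a digest is immutable, unlike a tag such as :latest
    image: ghcr.io/example/app@sha256:<digest>
```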

A lot of your wishlist could be done quite easily with kubernetes. The automatic update isn't built in but tools exist to help you with that (even rancher fleet).

[–] ninjan@lemmy.mildgrim.com 2 points 1 year ago

I'm planning CI/CD for my own needs as well, which is basically what you're proposing. My plan is to build it service by service in Jenkins, or perhaps another similar tool, though I'll likely stick with Jenkins since I'm familiar with it from work.

Jenkins can fetch the configuration for each flow (project) from Git, so I don't need to interact with Jenkins much at all. Notifications will go through Matrix. Backups go to my S3 (Swift), which in turn is backed up to Dropbox so it's off-site as well.

It will then poll for changes in code or, where applicable, for new container versions. It also has a decent API, so I can trigger builds on commit and the like.
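The polling setup described here maps onto a declarative Jenkinsfile; this is a minimal sketch, and the deploy step is a placeholder for whatever each service actually needs:

```groovy
// Jenkinsfile sketch: poll the Git repo every ~15 minutes, redeploy on change
pipeline {
    agent any
    triggers {
        pollSCM('H/15 * * * *')
    }
    stages {
        stage('Deploy') {
            steps {
                sh 'docker compose pull && docker compose up -d'
            }
        }
    }
}
```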

[–] Decronym@lemmy.decronym.xyz 1 points 1 year ago

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Git: Popular version control system, primarily for code
NAS: Network-Attached Storage
k8s: Kubernetes container management package

[Thread #85 for this sub, first seen 28th Aug 2023, 00:35] [FAQ] [Full list] [Contact] [Source code]