this post was submitted on 13 Jun 2023
79 points (95.4% liked)

I see many posts asking about what other lemmings are hosting, but I'm curious about your backups.

I'm using duplicity myself, but I'm considering switching to borgbackup when 2.0 is stable. I've had some problems with duplicity: mainly, the initial sync took incredibly long, and at one point a few directories got corrupted (they could no longer be decrypted by gpg).

I run a daily incremental backup and send the encrypted diffs to a cloud storage box. I also use SyncThing to share some files between my phone and other devices, so those get picked up by duplicity on those devices.
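
For context, a daily incremental run of this kind can be a single cron-driven duplicity command; the GPG key ID, source path and storage-box URL below are placeholders, not the actual setup:

    # daily encrypted incremental, forcing a fresh full chain every month
    duplicity --encrypt-key ABCD1234 --full-if-older-than 1M \
        /home/user sftp://u12345@u12345.example-storagebox.net/backups/home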

(page 2) 47 comments
[–] jon@lemmy.tf 2 points 1 year ago

Got a Veeam community instance running on each of my VMware nodes, backing up 9-10 VMs each.

Using Cloudberry for my desktop, laptop and a couple Windows VMs.

Borg for non-VMware Linux servers/VMs, including my WSL instances, my bare-metal game/AI rig, and some Proxmox VMs I've got hosted with a friend.

Each backup agent dumps its backups into a share on my NAS, which then has a cron task to do weekly uploads to GDrive. I also manually do a monthly copy to an HDD and store it off-site with a friend.
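
The weekly GDrive upload amounts to one cron entry plus rclone; roughly like this, where the remote name, share path and schedule are illustrative:

    # /etc/crontab — sync the backup share to Google Drive every Sunday at 03:00
    0 3 * * 0  root  rclone sync /volume1/backups gdrive:nas-backups --log-file /var/log/rclone-weekly.log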

[–] TheCakeWasNoLie@lemmy.world 2 points 1 year ago (1 children)

Rsync script that does deltas per day using hardlinks. Found on the Arch wiki. Works like a charm.
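
For anyone curious, the Arch wiki approach is essentially rsync with --link-dest: unchanged files are hard-linked against the previous day's snapshot, so each day only costs the space of what changed. A minimal sketch with made-up paths:

    # today's snapshot, hard-linking unchanged files to yesterday's copy
    rsync -a --delete \
        --link-dest=/srv/backup/$(date -d yesterday +%F) \
        /home/ /srv/backup/$(date +%F)/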

[–] ptman@sopuli.xyz 1 points 1 year ago (1 children)
[–] TheCakeWasNoLie@lemmy.world 1 points 1 year ago

No. Rsync works fine, and it is easily testable (untested backups are no backups).

[–] poVoq@slrpnk.net 2 points 1 year ago

btrfs and btrbk work very well, tutorial: https://mutschler.dev/linux/fedora-btrfs-35/
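
Under the hood btrbk is scheduling read-only btrfs snapshots and replicating them with send/receive; the manual equivalent looks roughly like this (paths are examples):

    # read-only snapshot, then replicate it to the backup filesystem
    btrfs subvolume snapshot -r /home /snapshots/home.$(date +%F)
    btrfs send /snapshots/home.$(date +%F) | btrfs receive /mnt/backup/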

[–] Amius@yiffit.net 2 points 1 year ago

Holy crap. Duplicity is what I've been missing my entire life. Thank you for this.

[–] somedaysoon@lemmy.world 2 points 1 year ago
[–] cwiggs@lemmy.world 2 points 1 year ago

My important data is backed up via Synology DSM Hyper backup to:

  • Local external HDD attached via USB.
  • Remote to Backblaze (costs about $1/month for ~100 GB of data).

I also have Proxmox Backup Server back up all the VMs/CTs every few hours to the same external HDD used above. These backups aren't crucial; they would just make rebuilding easier if something went down.

[–] kabouterke@lemmy.world 2 points 1 year ago

In short: crontab, rsync, a local and a remote Raspberry Pi, and cryptfs on USB sticks.
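
In practice that is little more than a nightly cron line pushing to the off-site Pi over SSH; a sketch with invented host and paths:

    # nightly push to the remote Raspberry Pi (encrypted USB stick mounted at /mnt/usb-crypt)
    30 2 * * * rsync -a --delete /srv/data/ backup@remote-pi:/mnt/usb-crypt/data/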

[–] leopardboy@lemmy.world 1 points 1 year ago

On my home network, devices are backed up using Time Machine over the network. I also use Backblaze to make a second backup of data to their cloud service, using my own private key. Lastly, I throw some backups on a USB drive that I keep in a fire safe.

[–] hal@sopuli.xyz 1 points 1 year ago

restic + rclone crypt + whatever storage server/service is good enough. Currently using a Hetzner storage box for my backups, because they provide automatic snapshots on top of my backups.

I also use this setup for backups on servers, not only at home.
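
restic can use an rclone remote (including a crypt remote) as its backend directly, so the whole pipeline is roughly one command; the remote and repository names here are placeholders:

    # assumes an rclone crypt remote called "crypt-box" has already been configured
    restic -r rclone:crypt-box:restic init          # one-time repository setup
    restic -r rclone:crypt-box:restic backup /home/user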

[–] Oli@fedia.io 1 points 1 year ago (2 children)

In the process of moving stuff over to Backblaze. Home PCs, a few clients' PCs and client websites are all pointing at it now, and I'm happy with the service and price. Two Unraid instances push the most important data to an Azure storage account, but I imagine I'll move that to BB soon as well.
Docker backups are similar to the post above: tarball the whole thing weekly as a get-out-of-jail card. This isn't ideal, but it works for now until I can give it some more attention.
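
The weekly tarball step can be as simple as the following cron-driven sequence; the stack path and remote name are illustrative:

    # stop the stack, archive it, push it off-site, start it again
    docker compose -f /opt/stacks/docker-compose.yml stop
    tar czf /backups/stacks-$(date +%F).tar.gz /opt/stacks
    rclone copy /backups/stacks-$(date +%F).tar.gz bb:docker-backups/
    docker compose -f /opt/stacks/docker-compose.yml start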

*I have no link to BB other than being a customer who wanted to reduce reliance on scripts and move stuff out of Azure for cost reasons.

[–] Faceman2K23@discuss.tchncs.de 1 points 1 year ago

I back up everything to my home server... then I run out of money and cross my fingers that it doesn't fail.

Honestly though my important data is backed up on a couple of places, including a cloud service. 90% of my data is replaceable, so the 10% is easy to keep safe.

[–] craftymansamcf@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

For smaller backups (<10 GB each) I run a three-phase approach:

  • rsync to a local folder /srv/backup/
  • rsync that to a remote NAS
  • rclone that to a B2 bucket

These scripts run from cron, and I log the output to a file using the --log-file option of rsync/rclone so I can do spot checks of the results.

This way I have access to the data locally if the network is down, remotely on a different networked machine for any other device that can browse it, and finally an offsite cloud backup.
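
Strung together, the three phases of such a nightly script look roughly like this; the paths, NAS host and bucket name are illustrative:

    # phase 1: local copy
    rsync -a --delete --log-file=/var/log/backup-local.log /home/user/docs/ /srv/backup/docs/
    # phase 2: copy to the remote NAS
    rsync -a --delete --log-file=/var/log/backup-nas.log /srv/backup/ nas:/volume1/backup/
    # phase 3: off-site copy to B2
    rclone sync /srv/backup b2-backups:my-bucket --log-file=/var/log/backup-b2.log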

Doing this setup manually through rsync/rclone has been important for building the domain knowledge to think about the overall process: scheduling the backups at different times overnight so I don't overload the drive and network, ensuring versioning is kept for files that might require it, and making sure I am not using too many API calls for B2.

For large media backups (>200 GB) I only use the rclone script and set it to run for 3 hours every night after all the more important backups are finished. It's not important that it finishes ASAP; a steady drip of any changes up to B2 matters more.

My next step is to figure out a process to email the backup logs every so often, or to look into a full application with better error-catching capabilities to take over.

For any service/process backed up this way, I try to document a spot-testing process to confirm it works every 6 months:

  • For my important documents, I add an entry to my KeePass db, run the backup, navigate to the cloud service, download the new version of the db, and confirm the recently added entry is present.
  • For an application, I run through a restore process and confirm that certain config or data is present in the newly deployed app. This also forces me to have a fast restore script I can follow for any app if I need to do this every 6 months.

[–] sambal@lemmy.world 1 points 1 year ago

I use rclone to encrypt and send my most valuable data to OneDrive.

[–] milan@discuss.tchncs.de 1 points 1 year ago* (last edited 1 year ago)

I usually just use Restic (not just for servers). For big databases I pipe pg_dump directly into it, and for even bigger ones I recently moved to pgBackRest.

I ping a self-hosted Healthchecks instance to see if my backups still run (or rather, the other way around: it tells me when they stop).
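
The pg_dump pipe and the Healthchecks ping are both one-liners; the repository path, database name and ping URL below are placeholders:

    # dump straight into the restic repo, then tell Healthchecks the job finished
    pg_dump mydb | restic -r /srv/restic-repo backup --stdin --stdin-filename mydb.sql \
        && curl -fsS https://hc.example.org/ping/<check-uuid> > /dev/null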

On my main desktop (which recently became a Mac, I am sorry) I currently use Autorestic for multiple locations... it's nice to have that YAML, but, well, I am used to bash scripts anyway, so it is not that big of a benefit, I guess.

[–] tj@fedia.io 1 points 1 year ago* (last edited 1 year ago)

I have a central NAS server that hosts all my personal files and shares them (via SMB, SSH, Syncthing and Jellyfin). It also pulls backups from all my local servers and cloud services (Google Drive, OneDrive, Dropbox, Evernote, mail, calendar and contacts, etc.). It runs ZFS RAID 1 and snapshots every 15 minutes. Every night it backs up important files to Backblaze in a US region and Azure in an EU region (using restic).
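
The snapshot and off-site pieces of a setup like this boil down to a few recurring commands; the pool, bucket and container names here are invented:

    # every 15 minutes: local ZFS snapshot
    zfs snapshot tank/personal@auto-$(date +%Y%m%d-%H%M)
    # nightly: restic to both cloud regions (credentials supplied via environment variables)
    restic -r b2:us-backup-bucket:nas backup /tank/personal
    restic -r azure:eu-backups:/ backup /tank/personal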

I have a bootstrap procedure in place to do a "clean room recovery" assuming I have lost access to all my devices: I only need to remember a tediously long encryption password for a small package containing everything needed to recover from scratch. It is tested every year during the Christmas holidays, including comparing every single backed-up and restored file with the original via md5/sha256 checksums.

[–] ptman@sopuli.xyz 1 points 1 year ago

I'm moving from rsync+duplicity+borg towards bupstash.

[–] drwho@beehaw.org 1 points 1 year ago

All of my servers have shell scripts that rsync important stuff to a subdirectory. Other scripts run database dumps a couple of times a day.

My primary server at home then rsyncs my servers' backup subdirectories to its own, broken out by FQDN.

Leandra then uses Restic to back everything up (herself as well as the other servers' backups) to Backblaze B2 on a two-year cycle.
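
The pull-then-push pattern here is roughly the following, with hostnames, paths and bucket name as examples:

    # on the primary server: pull each box's staging directory, broken out by FQDN
    rsync -a web01.example.net:/srv/backup/ /backups/web01.example.net/
    # then one restic run over the whole tree to B2
    restic -r b2:my-bucket:backups backup /backups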

[–] xionzui@lemmy.world 1 points 1 year ago

I use backupninja for the scheduling and management of all the processes. The actual backups are done by rsync, rdiff, borg, and the b2 tool from backblaze depending on the type and destination of the data. I back up everything to a second internal drive, an external drive, and a backblaze bucket for the most critical stuff. Backupninja manages multiple snapshots within the borg repository, and rdiff lets me only copy new data for the large directories.

[–] wpuckering@lm.williampuckering.com 0 points 1 year ago (1 children)

I run all of my services in containers, and intentionally leave my Docker host as barebones as possible so that it's disposable (I don't back up anything aside from data belonging to the services themselves; the host can be launched into the sun without any backups and it wouldn't matter). I like to keep things simple yet practical, so I just run a nightly cron job that spins down all my stacks, creates archives of everything as-is at that time, and uploads them to Wasabi, AWS S3, and Backblaze B2. Then everything just spins back up, rinse and repeat the next night. I use lifecycle policies to keep the last 90 days' worth of backups.
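
Conceptually the nightly job is something like the following per stack; the stack path and bucket name are invented, and the real setup would loop over every stack and push to all three providers:

    docker compose -f /opt/stacks/app/docker-compose.yml stop
    tar czf /tmp/app-$(date +%F).tar.gz -C /opt/stacks app
    aws s3 cp /tmp/app-$(date +%F).tar.gz s3://nightly-backups/app/   # a lifecycle rule expires objects after 90 days
    docker compose -f /opt/stacks/app/docker-compose.yml start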

[–] palitu@lemmy.perthchat.org 1 points 1 year ago

I like the cut of your jib!

Any details on the scripts?

Backing up to Backblaze with Duplicacy.
