82 points | submitted 24 Sep 2023 by Fjor@lemm.ee to c/selfhosted@lemmy.world

So, I am thinking about getting myself a NAS to host mainly Immich and Plex. Got a couple of questions for the experienced folk:

  • Is Synology the best/easiest way to start? If not, what are the closest alternatives?
  • What OS should I go for? OMV, Synology's OS, or UNRAID?
  • Mainly gonna host Plex/Jellyfin and Synology Photos/Immich - haven't quite decided which solutions to go for.

Appreciate any tips ✨

[-] talentedkiwi@sh.itjust.works 13 points 1 year ago

I have Proxmox on bare metal with an HBA card passed through to TrueNAS Scale. I've had good luck with this setup.

The HBA card is passed through to TrueNAS so it gets direct control of the drives for ZFS. I got mine on eBay.
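
For anyone curious what that passthrough actually looks like on the Proxmox host, a rough sketch (the PCI address and VMID are placeholders, and this assumes IOMMU is already enabled in the BIOS and on the kernel command line):

```
# find the HBA's PCI address (LSI/Broadcom HBAs usually show up as SAS controllers)
lspci -nn | grep -i sas

# hand the whole card to the TrueNAS VM (VMID 100 here)
qm set 100 -hostpci0 0000:01:00.0
```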

I'm running Proxmox so that I can separate some of my services (e.g. a Plex LXC) into their own VMs and containers.

[-] thejevans@lemmy.ml 5 points 1 year ago

This is a great way to set this up, and I'm moving over to it in a few days. Right now I have a temporary setup with ZFS directly on Proxmox and an OMV VM handling shares, because my B450 motherboard's IOMMU groups won't let me pass my GPU and an HBA through to separate VMs (note for OP: if you cannot pass your HBA through to a VM, this setup is not a good idea). As a replacement I ordered an ASRock X570 Phantom Gaming motherboard ($110 on Amazon right now, a great deal) that will have more separate IOMMU groups.
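
If you want to see how your own board splits things up before buying anything, you can list the IOMMU groups from the Proxmox host shell; a minimal sketch:

```
#!/bin/sh
# print each IOMMU group and the PCI devices inside it
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done
```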

My old setup was similar but used ESXi instead of Proxmox. I also went nuts and virtualized pfSense on the same PC. It was surprisingly stable, but I'm keeping my gateway on a separate PC from now on.

[-] yote_zip@pawb.social 3 points 1 year ago

If you can't pass through your HBA to a VM, feel free to manage ZFS through Proxmox instead (CLI or with something like Cockpit). While TrueNAS is a nice GUI for ZFS, if it's getting in the way you really don't need it.
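
For the CLI route, it's mostly plain ZFS commands plus optionally registering the pool with Proxmox as storage. A minimal sketch, with the pool/dataset names and disk IDs as placeholders:

```
# create a mirrored pool from whole disks (use the stable /dev/disk/by-id paths)
zpool create -o ashift=12 tank mirror \
    /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

# carve out datasets for your data
zfs create tank/media
zfs create tank/photos

# optionally let Proxmox use the pool for VM/LXC disks
pvesm add zfspool local-tank --pool tank
```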

[-] thejevans@lemmy.ml 3 points 1 year ago

TrueNAS has nice defaults for managing snapshots and the like that make it a bit safer, but yeah, as I said, I run ZFS directly on Proxmox right now.

[-] yote_zip@pawb.social 1 point 1 year ago

Oh sorry, for some reason I read OMV VM and assumed the ZFS pool was set up there. The Cockpit ZFS Manager extension that I linked has good snapshot management as well, which may be sufficient depending on how much power you need.
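
And if neither GUI ends up fitting, the snapshot basics are just a few zfs commands (the dataset and snapshot names here are placeholders):

```
zfs snapshot tank/photos@before-upgrade     # take a snapshot
zfs list -t snapshot -r tank/photos         # list snapshots for a dataset
zfs rollback tank/photos@before-upgrade     # roll back, discarding changes made since
```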

[-] thejevans@lemmy.ml 2 points 1 year ago
[-] InformalTrifle@lemmy.world 2 points 1 year ago

I’d love to find out more about this setup. Do you know of any blogs/wikis explaining that? Are you separating the storage from the compute with the HBA card?

[-] yote_zip@pawb.social 3 points 1 year ago

This is a fairly common setup and it's not too complex - learning more about Proxmox and TrueNAS/ZFS individually will probably be easiest.

Usually:

  • Proxmox on bare metal

  • TrueNAS Core/Scale in a VM

  • Pass the HBA PCI card through to TrueNAS and set up your ZFS pool there

  • If you run your app stack through Docker, set up a minimal Debian/Alpine host VM (you can technically use Docker under an LXC but experienced people keep saying it causes problems eventually and I'll take their word for it)

  • If you run your app stack through LXCs, just set them up through Proxmox normally

  • Set up an NFS share through TrueNAS, and connect your app stack to that NFS share (see the mount sketch after this list)

  • (Optional): Just run your ZFS pool on Proxmox itself and skip TrueNAS
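
For the NFS step above, the client side on the app VM might look something like this (the server IP, export path, and mount point are placeholders for whatever you configure in TrueNAS):

```
# on the Debian app VM (Alpine would use nfs-utils instead)
apt install nfs-common

# try the mount once by hand
mkdir -p /mnt/media
mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/media

# make it permanent
echo '192.168.1.50:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0 0' >> /etc/fstab
```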

[-] InformalTrifle@lemmy.world 2 points 1 year ago

I already run Proxmox but not TrueNAS. I'm really just confused about the HBA card. Probably a stupid question, but why can't TrueNAS access regular drives connected to SATA?

[-] yote_zip@pawb.social 1 points 1 year ago

The main problem is just getting TrueNAS access to the physical disks via IOMMU groups and passthrough. HBA cards are a super easy way to get a dedicated IOMMU group with all your drives attached, so it's common for people to use them in these sorts of setups. If you can pull your normal SATA controller down into the TrueNAS VM without messing anything else up on the host layer, it will work the same way as an HBA card as far as TrueNAS cares.

(To my knowledge, onboard SATA controllers are usually an all-or-nothing passthrough, so if your host system runs off some part of that controller it probably won't work to unhook it from the host and give it to the guest.)
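
A quick way to sanity-check that before trying it (device names are just examples):

```
# list the onboard SATA controller(s) and their PCI addresses
lspci -nn | grep -i sata

# see which PCI device the Proxmox boot disk hangs off of
readlink -f /sys/block/sda/device
# if that path contains the same PCI address as the controller you want to
# pass through, the host itself is using it and passthrough will break things
```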

[-] InformalTrifle@lemmy.world 2 points 1 year ago

Makes sense, thanks for the info

[-] talentedkiwi@sh.itjust.works 2 points 1 year ago

That was one of the things I got wrong at first as well. But it makes things much easier in the long run.

[-] talentedkiwi@sh.itjust.works 2 points 1 year ago

This is 100% my experience and setup. (Though I run Debian for my Docker VM.)

I did run Docker in an LXC but ran into some weird permission issues that shouldn't have existed. Ran the same setup again in a VM and had no issues, so I decided to keep it that way.

I do run my Plex and Jellyfin in LXCs though. No issues with that so far.
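
One wrinkle with media in an LXC: unprivileged containers generally can't mount NFS themselves, so a common pattern is to mount the TrueNAS export on the Proxmox host and bind-mount it into the container. A rough sketch (the VMID, server IP, and paths are placeholders):

```
# on the Proxmox host: mount the TrueNAS NFS export
mkdir -p /mnt/media
mount -t nfs 192.168.1.50:/mnt/tank/media /mnt/media

# bind-mount it into LXC 101 at /media
pct set 101 -mp0 /mnt/media,mp=/media

# restart the container to pick up the mount
pct stop 101 && pct start 101
```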

[-] rentar42@kbin.social 2 points 1 year ago

So theoretically, if someone has already set up their NAS (custom Debian with a ZFS root instead of TrueNAS, but that shouldn't matter), it sounds like it should be relatively straightforward to migrate all of that into a Proxmox VM by installing Proxmox "under it", right? The only thing I'd need right now is an SSD for Proxmox itself.

[-] yote_zip@pawb.social 0 points 1 year ago* (last edited 1 year ago)

Proxmox would be the host on bare metal, with your current install as a VM under that. I'm not sure how to migrate an existing bare-metal install into a VM, so it might require backing up configs and reinstalling.

You shouldn't need any extra hardware in theory, as Proxmox will let you split up the space on a drive to give to guest VMs.

(I'm probably misunderstanding what you're trying to do?)

[-] rentar42@kbin.social 1 point 1 year ago

I just thought that if all storage can easily be "passed through" to a VM then it should in theory be very simple to boot the existing installation in a VM directly.

Regarding the extra storage: sharing disk space between Proxmox and my current installation would imply that I have to pass through "half of a drive", which I don't think works like that. Also, I'm using ZFS for my OS disk and I don't feel comfortable trying to figure out whether I can resize those partitions without breaking anything ;-)

[-] yote_zip@pawb.social 0 points 1 year ago

That should work, but I don't have experience with it. In that case, yeah, you'd need a separate drive to install Proxmox on.
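
For reference, handing a whole physical disk to a VM looks roughly like this (the VMID and disk ID are placeholders); whether the old install actually boots inside the VM that way is the part I can't vouch for:

```
# attach the existing disk (by its stable ID) to VM 100 as an extra SCSI device
qm set 100 -scsi1 /dev/disk/by-id/ata-EXISTING_DISK_ID

# optionally try booting from it
qm set 100 -boot order=scsi1
```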
