this post was submitted on 10 Dec 2024
25 points (100.0% liked)

This hasn't happened to me yet but I was just thinking about it. Let's say you have a server with an iGPU, and you use GPU passthrough to let VMs use the iGPU. And then one day the host's ssh server breaks, maybe you did something stupid or there was a bad update. Are you fucked? How could you possibly recover, with no display and no SSH? The only thing I can think of is setting up serial access for emergencies like this, but I rarely hear about serial access nowadays so I wonder if there's some other solution here.

top 24 comments
[–] MNByChoice@midwest.social 10 points 1 week ago (1 children)

Serial is still a thing.
Get a cheap video card.
Or a USB to VGA adapter.
A server-class system with a BMC.
Live CD.

There are options.

[–] berylenara@sh.itjust.works 1 points 1 week ago* (last edited 1 week ago) (1 children)

Serial is still a thing.

Good to know 👍

Get a cheap video card.

I'd be tempted to just pass it through as well 😅

Live CD.

Doesn't work if you have an encrypted disk (never mind, I was wrong about this)

Or a USB to VGA adapter.

A server-class system with a BMC.

Interesting ideas, I'll look into them, thanks

[–] eldavi@lemmy.ml 1 points 1 week ago (1 children)

Doesn't work if you have an encrypted disk

Is this because you are unable to provide the encryption password?

[–] berylenara@sh.itjust.works 1 points 1 week ago

I was wrong, got confused about how secure boot and disk encryption worked 😅

[–] Max_P@lemmy.max-p.me 7 points 1 week ago (1 children)

I just have a boot entry that doesn't do the passthrough, doesn't bind to vfio-pci, and doesn't start the VMs on boot, so I can inspect and troubleshoot.

[–] berylenara@sh.itjust.works 2 points 1 week ago (1 children)

That sounds brilliant. Have any resources to learn how to do something like this? I've never created custom boot entries before.

[–] Max_P@lemmy.max-p.me 5 points 1 week ago (1 children)

I use systemd-boot so it was pretty easy, and it should be similar in GRUB:

title My boot entry that starts the VM
linux /vmlinuz-linux-zen
initrd /amd-ucode.img
initrd /initramfs-linux-zen.img
options quiet splash root=ZSystem/linux/archlinux rw pcie_aspm=off iommu=on systemd.unit=qemu-vms.target

What you want is that part: systemd.unit=qemu-vms.target, which tells systemd which target to boot into. I launch my VMs with scripts, so I have the qemu-vms.target depend on the VMs I want to autostart. A target is a set of services to run for a desired system state, the default usually being graphical or multi-user, but really it can be anything and use whatever set of services you want: start network, don't start network, mount drives, don't mount drives, entirely up to you.
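
Roughly, the target and one of the VM services could look like this (simplified; the unit and script names are just made-up examples):

# /etc/systemd/system/qemu-vms.target (simplified example)
[Unit]
Description=System state that autostarts my VMs
Requires=multi-user.target
After=multi-user.target

# /etc/systemd/system/win10-vm.service (hypothetical VM launcher)
[Unit]
Description=Windows 10 VM with GPU passthrough
After=network.target

[Service]
Type=simple
# assumes the script runs qemu in the foreground
ExecStart=/usr/local/bin/start-win10-vm.sh
ExecStop=/usr/local/bin/stop-win10-vm.sh

[Install]
WantedBy=qemu-vms.target

systemctl enable win10-vm.service hooks the service into the target, so the VM only comes up when that boot entry is used; an entry without systemd.unit=qemu-vms.target boots to the normal default and leaves the GPU alone.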

https://man.archlinux.org/man/systemd.target.5.en

You can also see if there's a predefined rescue target that fits your need and just goes to a local console: https://man.archlinux.org/man/systemd.special.7.en
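
For example, a spare entry that boots to a local root shell without any of the VM stuff could just reuse the same kernel line with a different unit:

title Rescue (no VMs, local root shell)
linux /vmlinuz-linux-zen
initrd /amd-ucode.img
initrd /initramfs-linux-zen.img
options root=ZSystem/linux/archlinux rw systemd.unit=rescue.target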

[–] berylenara@sh.itjust.works 1 points 1 week ago

This looks simple enough, I'll have a crack at it this weekend. Thank you

[–] horse_battery_staple@lemmy.world 6 points 1 week ago* (last edited 1 week ago) (1 children)

Boot to a live disk.

Edit the VM config so it doesn't start at boot.

Mount the host's disk from the live environment.

Fix SSH.
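
Roughly, something like this from the live environment (device, VM, and service names are just examples; adjust to your layout):

mount /dev/sda2 /mnt                 # the host's root filesystem
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt
# if the VMs are libvirt-managed, autostart is just a symlink you can delete:
rm /etc/libvirt/qemu/autostart/myvm.xml
# then repair whatever broke sshd and make sure it starts on boot:
systemctl enable sshd                # ssh.service on Debian/Ubuntu
exit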

[–] berylenara@sh.itjust.works 0 points 1 week ago* (last edited 1 week ago)

As mentioned in another reply, this doesn't work if you have an encrypted disk. The price of security, I suppose.

Edit: never mind, I thought that secure boot and disk encryption would prevent you from mounting the disk on another system, but that appears to be wrong.

[–] InEnduringGrowStrong@sh.itjust.works 3 points 1 week ago (1 children)

I pass through a GPU (no iGPU on this mobo).
It only hijacks the GPU when I start the VM, for which I haven't configured autostart.
Before the VM is started, it's showing the host prompt. It doesn't return to the prompt if the VM is shut down or crashes, but a reboot does, hence not autostarting that VM.
If it got borked too badly, putting in a temporary GPU might be easier.

Also, don't break your ssh.
Pretty easy with PKI auth.
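
Something like the usual key-only bits in sshd_config, plus a config check before restarting so a typo can't take sshd down:

# /etc/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin prohibit-password

# validate before restarting, so a broken config doesn't lock you out:
sshd -t && systemctl restart sshd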

[–] berylenara@sh.itjust.works 1 points 1 week ago (1 children)

It only hijacks the GPU when I start the VM

How did you do this? All the tutorials I read hijack the GPU at startup. Do you have to manually detach the GPU from the host before assigning it to the VM?

[–] InEnduringGrowStrong@sh.itjust.works 2 points 1 week ago (1 children)

Interesting.
I'm not doing anything special that wasn't in one of the popular tutorials, and I thought that's how it was supposed to work, although how it behaves right now might very well be a "bug".

I don't know enough about this, but the drivers are blacklisted on the host at boot, yet the console is still displayed through the GPU's HDMI at that time, which might depend on the specific GPU (a Vega 64 in my case).

The host doesn't have a graphical desktop environment, just the shell.

[–] berylenara@sh.itjust.works 3 points 1 week ago

the drivers are blacklisted on the host at boot

This is the problem I was alluding to, though I'm surprised you're still able to see the console despite the driver being blacklisted. I've heard of people using scripts to manually detach the GPU and attach it to a VM, but it sounds like you don't need that, which is interesting.
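
For reference, the manual approach I've seen looks roughly like this with libvirt (the PCI addresses and VM name are placeholders):

# detach the GPU (and its HDMI audio function) from the host just before
# starting the VM, instead of binding it to vfio-pci at boot:
virsh nodedev-detach pci_0000_01_00_0
virsh nodedev-detach pci_0000_01_00_1
virsh start gaming-vm
# ...and hand it back to the host after the VM shuts down:
virsh nodedev-reattach pci_0000_01_00_0
virsh nodedev-reattach pci_0000_01_00_1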

[–] qjkxbmwvz@startrek.website 2 points 1 week ago (1 children)

For very simple tasks you can usually log in blind and run commands. I've done this for things like rebooting or bringing up a network interface. It's maybe not the smartest, but basically: just type root, the root password, and dhclient eth0 or whatever magic you need. No display required, unless you make a typo...

In your specific case, you could have a shell script that stops the VMs and disables passthrough, so you just log in and invoke that script. Bonus points if you create a dedicated user with that script set as their shell (or just put it in the appropriate rc dotfile).
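
A rough sketch of such a script (all names and PCI addresses here are made up, and it needs to run as root):

#!/bin/sh
# /usr/local/sbin/rescue.sh: stop the VM, give the GPU back to the host, restart sshd
virsh destroy gaming-vm 2>/dev/null
virsh nodedev-reattach pci_0000_01_00_0
virsh nodedev-reattach pci_0000_01_00_1
systemctl restart sshd

Then a blind login as root followed by /usr/local/sbin/rescue.sh is all you have to type; the dedicated-user variant just saves remembering the path, as long as that account is allowed to run those commands.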

[–] berylenara@sh.itjust.works 1 points 1 week ago

I'll admit I've done this too 😅 Not ideal but a good idea nonetheless

[–] NeoNachtwaechter@lemmy.world 1 points 1 week ago (1 children)

Proxmox on the host. It uses a web server for admin stuff.

No other things running on the host --> no other things that break on the host.

[–] berylenara@sh.itjust.works 1 points 1 week ago

If you want to lock down the web server and ssh behind a VPN, that's where you can fuck up and lock yourself out though.

[–] mvirts@lemmy.world 1 points 1 week ago (1 children)

Live boot, plug in a display?

Maybe I'm missing something here, but won't booting from live media run a normal environment?

If you don't have a live boot option you can also pull the disk and fix it on another machine, or put a different boot disk in the system entirely.

You can probably also disable hardware virtualization extensions in the BIOS to break the VM so it doesn't steal the graphics card.

[–] berylenara@sh.itjust.works 3 points 1 week ago* (last edited 1 week ago) (2 children)

A rescue ISO doesn't work if you have an encrypted disk. I thought everybody encrypted their disks nowadays.

If you don’t have a live boot option you can also pull the disk and fix it on another machine, or put a different boot disk in the system entirely.

This is an interesting idea though; as long as the other machine has a different GPU, the system shouldn't hijack it on startup.

You can probably also disable hardware virtualization extensions in the bios to break the VM so it doesn’t steal the graphics card.

AFAIK GPU passthrough is usually configured to detach the GPU from the host automatically on startup. So even if all the VMs were broken, the GPU would still be detached. However, as another commenter pointed out, it's possible to detach it manually instead, which might be safer against accidental lockouts.
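
That boot-time grab is typically done with something like this (the vendor:device IDs are just examples for a hypothetical AMD card), so once it's in place the host never touches the GPU, whether or not any VM actually starts:

# /etc/modprobe.d/vfio.conf
options vfio-pci ids=1002:687f,1002:aaf8
softdep amdgpu pre: vfio-pci
# (or vfio-pci.ids=1002:687f,1002:aaf8 on the kernel command line)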

[–] Max_P@lemmy.max-p.me 2 points 1 week ago (1 children)

How's the disk encrypted? I've never heard of anyone setting up an encrypted drive such that you can't manually mount it with the password. It's possible, but you'd have to go out of your way to do that and encrypt the drive with only a TPM-managed key. It's kind of a bad idea, because if you lock yourself out, your data's gone.
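
Assuming it's LUKS (the typical setup), from a live ISO it's just something like this (device name is an example):

cryptsetup open /dev/nvme0n1p2 cryptroot   # prompts for the passphrase
mount /dev/mapper/cryptroot /mnt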

[–] berylenara@sh.itjust.works 1 points 1 week ago

I was confused on how secure boot and disk encryption worked, ignore me 😅

[–] mvirts@lemmy.world 1 points 1 week ago (1 children)

😅 Nah, for me encryption is a bigger risk than theft.

That said, you should be able to decrypt your disks with the right key even on a live boot. Even if the secrets are in the TPM, you should be able to use whatever your normal system uses to decrypt the disks.

If you don't enter a password to boot, the keys are available. If you do, the password can decrypt the keys, AFAIK.

Again, I don't do this, but that's what I've picked up here and there, so take it with a grain of salt; I may be wrong.

[–] berylenara@sh.itjust.works 2 points 1 week ago

Actually, that might work. I thought that secure boot and disk encryption would prevent mounting the disk on a different system, but now I can't think of any reason why it would. Good idea.