[–] ragebutt@lemmy.dbzer0.com 2 points 1 month ago (3 children)

Bitrot sucks

ZFS protects against this. It has historically been a pain for home users, but the recent raidz expansion implementation has made things a lot easier: you can now expand a raidz vdev and grow an array without doubling the number of disks.
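
For what it’s worth, the expansion itself is basically a one-liner once you’re on OpenZFS 2.3+. A rough sketch, with pool/vdev/device names made up; check `man zpool-attach` before trusting me:

```python
# rough sketch, not battle tested: grow an existing raidz vdev by one
# disk using raidz expansion (OpenZFS 2.3+). pool/vdev/device names
# here are made up for illustration
import subprocess

POOL = "tank"          # your pool name
VDEV = "raidz2-0"      # the existing raidz vdev to expand
NEW_DISK = "/dev/sdx"  # the new drive

# attaching a disk to a raidz vdev (rather than a mirror) is what
# triggers the expansion; it then rebalances in the background
subprocess.run(["zpool", "attach", POOL, VDEV, NEW_DISK], check=True)

# watch progress
subprocess.run(["zpool", "status", POOL], check=True)
```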

This is a potentially great option for someone like you who is just starting out, though it still requires a minimum of 3 disks and the associated hardware. Sucks for people like me who built arrays lonnnnng before ZFS had this feature! It was literally upstreamed less than a year ago, so good timing on your part (or maybe bad, maybe it doesn’t work well? I haven’t read much about it tbf, but from the small amount I have read it seems to work fine, and they did work on it for years).

Btrfs is also an option for similar reasons, since it has built-in protection against bitrot. If you read up on it there’s a lot of debate about whether it’s actually useful or dangerous; FWIW the consensus seems to be that for single drives it’s fine.

My setup includes a separate raid1 array of 2TB NVMe drives used as much higher speed cache/working storage for the services that run. E.g. a torrent downloads to the NVMe first, since that storage is much easier to work with than the slow rotational drives (made even slower by being in a massive array), and the file is moved to the large array in the middle of the night. Reading from the array is generally not an intensive operation, but writing to it can be, and a torrent that saturates my gigabit connection sometimes can’t keep up (same for operations that aren’t internet dependent, like muxing or transcoding a video file). Anyway, this cache array is btrfs and has had 0 issues. That said, I personally wouldn’t recommend btrfs for raid5/6, and given the nature of this array I don’t care at all about the data on it. The nightly move itself is nothing fancy, basically just the sketch below.
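
(Paths are made up for illustration; real movers, like unraid’s, also handle hardlinks, permissions, and in-progress files.)

```python
# toy version of the nightly "mover": shift finished downloads off the
# fast nvme cache onto the big array. run from cron at like 3am
import shutil
from pathlib import Path

CACHE = Path("/mnt/nvme-cache/complete")  # fast working storage
ARRAY = Path("/mnt/array/media")          # big slow array

for item in CACHE.iterdir():
    dest = ARRAY / item.name
    if not dest.exists():  # don't clobber anything already moved
        shutil.move(str(item), str(dest))
```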

My main array is XFS, which doesn’t protect against bitrot. What you can do in that scenario is what I do: once a week a plugin checksums all new files and verifies the checksums of old files. If a checksum doesn’t match it warns me, and I can restore the bad file from backup and investigate the cause (SMART errors, bad SATA cable, ECC problem with RAM, etc). The upside of the XFS array is that I can expand it very easily and storage is maximized: I have 2 parity drives, and at any point I can simply pop in another drive and extend the array. That was not an option with ZFS until about 9 months ago. It’s a relatively “dangerous” setup, but the array isn’t storing anything critical, it’s fully backed up despite that, and despite all of that it’s been going for 6+ years and has survived at least 3 drive failures.
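
The plugin does the bookkeeping for me, but the core idea is dead simple. A minimal sketch, with paths and manifest location made up:

```python
# minimal sketch of the weekly checksum job (the real thing is a nas
# plugin, this is just the idea). paths here are made up
import hashlib
import json
from pathlib import Path

ARRAY = Path("/mnt/array")                  # array mount point
MANIFEST = Path("/var/lib/checksums.json")  # where known hashes live

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
new = {}
for p in ARRAY.rglob("*"):
    if p.is_file():
        new[str(p)] = sha256(p)
        # a changed hash on a file that shouldn't have changed means:
        # warn, restore from backup, then go hunting (smart, cables, ram)
        if str(p) in old and old[str(p)] != new[str(p)]:
            print(f"CHECKSUM MISMATCH: {p}")

MANIFEST.write_text(json.dumps(new))
```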

That said, my approach is inferior to btrfs and ZFS, because their block checksums plus redundancy let them detect and repair corruption automatically rather than me manually restoring from backup. One day I will likely rebuild my array with ZFS, especially now that raidz expansion is complete; I was basically waiting for that.

As always double check everything I say. It is very possible someone will reply and tell me I’m stupid and wrong for several reasons. People can be very passionate about filesystems

[–] ragebutt@lemmy.dbzer0.com 2 points 1 month ago (5 children)

Yeah I have a 15 drive array.

You can raid 1, which is basically just keeping a constant copy of the drive. A lot of people don’t do this because they want to maximize storage space, but if you only have a 2 drive array it’s probably your safest option.

It’s only when you get to 3 drives (2 data + 1 parity) that you can start to maximize storage space. You’re still sacrificing the space of an entire drive, but now you basically double the usable storage, and it’s more resilient overall because the data is spread out over multiple drives. It costs more, though, because obviously you need more drives. The arithmetic below makes this concrete.
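
(Toy numbers; equal-size drives assumed, filesystem overhead ignored.)

```python
# back-of-envelope usable capacity for simple redundancy schemes
def usable_tb(drives: int, size_tb: float, parity: int) -> float:
    # whatever scheme you pick, you roughly give up `parity` drives'
    # worth of space and keep the rest
    return (drives - parity) * size_tb

print(usable_tb(2, 10, 1))   # raid1 pair of 10TB drives -> 10 TB usable
print(usable_tb(3, 10, 1))   # 2 data + 1 parity -> 20 TB usable
print(usable_tb(15, 10, 2))  # 15 drives, 2 parity -> 130 TB usable
```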

Keep in mind none of these are backup solutions though. It’s true that when a drive dies in a raid array you can rebuild the data from the other drives, but it is also true that the rebuild is extremely stressful and can kill the array. E.g. in raid 1 a single drive dies, and while you rebuild onto the replacement, the second drive holding the copy of your data starts developing sector corruption; or in the parity setup one of the 3+ drives dies and the parity drive fails during the rebuild for similar reasons. These drives are normally only accessed occasionally, and a rebuild basically seeks to every sector on the drive if you have a lot of data, putting the drives under heavy read load for a very long time (like days), especially with very large modern drives (18, 20, 24TB).

So either be okay with your data going “poof” or back up your data as well. When I got started I was okay with certain things going “poof”, like pirated media, and would back up essential documents to cloud providers. That was really the only feasible option because my array is huge (about 200TB, with about 100TB used). Now I have tape backup, so I back everything up locally, although I still back up critical documents to Backblaze.

Depends on your needs. I am very strict about not being integrated with Google, Apple, Dropbox, etc., and my media collection is not simply stuff I can re-torrent; it’s a lot of custom media I’ve put together as the “best” version to my taste. But to set something like this up either takes a hefty investment or, if you’re like me, years of trawling ewaste/recycling centers and decommission auctions (and it’s still pricey then, but at least my data is on my server and not Google’s).

[–] ragebutt@lemmy.dbzer0.com 2 points 1 month ago (7 children)

That’s a pretty good question. It’s never come up though; every drive in the NAS was purchased and thrown in there. Although now that I think about it, I don’t think I’ve ever bought a brand new drive for the NAS. I only buy refurbs from places that decommission server drives, so I guess my “years” are inflated a bit, by at least 2-3. Maybe I should adjust that number down! Although it’s been fine for years tbf

[–] ragebutt@lemmy.dbzer0.com 6 points 1 month ago (9 children)

Hard drives can last a long, long time. I have test equipment with hard drives from the 90s that still run fine. That said, when hard drives fail, they fail quickly.

I run a 15 drive NAS. You’ll often see a few SMART errors one day, then total drive failure the next. Sometimes a drive fails completely without any SMART warning, especially if it’s that old. I try to retire drives from my NAS before they fail for that reason (at a 7 year service life, which is pretty long, but my NAS is just a home server thing). Even a dumb daily SMART poll helps catch the early warnings, like the sketch below.
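
(A minimal sketch assuming smartmontools is installed; device names are made up, and it needs root to talk to the drives.)

```python
# dumb daily smart health poll; cron this and pipe output to email
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]  # your drives here

for dev in DRIVES:
    result = subprocess.run(
        ["smartctl", "-H", dev],  # -H = overall health self-assessment
        capture_output=True, text=True,
    )
    if "PASSED" not in result.stdout:
        print(f"WARNING: {dev} is not reporting healthy\n{result.stdout}")
```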

[–] ragebutt@lemmy.dbzer0.com 55 points 1 month ago (12 children)

If Republicans can find a spiritual successor to Trump who is charismatic, captures people the way he does, and is actually physically attractive, we are done for. I mean, we may be done for already, but then we definitely are

[–] ragebutt@lemmy.dbzer0.com 5 points 1 month ago

Yeah, there are plenty of apps that can rip from Tidal, Apple Music, etc. NoteBurner, deemix, deezloader, Musify, and NoteCable are all ones I tried; they successfully ripped audio from streams to FLAC, but spectrals showed the FLAC was transcoded from a lossy source.

Granted this is basically inaudible and super nitpicky; honestly, show me the person who can truly hear the difference between a modern 320kbps MP3 and a 16-bit FLAC in a double blind test. But if you’re using these rippers to upload to a private tracker, especially for a popular release, I guarantee someone will check

That said, streamrip can get 16-bit Deezer, 24-bit Tidal MQA (which isn’t actually lossless), and 24/192 Qobuz, but you need a premium account and things break from time to time

https://github.com/nathom/streamrip

Apple Music remains a very closely guarded secret, although I recently saw this: https://github.com/zhaarey/apple-music-downloader . I’d have to create a burner account and a VM to play with it though bc it’s pretty sketch

[–] ragebutt@lemmy.dbzer0.com 5 points 1 month ago

The only thing I would add to this thread is that Usenet can occasionally be handy if you’re looking for music that’s fairly mainstream. If you’re looking for some weird 7” that was self-released with 50 copies, that’s obviously not gonna work

[–] ragebutt@lemmy.dbzer0.com 6 points 1 month ago (5 children)

Most of the publicly available ones that claim to rip streaming services to lossless fail spectral checks. They rip high quality MP3s which they then transcode to FLAC, but if you were to upload that somewhere like RED you’d get shit for it. Literally every one I’ve found has failed the spectral check thread on RED. The check is easy to run yourself; see the sketch below.
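
(A minimal sketch assuming the sox CLI is installed: render a spectrogram and eyeball it. A FLAC transcoded from 320kbps MP3 usually shows a hard shelf around 20kHz; true lossless has content up to ~22kHz.)

```python
# manual spectral check: generate a spectrogram png with sox and
# look for a hard frequency cutoff that betrays a lossy source
import subprocess
import sys

src = sys.argv[1]  # path to the flac you want to inspect
subprocess.run(
    ["sox", src, "-n", "spectrogram", "-o", "spectral.png"],
    check=True,
)
print("open spectral.png and look for a hard frequency cutoff")
```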

This MAY not apply to Spotify, since they don’t stream lossless to begin with

The people who can actually rip fully lossless files from Deezer, Apple Music, Qobuz, Tidal, etc. guard that info like crazy. The second a method goes public, you better believe all those companies patch it out. Plus it probably doesn’t hurt that being the one with the keys to the method gets you basically infinite ratio

[–] ragebutt@lemmy.dbzer0.com 8 points 1 month ago

oh duh

https://github.com/wasi-master/13ft/blob/main/docker-compose.yaml - this is the 12ft.io replacement I use. There are a few clones, but this is the one I like; it’s real barebones and adds very little overhead

https://komga.org/ - komga, the library server

https://github.com/Snd-R/komf - komf - not strictly necessary, but it fetches metadata for your komga library from sites like MangaUpdates; can be a bit of a pain to configure

https://github.com/Snd-R/komf-userscript - a tampermonkey script that makes komf MUCH easier to use

https://github.com/dazedcat19/FMD2 - an app that rips manga from most of the “free manga” indexer sites like mangadex, bato, etc. Docker and kubernetes version at https://github.com/ElryGH/docker-FMD2

You can read directly via the komga web UI, but frankly it kind of sucks for that; I prefer using an app. Tachiyomi was the gold standard, but companies threatened it and development stopped. There are several forks now that are all good in various ways. I prefer Mihon, https://mihon.app/, but there are alternatives with different feature sets

[–] ragebutt@lemmy.dbzer0.com 15 points 1 month ago (3 children)

A clone of 12ft.io, but the old version from before they got into a beef with the New York Times and kneecapped it. It doesn’t work on every single paywalled article, but it works on the overwhelming majority (including New York Times articles)

And it doesn’t really count because I knew I’d use it, but komga+komf+fmd2. I list it because I didn’t realize I’d use this stack so much. I can now read on my phone, my laptop, my ereader, etc.; tachiyomi/mihon works, reading progress is synced, and I never have to visit one of those garbage manga aggregation sites ever again

[–] ragebutt@lemmy.dbzer0.com 39 points 1 month ago

This isn’t wrong, per se, but it’s an oversimplification of a complicated relationship

Cortisol can influence how sensitive the body is to oxytocin, for one. Similarly, chronic stress can inhibit oxytocin release. Most people can recognize this effect: high stress scenarios blunt the effect of all the stress remedies you’ve suggested. That doesn’t mean you shouldn’t try them, of course

The timing and context of cortisol release play an important role in whether it supports or hinders oxytocin’s effects. Short-term stress responses can be adaptive, while long-term chronic stress can harm the body’s oxytocin system. As a result, cortisol isn’t inherently “bad”. (And that’s aside from its role in metabolism, insulin response, circadian rhythms, etc.)

[–] ragebutt@lemmy.dbzer0.com 11 points 1 month ago

Funny enough, OxyContin actually causes a temporary increase in cortisol. Opioids trigger a stress response, the body reacts by stimulating the HPA axis, and cortisol is released. Long term use, though, leads to dysregulation and eventually blunted or irregular cortisol levels; withdrawal leads to spikes, and both withdrawal and chronic use make it harder to manage stress as a result

The more you know
