cross-posted from: https://programming.dev/post/9319044

Hey,

I am planning to implement authenticated boot inspired by Pid Eins’ blog. I’ll be using pam_mount for /home/user, and I need to verify the integrity of all partitions.
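
Roughly the pam_mount setup I have in mind, as a sketch (the UUID is a placeholder, nothing is settled yet):

    <!-- /etc/security/pam_mount.conf.xml -->
    <!-- unlock the user's LUKS container at login and mount it as /home/user -->
    <volume user="user"
            fstype="crypt"
            path="/dev/disk/by-uuid/PLACEHOLDER-UUID"
            mountpoint="/home/user" />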

I have been using luks+ext4 till now. I am hesitant to switch to zfs/btrfs, afraid I might fuck up. A while back I accidentally purged ‘/’ trying out timeshift, which was my fault.

Should I use zfs/btrfs for /home/user? As for root, I’m considering luks+(zfs/btrfs) so that it can be restored to a blank state.
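
To make the blank-state idea concrete, this is the kind of scheme I’m picturing with btrfs subvolumes (an untested sketch; the @root/@root-blank names and device path are just examples):

    # right after install: keep a read-only snapshot of the pristine root
    btrfs subvolume snapshot -r /mnt/@root /mnt/@root-blank

    # to reset: from live media, unlock and mount the pool,
    # then swap the current root for a fresh copy of the blank one
    cryptsetup open /dev/nvme0n1p2 cryptroot
    mount /dev/mapper/cryptroot /mnt
    btrfs subvolume delete /mnt/@root
    btrfs subvolume snapshot /mnt/@root-blank /mnt/@root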

  • The Doctor@beehaw.org · 10 months ago

    That is not the case. In the context of btrfs, RAID-1 means “ensure that two copies of every data block are available in the running volume,” not “ensure that every bit of both of these drives is identical at all times.” For example, I have a btrfs volume in my server with six drives in it (14 TB each) set up as a RAID-1/1 (both data and metadata are mirrored). It doesn’t really matter which two drives of the six have copies of a given data block, only that two copies exist at all.
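
    For instance, a volume with that profile can be created along these lines (device names illustrative, not my actual layout):

        # data (-d) and metadata (-m) both raid1: two copies of every
        # block, placed on any two of the member drives
        mkfs.btrfs -d raid1 -m raid1 /dev/sd[a-f]

        # after mounting, this shows the RAID1 profile per chunk type
        btrfs filesystem df /mnt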

    Compare it to… three RAID-1 metadevices (mdadm), with LVM on top, and ext4 (let’s say) on top of that. When a file is created in the filesystem (ext4), LVM makes it irrelevant which mirrored pair the file lands on, and mdadm’s RAID-1 functionality ensures that there are always two identical copies of it (because each pair of drives is kept bit-for-bit identical).
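
    Built by hand, that stack looks roughly like this (a sketch with illustrative device names):

        # three RAID-1 pairs, each keeping two whole drives identical
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
        mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde /dev/sdf

        # LVM pools the three mirrors into one logical volume
        pvcreate /dev/md0 /dev/md1 /dev/md2
        vgcreate vg0 /dev/md0 /dev/md1 /dev/md2
        lvcreate -l 100%FREE -n data vg0

        # ext4 on top never knows which mirror holds a given file
        mkfs.ext4 /dev/vg0/data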