Channel: Raspberry Pi Forums

Networking and servers • Re: Move Synology NAS Drives to Pi 5

Always partition mdadm disks, even if it's just one partition spanning the entire disk. Same with LVM. i.e. never say "/dev/sda"; instead partition and say "/dev/sda1". I forget the technical reasons except for some vague recollection of encountering this back in industry on badly configured Red Hat boxes. Either some Linux tools got confused over the "signature" and refused to run and someone had used the "--force" option, and/or they'd not written the MBR onto all disks. In all cases, by the time we got called in the disks were not recoverable. When you're paying IBM for 2-hour attendance on kit, it doesn't matter if they turn up on time if all they do is flop about like a horker in Skyrim for a couple of weeks. If you don't need hardware raid, don't do it.
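To make the "partition first, then mirror the partitions" point concrete, a minimal sketch might look like this. Device names (/dev/sda, /dev/sdb) and the array name /dev/md0 are placeholders for illustration; these commands destroy whatever is on the disks, so treat them as a sketch, not a recipe:

```shell
# Put a single partition spanning each whole disk (WIPES the disks!)
sudo parted -s /dev/sda mklabel gpt
sudo parted -s /dev/sda mkpart primary 0% 100%
sudo parted -s /dev/sdb mklabel gpt
sudo parted -s /dev/sdb mkpart primary 0% 100%

# Build the mirror from the partitions, never from the raw disks
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
```

The partition table is what gives other tools an unambiguous signature to read, which is exactly the confusion the raw-disk approach invites.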

If software raid (mdadm) has enough grunt to deal with network demands, use software raid. You, at least, stand a chance of being able to recover it. It's not enough to just install mdadm and leave it though. It needs to be tweaked (set the internal write-intent bitmap) or by default you'll find it periodically rebuilding itself, which is a solid-state killer. You might get lucky - maybe 'smartd' is able to extract something useful off a USB-attached device, but assume not. One JMicron device is not another, even in the same delivery.
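For the write-intent bitmap tweak mentioned above, a sketch (assuming an existing array at /dev/md0) would be:

```shell
# Check whether the array already has a bitmap
sudo mdadm --detail /dev/md0 | grep -i bitmap

# Add an internal write-intent bitmap: after an unclean shutdown only
# the regions marked dirty get resynced, not the whole array
sudo mdadm --grow /dev/md0 --bitmap=internal
```

The bitmap's main documented benefit is avoiding full resyncs after crashes or re-added members, which is where most of the needless rebuild wear comes from.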

The trouble with mdadm is you have to know how to recover it. You'll not know that until you've deliberately engineered weird failure scenarios. The advent of solid state makes weirdness more likely. For want of a detailed explanation, the summary is: if you must use raid1, use mdadm but partition both disks into two identically sized partitions. Run both /dev/sda1 and /dev/sda2 as raid1 for a few months with /dev/sdb1 and /dev/sdb2 as hot spares. Then force fail one of the /dev/sda partitions so that its /dev/sdb counterpart takes over. That way, when /dev/sda fails (having been hammered) you'll have enough time to notice and replace /dev/sda.
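The "force fail" drill described above can be sketched like this (device and array names are again placeholders; run this only on an array you can afford to break):

```shell
# Deliberately mark one member as failed so the hot spare takes over
sudo mdadm /dev/md0 --fail /dev/sda1

# Watch the spare rebuild into the array
cat /proc/mdstat

# Once rebuilt onto the spare, remove the "failed" member entirely
sudo mdadm /dev/md0 --remove /dev/sda1
```

Practising this while nothing is actually wrong is how you learn what the recovery looks like before you need it for real.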

Easier solution is to run your NAS off one disk and periodically rsync it to the other.
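The one-disk-plus-rsync setup might look like the following, assuming the live disk is mounted at /mnt/nas and the standby at /mnt/backup (both paths are assumptions for illustration):

```shell
# Mirror the live disk onto the standby, preserving permissions,
# hard links, ACLs and extended attributes; --delete keeps the copy
# an exact mirror by removing files deleted from the source
sudo rsync -aHAX --delete /mnt/nas/ /mnt/backup/
```

Note the trailing slashes: they tell rsync to copy the *contents* of /mnt/nas into /mnt/backup rather than creating a nested /mnt/backup/nas directory. A cron entry can run this nightly.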

If you're thinking "hang on, if everything is on /dev/sda for six months, what if it fails?" Exactly. Raid1 isn't a backup. You'll have needed to backup/rsync from the outset. I haven't used OMV, which is why I can't advise on it. It may have its own solutions.

Statistics: Posted by swampdog — Fri Dec 13, 2024 1:18 am


