Hi all! I'm running SnapRAID in Docker and wrote an integration test, just to make sure I've got everything right before unleashing it on my precious data. The test mounts a few volumes, but SnapRAID won't touch them, complaining that they reside on the same disk. I'm working around that with loop devices, which are, however, a mess to clean up when tests fail. What's the best way to implement these tests? Is there a way to disable the same-disk check?
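One way to tame the cleanup problem is to register every teardown step the moment its resource is created, so teardown runs even when setup or the test itself blows up halfway. A minimal sketch with `contextlib.ExitStack` — the `run_disk_test` helper, the image names, and the commented `losetup`/`mount` placeholders are all made up for illustration (the real attach/mount steps need root):

```python
import contextlib
import os
import shutil
import tempfile


def run_disk_test(test_fn):
    """Create loop-backed test 'disks', run test_fn, always clean up.

    ExitStack unwinds its callbacks in LIFO order even when setup or
    the test raises, so half-created fixtures never leak.
    """
    with contextlib.ExitStack() as stack:
        work = tempfile.mkdtemp()
        stack.callback(shutil.rmtree, work, ignore_errors=True)
        for name in ("data1", "data2", "parity"):
            img = os.path.join(work, name + ".img")
            with open(img, "wb") as f:
                f.truncate(100 * 1024 * 1024)  # sparse 100 MiB backing file
            # The real fixture would attach and mount each image here
            # (root required), registering the matching teardown
            # immediately after each step, e.g.:
            #   dev = subprocess.check_output(
            #       ["losetup", "--find", "--show", img], text=True).strip()
            #   stack.callback(subprocess.run, ["losetup", "-d", dev])
            #   ... then mkfs, mount, and a umount callback ...
        test_fn(work)
```

Because each teardown is registered right after its setup step succeeds, a failure at any point still detaches whatever was actually created, in reverse order (umount before `losetup -d`).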
`snapraid touch` fails with an error: "Error timing file path/to/file Operation not permitted". I'm running snapraid as a non-privileged user who does, however, have r/w access to the file.
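A likely cause, as far as I understand it: POSIX only lets a process set a file's timestamps to *explicit* values if it owns the file (or has `CAP_FOWNER`); write access merely allows setting them to the current time, and `snapraid touch` writes explicit sub-second timestamps. A small demonstration of the ownership rule (paths are illustrative):

```python
import os
import pathlib
import tempfile

# utimensat(2): setting timestamps to arbitrary values requires file
# ownership or CAP_FOWNER; write permission alone only permits
# setting them to "now" (UTIME_NOW).
path = pathlib.Path(tempfile.mkdtemp()) / "demo.txt"
path.write_text("data")

# As the file's owner, explicit timestamps are accepted:
os.utime(path, (0, 0))  # atime = mtime = the epoch
assert path.stat().st_mtime == 0

# A *different* user with r/w access would get PermissionError
# (EPERM) from the very same call -- which matches the error
# `snapraid touch` reports.
```

So the fix would be to run `snapraid touch` as the user who owns the files, not just one who can write to them.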
Thank you, I overlooked that.

On 01/07/2022 20:57, UhClem wrote:
> [If I understand your proposed layout correctly ...] Yes, your normal (non-SnapRAID) data operations will be faster, due to RAID5 striping. BUT, your SnapRAID operations (sync, scrub, fix, etc.) will be painfully slow, since you will be accessing those partitions concurrently, requiring extremely excessive disk seeks.

Please advise: RAID5 + snapraid https://sourceforge.net/p/snapraid/discussion/1677233/thread/6a70bf60f9/?limit=25#7263...
A correction: the RAID5 array is split into partitions to meet the size cap imposed by the parity drive.

On Fri, 1 Jul 2022 at 11:09, George Georgovassilis georgeuoa@users.sourceforge.net wrote:
> What do y'all think about this setup: RAID5 (3 hard drives for data) + snapraid (1 hard drive for parity)?
> The reasoning:
> - RAID5 provides larger throughput than a single drive
> - tolerates failure of one data drive and/or one parity drive without compromising availability
> Comparing to a 3 data drives + 1 parity drive setup:
> - higher...
What do y'all think about this setup: RAID5 (3 hard drives for data) + snapraid (1 hard drive for parity)?

The reasoning:
- RAID5 provides larger throughput than a single drive
- tolerates failure of one data drive and/or one parity drive without compromising availability

Comparing to a 3 data drives + 1 parity drive setup:
- higher throughput
- about the same data safety (I value that with SnapRAID you get to keep partial data even when a majority of disks break)
- less net capacity
- about the same availability...
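For concreteness, the proposed layout (with the correction that the RAID5 array is split into partitions, since SnapRAID requires the parity file to be at least as large as the largest data disk) might look like this in `snapraid.conf` — all paths and disk names below are hypothetical, not taken from the thread:

```
# Hypothetical snapraid.conf: one RAID5 array split into three
# partitions, each registered as its own SnapRAID data disk so that
# none exceeds the capacity of the single parity drive.
parity /mnt/parity/snapraid.parity

content /var/snapraid/snapraid.content
content /mnt/raid5p1/snapraid.content
content /mnt/raid5p2/snapraid.content

data d1 /mnt/raid5p1/
data d2 /mnt/raid5p2/
data d3 /mnt/raid5p3/
```

Note that this is exactly the layout UhClem warns about above: sync/scrub/fix will hit all three partitions of the same array concurrently, causing heavy seeking.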
I can't say - Swing is not my thing. You don't really have a guarantee that the try/catch...
I can verify this on Ubuntu 12.04 with OpenJDK 1.7.55. I debugged the issue a little...