I have this large tower case with a formerly kick-ass Intel motherboard and a 4-core i7. Splurge on some cheap, obsolete memory DIMMs, and 32 GB of perfectly functioning awesomeness spins back to life.
A perfect target system for my Linux environment. Stick in a small SSD and presto-chango - Ubuntu Studio springs to life. Ah, the familiar stirrings of a long-lost friend to play with - my fingers still remember the vi magic keystrokes and combinations... the full story will be here, someday.
But this is about the other inherited computer parts - the stack of now obsolete spinning magnetic media drives. There is plenty of room in the cavernous case for them all, plus more.
So, yes, I know - spinning magnetic media storage is soo last decade, and for sure not the place you want to put crucial, mission-critical data. Or precious family memories.
Live data is best handled by solid-state drives these days, and that is what I do. But still, there is this stack of perfectly functioning spinning magnetic media disks - 5 of them, at 4 TB of capacity per disk, 20 TB total. I do pine to use them for something...
I experiment. I consider. I discover that 2 are reporting SMART errors and failure codes, basically informing me that they're at the end of their tether - no more spare sectors, no low-level trickery to recover even reduced capacity. I suspect these two had a head crash - heads contacting the disc surface, and detritus flying around inside the sealed cavity. OK, that leaves 3 with perfectly robust health reports. But 3... that is not a good number for simple RAID configurations.
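For the record, getting a drive's verdict is a one-liner with the smartmontools package (the device name here is a placeholder - substitute your own):

    # Overall pass/fail verdict
    sudo smartctl -H /dev/sdb

    # Full attribute table - on a dying drive, watch Reallocated_Sector_Ct
    # and Current_Pending_Sector creep upward
    sudo smartctl -A /dev/sdb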
I can mirror 2, but that gives only 4 TB of redundant storage and leaves one disk unused. RAID 5 does technically run on as few as 3 disks, parity and all, but classic RAID 10 - the striping-over-mirrors arrangement I was lusting after to accelerate some read operations - wants an even number of disks, 4 at minimum.
So I concocted a scheme - what if I carve the 3 disks into 6 partitions of 2 TB each? I'd have a total of 12 TB in 6 partitions, I could mirror up pairs across separate disks for redundancy, and with judicious arrangement I'd end up with a single 6 TB volume mapped onto striped, mirrored physical partitions.
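Carving up the disks is quick work with parted; a minimal sketch, assuming the three survivors show up as /dev/sda through /dev/sdc (adjust to taste - and note this destroys whatever is on them):

    # Split each 4 TB disk into an outer and an inner ~2 TB partition
    for d in /dev/sda /dev/sdb /dev/sdc; do
        sudo parted -s "$d" mklabel gpt
        sudo parted -s "$d" mkpart outer 0% 50%
        sudo parted -s "$d" mkpart inner 50% 100%
    done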
The google elves were somewhat helpful - nothing quite like this had ever been reported that they could find. But tantalizing hints drifted up in the search nets. Creating Logical Volumes seemed the way to go - that would let me set up the 6 logical partitions on the 3 physical disks. But the Logical Volume Manager couldn't seem to stitch these together into a single array with mirroring between the exact partitions I desired. For that, I needed the Device Mapper (it's the Device Mapper, not the Disk Manager) and its DM utilities. These allowed me to specify the stripe sets and mirror sets and bingo - my single volume springs into being - 6 TB of striped, mirrored, fault-tolerant spinning magnetic media goodness.
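The shape of the incantation, as a hedged sketch - the device names follow the partitioning above, the partition size and the chunk and region sizes are placeholder values, and the pairing puts each mirror's legs on two different disks, one outer partition matched with one inner:

    # One ~2 TB partition, in 512-byte sectors (placeholder - must be
    # a multiple of the 128-sector stripe chunk used below)
    SZ=3906248704

    # Three mirror pairs, each leg on a different disk,
    # pairing an outer partition with an inner one
    sudo dmsetup create m0 --table "0 $SZ mirror core 1 131072 2 /dev/sda1 0 /dev/sdb2 0"
    sudo dmsetup create m1 --table "0 $SZ mirror core 1 131072 2 /dev/sdb1 0 /dev/sdc2 0"
    sudo dmsetup create m2 --table "0 $SZ mirror core 1 131072 2 /dev/sdc1 0 /dev/sda2 0"

    # A single 6 TB stripe across the three mirrors:
    # 3 stripes, 128-sector (64 KiB) chunks
    sudo dmsetup create bigvol --table "0 $((SZ * 3)) striped 3 128 /dev/mapper/m0 0 /dev/mapper/m1 0 /dev/mapper/m2 0"

dmsetup status m0 reports the resync progress; once all three mirrors settle, /dev/mapper/bigvol takes a filesystem like any other block device.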
Of course, it is only theoretically fault-tolerant - if a disk were to fail, I theoretically know where the mirrored partitions are, I can theoretically bring them up with their partners absent, spin up the whole volume with two of the partitions missing, and have all the data still be there. But I haven't written the script that does that. I suppose there are only 3 possible failure cases - 4 states all told: the normal all-healthy state, which is currently running, then disk 1 down, disk 2 down, or disk 3 down. But then what - replace the failed disk, and the kernel rebuilds onto the newly inserted mirror partition? I suspect there is a bit of wizardry to uncover here. But at least I know it is theoretically possible. That is enough for now. If a disk ever fails, I will puzzle it out.
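The theory, sketched for one of the three cases - say the second disk dies. The two mirrors that had a leg on it come back as single-leg linear maps over the survivors, and the stripe table doesn't change at all (same placeholder names and sizes as above, and untested, to be clear):

    # sdb has failed: m0 loses sdb2, m1 loses sdb1
    sudo dmsetup create m0 --table "0 $SZ linear /dev/sda1 0"
    sudo dmsetup create m1 --table "0 $SZ linear /dev/sdc2 0"
    # (a real script would add nosync to this still-healthy pair's
    # log args to skip a pointless full resync)
    sudo dmsetup create m2 --table "0 $SZ mirror core 1 131072 2 /dev/sdc1 0 /dev/sda2 0"

    # The stripe only ever sees the three m* devices, so it is unchanged
    sudo dmsetup create bigvol --table "0 $((SZ * 3)) striped 3 128 /dev/mapper/m0 0 /dev/mapper/m1 0 /dev/mapper/m2 0"

And the rebuild wizardry, I suspect, is a dmsetup suspend / reload / resume cycle that swaps the two-leg mirror tables back in once the replacement disk is partitioned - the mirror target should then resync onto the new leg.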
Or, if I suddenly find myself with a bunch of time to try the failure cases and do a dry-run recovery. Or happen upon a set of 3 disks to try this out on... oh, I just remembered I have an even older rack server with 6 500 GB disks I can play with... I will keep you posted...
I do want the system to STOP when a disk fails - silently operating with reduced redundancy is not smart, and this is not a continuously running system with a rebuild-while-on-line requirement.
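A crude cron-able watchdog would do - something like the sketch below. The ' AA ' health string for a healthy two-leg mirror is my reading of the dmsetup status output, so verify it against a live system before trusting it:

    #!/bin/bash
    # Hypothetical watchdog: stop serving data if any mirror pair degrades
    for m in m0 m1 m2; do
        if ! sudo dmsetup status "$m" | grep -q ' AA '; then
            logger -p user.crit "mirror $m is degraded - taking the volume down"
            wall "mirror $m is degraded - check dmsetup status"
            # Stop, rather than run silently without redundancy
            sudo umount /dev/mapper/bigvol 2>/dev/null
        fi
    done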
I also have not done any performance testing yet - I don't know if I am getting any benefit from the striping/mirroring arrangement I have concocted. In theory, it ought to give me a bit of a read boost, as there are two places any given bit of data resides. I made sure the partitions were staggered by location on the disk - each mirror pair has one member on the outer half of one disk AND one on the inner half of another.
So the super-smart kernel drivers can figure out which physical head is closest to the desired sector, command only that head to move, and queue up transfers from the 3 separate disks in sequence, since every read longer than the stripe chunk size (gosh, what did I set that to, again?) pulls data from more than one of the 3 disks. For bunches of random reads, it ought to improve access and read times by up to a factor of 2, maybe even 3. It is a curious thing to be explored.
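When I get to it, the first pass will probably look something like this - hdparm for a raw sequential number, fio for the random-read case the arrangement is supposed to help (both are stock Ubuntu packages; the parameters are a starting guess, and --readonly keeps the data safe):

    # Raw buffered sequential read throughput of the assembled volume
    sudo hdparm -t /dev/mapper/bigvol

    # Random 64 KiB reads with some queue depth, read-only
    sudo fio --name=randread --filename=/dev/mapper/bigvol --readonly \
        --direct=1 --rw=randread --bs=64k --iodepth=16 --numjobs=4 \
        --ioengine=libaio --runtime=30 --time_based --group_reporting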