Ok, I know this subject has been discussed before, including a nice HOW-TO by dudemanbubba ( http://community.wdc.com/t5/WD-ShareSpace/HOWTO-Sharespace-RAID-5-Data-Recovery/td-p/287736 ), but my situation is a little different. I don't have access to a desktop computer, only a laptop, so I can't insert a PCI multi-SATA card.
Anyway, my situation: my wife plugged in a kettle and powered it on. For some reason it tripped the whole house, switching everything off, including the NAS. When we discovered the kettle was the problem, we switched everything back on and I worryingly ran to look at the NAS, which showed 4 failed drives. I did the normal things: switched it off and turned it back on... same result. I accessed the web GUI, which displayed the drives as failed and gave me the option to remove or format them. At first I thought the drives were buggered, but after reading around I realised it was the RAID controller, not the drives, that had failed.
I accessed the NAS using a Gentoo LiveCD and SSH:
ssh root@<your NAS IP address>
password: welc0me
You will then be in a root shell on the NAS.
I successfully logged into the system, typed 'fdisk -l', and to my delight could see all the drives pop up with their relevant partitions. That supported the idea the drives were fine. For some reason, though, WD formats each drive with 4 partitions, with the last one holding all the data, and it's across these last partitions that the chosen RAID level (5 in my case) is applied. After talking with a few people, it seems WD uses the hard drives themselves for a very basic Linux system (hence the other 3 partitions), meaning the RAID configuration is software based. This makes things a little complicated.
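For anyone following along, the check looks roughly like this (device names are from my setup and will almost certainly differ on yours):

```shell
# List the partition tables on every disk the kernel can see
fdisk -l

# On a ShareSpace member disk you should see four partitions, e.g.:
#   /dev/sdb1-3  small partitions used by the embedded Linux system
#   /dev/sdb4    the large partition that is the actual RAID-5 member
```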
Next, I examined the drives using "cat /proc/mdstat" and "mdadm --examine --scan", revealing that the RAID metadata was good too (I don't have the outputs of these, sorry). I don't remember all the details, but the outputs showed the array members were clean, not degraded. It surprises me that I can access the system this way while the system maintains the drives have failed, but they obviously have not.
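The commands I ran were roughly these (again, device names will differ on your system):

```shell
# Show any md arrays the kernel has already detected or assembled
cat /proc/mdstat

# Scan all partitions for md superblocks and print a summary per array
mdadm --examine --scan

# Inspect one member partition in detail: note the RAID level, the
# chunk size, the device order and the array UUID -- you'll want
# these later when reassembling the array
mdadm --examine /dev/sdb4
```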
Anyway, I decided to be safe (and because I thought I didn't have the relevant hardware) and took it to an IT store. They didn't do data recovery per se but claimed some success with RAID recovery, and they were cheap. I also made sure they cloned the drives before their recovery attempt, which they said they did as standard anyway. After a week of them messing around, they stated they could not access the GUI interface nor see the partitions. This got me worried they'd mucked it all up. But, alas, they had not. When I got it back home, it was still in the same state as I sent it: I could still see the drives and their partitions and access the GUI interface. Why could they not? I had even reset the GUI interface to defaults for easy access for them. I got further in a few hours than they did in a week. Not taking things there again. Useless.
Now I have it home, I'm reattempting recovery. I bought an additional 2TB drive to go with the 2TB I already had, and have them both in individual USB caddies. I did think about using dd to clone the relevant partitions to the other drives, but have been persuaded to use ddrescue instead: plain dd aborts when it hits a read error (unless you pass conv=noerror), whereas ddrescue retries, skips bad sectors, and logs them to a mapfile so it can resume.
This is how I have set things up currently:
I have one of the 1TB drives from the NAS housed in one of the USB caddies, and a 2TB drive (with 2 partitions on it) in the other. I've initiated ddrescue to clone the 4th partition of the NAS drive to the 1st partition of the 2TB drive:
ddrescue -v -f /dev/sdb4 /dev/sdc1 rescue.map

-v = verbose (displays what's going on during the process)
-f = forces ddrescue to overwrite the destination
/dev/sdb4 = the 4th partition of the NAS drive (holding all the data you need). BE AWARE: THIS CAN CHANGE DEPENDING ON THE ORDER YOUR SYSTEM REGISTERS THE HARD DRIVES. In my case, I have an internal hard drive (/dev/sda) and two USB drives, one holding the NAS disk (/dev/sdb), the other being the destination drive (/dev/sdc).
/dev/sdc1 = 1st partition of the destination hard drive
rescue.map = the mapfile where ddrescue records progress and bad sectors, so an interrupted run can be resumed safely

(The chunk size of the array, 64k in my case, which I found by examining the NAS with 'mdadm --examine /dev/sdb4', isn't needed for cloning, but it will matter later when reassembling the RAID. 64k is pretty standard from what I understand.)
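For what it's worth, the ddrescue manual suggests a two-pass approach on drives you suspect may have bad sectors; something like this (the mapfile name is my own choice):

```shell
# Pass 1: copy everything readable quickly, skipping problem areas
# (-n disables the slow "scraping" phase on this pass)
ddrescue -v -f -n /dev/sdb4 /dev/sdc1 rescue-sdb4.map

# Pass 2: go back to the areas skipped in pass 1 and retry each bad
# sector up to 3 times (-r3), using direct disc access (-d)
ddrescue -v -f -d -r3 /dev/sdb4 /dev/sdc1 rescue-sdb4.map
```

Because both passes share the same mapfile, the second run only touches the regions the first one couldn't read.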
This is where I'm up to. I'm currently cloning the 4th partitions of the NAS drives onto 4 partitions across the two 2TB drives (making sure to maintain the order the drives had in the NAS housing). This ensures the original drives don't get damaged by any mistakes I may make.
I will then attempt to reassemble the RAID using mdadm.
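My rough plan for that step is something like the following; this is a sketch based on my reading, not something I've run yet, and the device names assume the two 2TB clone drives come up as /dev/sdc and /dev/sdd:

```shell
# Assemble the array read-only from the four cloned member partitions.
# mdadm identifies each member from its superblock, so it doesn't care
# which physical disk a partition lives on.
mdadm --assemble --readonly /dev/md0 /dev/sdc1 /dev/sdc2 /dev/sdd1 /dev/sdd2

# If the superblocks turn out to be damaged, the array geometry can be
# recreated instead (RISKY -- only ever do this on the clones, and the
# level, chunk size and device order must match the original exactly):
#   mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=4 \
#         --chunk=64 /dev/sdc1 /dev/sdc2 /dev/sdd1 /dev/sdd2

# Then mount the filesystem read-only and check the data is there
mkdir -p /mnt/recovered
mount -o ro /dev/md0 /mnt/recovered
```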
My only concern with this method is that I'm not sure it will work, i.e. whether I can reassemble the RAID from two 2TB drives with 2 partitions on each. I've searched the internet and asked around, but I haven't found an answer. Is this possible? I'm assuming it is at this point...
Does anyone have any suggestions?