My WD NAS was set up as RAID 1 and its controller circuit board died. After verifying that the drives were okay and that the fault was in the internal controller, I saved the drives and chucked the housing and the bad card. A year later, I am trying to recover the data. The drives are detected and each has 4 partitions, but none of the partitions is recognized by the system as FAT32 or NTFS, and of course there is no drive letter assigned to them. I am currently running a raw data recovery on one of the drives, but that will take 200 hours and will likely return poor results.
Here is the disk layout:
Is there any way to recover the data?
Thanks for any help.
The NAS probably has (had) an embedded Linux server, in which case those 4 partitions would contain Linux file systems, ext2 or ext3 or ext4.
Try Linux Reader for Windows:
fzabkar, you were right about those being Linux partitions. I downloaded Linux Reader, and although it seems to recognize the Linux partitions, it cannot open them. It doesn’t give a reason for failing to open them; it just reports that it can’t. Here is what the partition of interest looks like:
Any ideas on how to open this volume?
I can’t see your latest image until it is approved, but you could try booting from an Ubuntu Live CD.
As for why Linux Reader can’t mount your file systems, I would have thought that it should have been able to mount at least one of the partitions, since it would seem highly unlikely that all could be damaged.
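One quick way to check this from the Ubuntu Live CD is to ask the kernel what each partition actually contains. This is a hedged sketch: `/dev/sdc1`–`/dev/sdc4` are example device names taken from your output, so substitute whatever your system actually shows.

```shell
# Identify each partition's contents. If blkid reports
# TYPE="linux_raid_member", the partition is an md RAID component,
# not a plain ext2/3/4 file system -- which would explain why
# Linux Reader cannot mount it directly.
sudo blkid /dev/sdc1 /dev/sdc2 /dev/sdc3 /dev/sdc4

# "file -s" reads the partition's signature directly and will
# likewise report a Linux raid superblock if one is present.
sudo file -s /dev/sdc4
```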
One other alternative might be to reconstruct the RAID as a software RAID using mdadm (a Linux tool).
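A minimal sketch of that approach, assuming both drives are connected and their data partitions appear as, say, `/dev/sda4` and `/dev/sdc4` (hypothetical names — substitute your actual devices):

```shell
# Examine each candidate partition for an md superblock.
sudo mdadm --examine /dev/sda4 /dev/sdc4

# Assemble the mirror read-only from its two members, so nothing
# is written to either drive during recovery.
sudo mdadm --assemble --readonly /dev/md4 /dev/sda4 /dev/sdc4

# Mount the assembled array read-only and copy the data off.
sudo mkdir -p /mnt/nas
sudo mount -o ro /dev/md4 /mnt/nas
```

Avoid `mdadm --create` here: assembling reads the existing superblocks, while creating would write new ones over your data.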
Okay, it seems like progress is being made. I am running Ubuntu from the CD; I installed mdadm and ran it. Here is what I get:
root@ubuntu:/home/ubuntu# mdadm --query /dev/sdc4
/dev/sdc4: is not an md array
/dev/sdc4: device 1 in 2 device inactive raid1 array. Use mdadm --examine for more detail.
root@ubuntu:/home/ubuntu# mdadm --examine /dev/sdc4
Magic : a92b4efc
Version : 0.90.00
UUID : 3ecf2439:68631e4b:6322b6b8:f9c53d56
Creation Time : Sun Feb 15 10:18:53 2009
Raid Level : raid1
Used Dev Size : 972703552 (927.64 GiB 996.05 GB)
Array Size : 972703552 (927.64 GiB 996.05 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 4
Update Time : Thu Dec 20 16:01:37 2012
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : 81daafc1 - correct
Events : 1564640
Number Major Minor RaidDevice State
this 1 8 20 1 active sync
0 0 8 4 0 active sync /dev/sda4
1 1 8 20 1 active sync
What is the next step?
Sorry, my knowledge of RAID wouldn’t fill the back of a postage stamp. I’m just relaying information I’ve seen in other threads and other forums.
That said, ISTM that, even though your RAID is a mirror, you may need to install both drives before mdadm will be able to see your data.
Here is the man page for mdadm:
I would avoid any command that relates to creating or building an array. Instead I would concentrate on the “examine” and “assemble” commands.
In the Examples section …
echo 'DEVICE /dev/hd[a-z] /dev/sd*[a-z]' > mdadm.conf
mdadm --examine --scan --config=mdadm.conf >> mdadm.conf
“This will find what arrays could be assembled from existing IDE and SCSI whole drives (not partitions) and store the information in the format of a config file. This file is very likely to contain unwanted detail, particularly the devices= entries. It should be reviewed and edited before being used as an actual config file.”
mdadm --assemble --scan
“This will assemble and start all arrays listed in the standard config file. This command will typically go in a system startup file.”
An alternative which you might like to try before installing the second drive is to explore the “--run” option.
“This will fully activate a partially assembled md array.”
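Putting that together, here is a hedged sketch for starting the mirror from just the one drive you have connected. The device and mount-point names are assumptions; `--run` tells mdadm to start the array even though its second member is missing.

```shell
# Assemble the RAID 1 array from its single available member and
# force it to start in degraded mode.
sudo mdadm --assemble --run /dev/md4 /dev/sdc4

# Verify that the array came up (it should show as active,
# running on 1 of 2 devices).
cat /proc/mdstat

# Mount read-only and copy your files off before anything else.
sudo mkdir -p /mnt/nas
sudo mount -o ro /dev/md4 /mnt/nas
```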