RAID5 problems with 4TB WD ShareSpace

Hi @all,

I have a really bad problem with my five-year-old WD ShareSpace. It is the 4TB version with the latest firmware installed. As far as I can tell, one drive failed and another drive was accidentally removed from the RAID5 array for a short time. Afterwards the web interface showed the state “failed” for every drive. So I removed all drives from the ShareSpace and connected them to a PC booted from an Ubuntu live USB stick, because I’m not very familiar with the small BusyBox Linux on the ShareSpace itself. The first step was to number the drives and clone all four of them with “dd”, so that I wouldn’t touch the original state in case the disks have to go to a data recovery service or I want another attempt at the restore myself. While dd worked fine on three drives, it stopped with an I/O error on the last one. I ran “smartctl -a” on that drive and put it aside, because its SMART state wasn’t good at all and I didn’t want to make things worse on a half-working disk. I continued my recovery attempts with the three remaining disks.
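For the record: on the three good drives plain dd was enough, but for a disk that throws I/O errors GNU ddrescue is usually the better choice, because it keeps going past read errors and records the bad areas in a map file. If I ever try to image the bad drive again it would look roughly like this (the device name and target paths are only placeholders):

ddrescue -n /dev/sdX /mnt/backup/sdX.img /mnt/backup/sdX.map    # fast first pass, skip the scraping phase
ddrescue -r3 /dev/sdX /mnt/backup/sdX.img /mnt/backup/sdX.map   # second pass, retry the bad areas up to 3 times
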
So far I have tried almost all the tips and tricks posted on the WD Community and elsewhere on the internet, without any success.
Every attempt to get the array running with “mdadm --assemble” failed with the message “mdadm: /dev/md2 assembled from 2 drives and 1 spare - not enough to start the array”. My last try was to re-create the RAID5 array with the following command:
mdadm --create -l5 -n4 -c64 --parity=left-symmetric --metadata=0.9 /dev/sda4 missing /dev/sdc4 /dev/sdd4
Then I set up the LVM layer again by hand, because no volume group was found automatically, and ran fsck.ext3 on /dev/lvmr/lvm0, but the result still wasn’t mountable.
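For reference, the LVM part usually comes down to scanning for the volume group and activating it, roughly like this (lvmr and lvm0 are the names my ShareSpace uses for the data volume, /mnt/data is just a placeholder, and at this stage I would only ever mount read-only as a test):

pvscan                                  # should show /dev/md2 (the assembled array) as a physical volume
vgscan                                  # look for the volume group
vgchange -ay lvmr                       # activate the volume group
mkdir -p /mnt/data
mount -o ro /dev/lvmr/lvm0 /mnt/data    # read-only mount, just to see whether the filesystem is usable
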

I’m now back at the beginning: I copied the backup images back onto the original disks, for all three remaining drives.
My current state is the following:
I have three working disks left. The disk in ShareSpace slot two (/dev/sdb) has failed, so I removed it. The other disks have a good S.M.A.R.T. state and work without problems (no I/O errors or the like). But I’m not sure in which order the RAID has to be re-created. If I put the three disks back into the ShareSpace, the two other RAID1 volumes work perfectly. At the end of this posting you will find the detailed output of the commands.
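To double-check the order, I pulled the slot (“this …”) and event-count lines out of the md superblocks of the three remaining members, roughly like this (the full outputs are at the end of this posting):

for d in /dev/sda4 /dev/sdc4 /dev/sdd4; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E '^this|Events'    # slot role and event counter of this member
done
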
Is there any chance to get my data back without paying a lot of money to a data recovery company such as KrollOnTrack?
I would be very happy if there is any possible solution!
Thank you very much in advance for any advice!

If I made any mistakes in English spelling or grammar, I’m really sorry, but English isn’t my first language - I’m German.

Before you ask: No, I don’t have any backup of this data and it is really important. I will never forget this scary time and I will do backups every week from now on.

Greetings,
andreas5232

HERE ARE THE OUTPUTS

root@ubuntu:~# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      417689      208844+  fd  Linux raid autodetect
/dev/sda2          417690     2506139     1044225   fd  Linux raid autodetect
/dev/sda3         2506140     2923829      208845   fd  Linux raid autodetect
/dev/sda4         2923830  1953520064   975298117+  fd  Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      417689      208844+  fd  Linux raid autodetect
/dev/sdc2          417690     2506139     1044225   fd  Linux raid autodetect
/dev/sdc3         2506140     2923829      208845   fd  Linux raid autodetect
/dev/sdc4         2923830  1953520064   975298117+  fd  Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      417689      208844+  fd  Linux raid autodetect
/dev/sdd2          417690     2506139     1044225   fd  Linux raid autodetect
/dev/sdd3         2506140     2923829      208845   fd  Linux raid autodetect
/dev/sdd4         2923830  1953520064   975298117+  fd  Linux raid autodetect

root@ubuntu:~# mdadm --examine /dev/sda4
/dev/sda4:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : f11bc9cc:bdfa99f7:4b4a2716:dcad9126
  Creation Time : Sun Oct 26 17:36:43 2008
     Raid Level : raid5
  Used Dev Size : 975097920 (929.93 GiB 998.50 GB)
     Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Sun Jan 13 16:08:26 2013
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 3
  Spare Devices : 1
       Checksum : 56de5670 - correct
         Events : 21416348

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        4        0      active sync   /dev/sda4

   0     0       8        4        0      active sync   /dev/sda4
   1     1       0        0        1      faulty removed
   2     2       8       36        2      active sync   /dev/sdc4
   3     3       0        0        3      faulty removed
   4     4       8       52        4      spare   /dev/sdd4

root@ubuntu:~# mdadm --examine /dev/sdc4
/dev/sdc4:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : f11bc9cc:bdfa99f7:4b4a2716:dcad9126
  Creation Time : Sun Oct 26 17:36:43 2008
     Raid Level : raid5
  Used Dev Size : 975097920 (929.93 GiB 998.50 GB)
     Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Sun Jan 13 16:08:26 2013
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 3
  Spare Devices : 1
       Checksum : 56de5694 - correct
         Events : 21416348

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       36        2      active sync   /dev/sdc4

   0     0       8        4        0      active sync   /dev/sda4
   1     1       0        0        1      faulty removed
   2     2       8       36        2      active sync   /dev/sdc4
   3     3       0        0        3      faulty removed
   4     4       8       52        4      spare   /dev/sdd4

root@ubuntu:~# mdadm --examine /dev/sdd4
/dev/sdd4:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : f11bc9cc:bdfa99f7:4b4a2716:dcad9126
  Creation Time : Sun Oct 26 17:36:43 2008
     Raid Level : raid5
  Used Dev Size : 975097920 (929.93 GiB 998.50 GB)
     Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Sun Jan 13 16:08:26 2013
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 3
  Spare Devices : 1
       Checksum : 56de56a2 - correct
         Events : 21416348

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       52        4      spare   /dev/sdd4

   0     0       8        4        0      active sync   /dev/sda4
   1     1       0        0        1      faulty removed
   2     2       8       36        2      active sync   /dev/sdc4
   3     3       0        0        3      faulty removed
   4     4       8       52        4      spare   /dev/sdd4

To my understanding, you need to replace the drive that went bad so the unit will rebuild the RAID 5 and restore all the information.

I already tried this, but no rebuild or anything else started; the RAID stayed as it was, and I was unable to find an LVM VG or even to mount the volume. Anyway, thank you for your help! I think I will send the disks to a professional data recovery company, because I simply don’t know how to solve this tricky RAID-LVM-filesystem puzzle.

Have you tried the R-Studio recovery (details in another thread)? It isn’t perfect, and some reports say it can’t cope with the file system, but it worked for me in a similar situation.