WD ShareSpace - RAID5 problem

Something happened to my ShareSpace 4x1TB device: all four hard drives are flashing amber. I had all four drives set up in RAID5 mode.

Here’s the status from the web interface:

ALERT: 11/28 20:12:09 [Volume Status] Data Volume ‘DataVolume’ doesn’t exist!

Disk Manager:

Disk   Volume      Size       Description          Status
HDD 4  Unassigned  931.51 GB  WDC WD10EACS-00D6B1  Failed
HDD 3  Unassigned  931.51 GB  WDC WD10EACS-00D6B1  Failed
HDD 2  Unassigned  931.51 GB  WDC WD10EACS-00D6B1  Failed
HDD 1  Unassigned  931.51 GB  WDC WD10EACS-00D6B1  Failed

Volumes & Raid Management

Volume      Type    Disk Usage  Size     Status
DataVolume  RAID 5  Unknown     Unknown  Failed

Firmware: 2.3.02 with MioNet 4.3.0.8

Now some output from SSH:

~ $ df -h
Filesystem Size Used Available Use% Mounted on
/dev/md0 2.1M 2.1M 0 100% /old
/dev/md0 197.4M 87.8M 99.4M 47% /
/dev/ram0 61.6M 60.0k 61.6M 0% /mnt/ram

~ $ fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 26 208844+ fd Linux raid autodetect
/dev/sda2 27 156 1044225 fd Linux raid autodetect
/dev/sda3 157 182 208845 fd Linux raid autodetect
/dev/sda4 183 121601 975298117+ fd Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 26 208844+ fd Linux raid autodetect
/dev/sdb2 27 156 1044225 fd Linux raid autodetect
/dev/sdb3 157 182 208845 fd Linux raid autodetect
/dev/sdb4 183 121601 975298117+ fd Linux raid autodetect

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 26 208844+ fd Linux raid autodetect
/dev/sdc2 27 156 1044225 fd Linux raid autodetect
/dev/sdc3 157 182 208845 fd Linux raid autodetect
/dev/sdc4 183 121601 975298117+ fd Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdd1 1 26 208844+ fd Linux raid autodetect
/dev/sdd2 27 156 1044225 fd Linux raid autodetect
/dev/sdd3 157 182 208845 fd Linux raid autodetect
/dev/sdd4 183 121601 975298117+ fd Linux raid autodetect
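
All four drives show the same partition layout, so I assume the large sdX4 partitions are the RAID5 data members (the smaller ones look like system partitions). As far as I know --examine only reads the superblocks, so checking them should be safe:

# print the md superblock from each suspected RAID5 member (read-only)
~ $ mdadm --examine /dev/sda4
~ $ mdadm --examine /dev/sdb4
~ $ mdadm --examine /dev/sdc4
~ $ mdadm --examine /dev/sdd4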

~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
1044160 blocks [4/4] [UUUU]

md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
208768 blocks [4/4] [UUUU]

unused devices: &lt;none&gt;
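
So only the two small system arrays (md0 and md1) are assembled; the RAID5 data array (md2 on the ShareSpace, if I remember right) is missing from mdstat entirely. If the superblocks look intact, I'm guessing the non-destructive route is a plain assemble in read-only mode, with no rebuild and definitely no --create:

# try to assemble the existing data array read-only (md2 is my assumption)
~ $ mdadm --assemble --readonly /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4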

~ $ mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Tue Feb 17 12:55:03 2009
Raid Level : raid1
Array Size : 208768 (203.91 MiB 213.78 MB)
Used Dev Size : 208768 (203.91 MiB 213.78 MB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Nov 28 20:19:50 2012
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

UUID : 23b731a1:f2a8f596:983a9a48:6138f290
Events : 0.2518326

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
~ $ mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Tue Feb 17 12:55:04 2009
Raid Level : raid1
Array Size : 1044160 (1019.86 MiB 1069.22 MB)
Used Dev Size : 1044160 (1019.86 MiB 1069.22 MB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Tue Nov 27 19:23:36 2012
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

UUID : e5bf8ddc:aa2cb22b:f6a2dc5c:25dc881f
Events : 0.513254

Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
2 8 34 2 active sync /dev/sdc2
3 8 50 3 active sync /dev/sdd2
~ $

It looks like only the two small RAID1 system arrays came back up and the RAID5 data array never assembled, but the data itself should still be on the drives. What's the best route to reassemble the RAID5 without losing any data? I also have another ShareSpace set up as 2 x RAID1; I could use it if needed.
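
If the array does assemble, my plan would be to mount the volume read-only and copy everything to the second ShareSpace before changing anything else. Something like this, where /mnt/recovery is just an example mount point:

# mount the assembled array read-only at a scratch mount point
~ $ mkdir -p /mnt/recovery
~ $ mount -o ro /dev/md2 /mnt/recovery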

Appreciate any help.


Hi, you can check the following links and see if they work for you. 

http://community.wdc.com/t5/WD-ShareSpace/HOWTO-Sharespace-RAID-5-Data-Recovery/m-p/371001#M3054

http://community.wdc.com/t5/WD-ShareSpace/WD-SHARESPACE-RAID-5-FAILURE/td-p/363255/highlight/true
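
I haven't gone through those threads step by step, but the general fallback when the box itself won't assemble the array is to attach all four drives to a Linux PC and recover there. A quick read-only check of what arrays the PC can see from the drives:

# scan all attached drives for md superblocks and print the arrays they describe
mdadm --examine --scan

Whatever you do, avoid mdadm --create unless a guide explicitly calls for it with the exact original parameters, because it rewrites the superblocks.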