WDC WD10EADS-11M2B2 replacement?

Hi,

I have a WD ShareSpace 4TB operating in RAID 5 mode. One of the WDC WD10EADS-11M2B2 disks shows as faulty, so I need to replace it, as we are currently running in degraded mode.

I can't source a WDC WD10EADS-11M2B2.

Below are the WD 1TB drives I can source locally; are any of them a suitable replacement? I suspect it's the WD10EFRX, since that one is marketed for NAS use, but I'd like confirmation.

Thanks


WD Black 1TB, WD1003FZEX
WD 1TB Black, SATA III, 7200RPM, 64MB
$105.00

WD Blue 1TB, WD10EZEX
WD 1TB Blue, SATA III, 7200RPM, 64MB
$76.00

WD Red 1TB, WD10EFRX
WD 1TB Red, SATA III, IntelliPower, 64MB, NAS HDD for 1 to 8 Bay NAS
$96.00

WD Green 1TB, WD10EZRX
WD 1TB Green, SATA III, IntelliPower, 64MB
$75.00

WD RE 1TB, WD1003FBYZ
WD 1TB RE, SATA III, 7200RPM, 64MB, 24x7 Reliability
$139.00

WD VelociRaptor 1TB, WD1000DHTZ
WD 1TB VelociRaptor, SATA III, 10000RPM, 64MB
$269.00

WD Purple 1TB, WD10PURX
WD 1TB Purple, SATA III, IntelliPower, 64MB, Designed for 24x7 Surveillance Storage
$89.00

Welcome to the Community.

Maybe you should try contacting WD’s Technical Support about this. You can do so either by phone or email.

To Contact WD for Technical Support
http://support.wdc.com/contact/index.asp?lang=en

Support by Country
http://support.wdc.com/country/index.asp

I received a Caviar Blue today.

More info this evening.

Well…

The short answer: with the Caviar Blue WDC WD10EZEX, it worked for me.

The long answer:

I received a WD Caviar Blue 1TB (WDC WD10EZEX).

I installed it 10 minutes ago.

1/ Shut down the ShareSpace:

Feb  3 18:34:07 : System Shutdown - System will be shutdown.

2/ Unscrew the ShareSpace box.

3/ I CAREFULLY touched something metal first to discharge any static before handling the new HDD (I did it twice to be sure).

4/ Remove the old disk and its mounting bracket.

5/ Fit the bracket to the new disk and put it in place.

6/ Screw the ShareSpace back together.

7/ Switch it on:

Feb 3 18:38:34 syslogd started: BusyBox v1.1.1

less than 5 minutes!

Now my 1st test:

On the ShareSpace web interface:

HDD 4 DataVolume 931.51 GB WDC WD10EZEX-08M2NA0 Good

My 2nd test:

Much more interesting, over the SSH interface:

cat /proc/mdstat

shows that the arrays are resyncing (or that their resync is delayed).

It shows that the really big array (/dev/md2, which is mounted on /DataVolume) will need approximately 15-18 hours (my array was 57% full).

As it took me about 60 hours to perform my backup, I let it resync.
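
For anyone who has not seen it before, a resyncing array in /proc/mdstat looks roughly like this (illustrative numbers only, not my real output; device order and sizes will differ):

md2 : active raid5 sda4[4] sdb4[1] sdc4[2] sdd4[3]
      2925848640 blocks level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
      [==>..................]  recovery = 12.3% (120000000/975282880) finish=950.0min speed=15000K/sec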

mdadm --detail /dev/md2

doesn't show anything interesting, except that the spare (and resyncing) partition is /dev/sda4.
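
For reference, the interesting lines look roughly like this (again illustrative, not my exact output):

          State : clean, degraded, recovering
 Rebuild Status : 12% complete

    Number   Major   Minor   RaidDevice State
       4       8        4        0      spare rebuilding   /dev/sda4
       1       8       20        1      active sync        /dev/sdb4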

smartctl -a /dev/sda

shows the model of the disk:

Device: WDC WD10EZEX-08M2NA0 Version: 01.0
=== START OF INFORMATION SECTION ===
Device Model: WDC WD10EZEX-08M2NA0

—// snip //—

ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 100 253 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 100 253 021 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 1
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 100 253 000 Old_age Always - 0
9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 0
10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 1
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 0
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 1
194 Temperature_Celsius 0x0022 119 114 000 Old_age Always - 24
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0

OK, it is a brand-new HDD.

My 3rd and last (and really not useful) test:

fdisk -l /dev/sda

shows exactly the same partition layout as the other disks.
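
If you want more than an eyeball comparison, something like this should work (a sketch; it assumes sfdisk and diff are available in the ShareSpace's BusyBox build, otherwise just compare the fdisk -l listings by hand):

sfdisk -d /dev/sda > /tmp/sda.parts    # dump the partition table of the new disk
sfdisk -d /dev/sdb > /tmp/sdb.parts    # and of a known-good disk
diff /tmp/sda.parts /tmp/sdb.parts     # only the device names should differ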

Finally, my tweaked conf files (/etc/exports) were not regenerated, so my customizations survived.

You know what?

I’m happy.

UPDATE…

I've got an issue.

After the resync reaches 100%:

abnormal shutdown.

(It happened twice.)

I didn't see anything in the logs.

Investigations in progress, but I think I have to do some kind of reset…
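
If anyone hits the same thing, these are the obvious places to look over SSH (the log path is an assumption about where the ShareSpace's BusyBox syslogd writes; adjust as needed):

dmesg | tail -n 50              # kernel messages since boot
tail -n 50 /var/log/messages    # default BusyBox syslogd target, if it exists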

After 3 resyncs of the RAID array (15 hours each),

after 1 init of the array (another 15 hours) using the web interface,

and EVERY time the same abnormal shutdown issue (no response from WD, by the way; I know… their workload),

I finally decided to create the array myself:

0/ I stop the array

mdadm --stop /dev/md2

(I had to kill the pending formatting process first…)
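
For the record, I located it roughly like this (a sketch only; the exact process name on your firmware may differ from mke2fs):

ps | grep mke2fs    # find the PID of the pending format
kill <PID>          # replace <PID> with the number shown above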

1/ I wipe all the partitions of the future array:

mdadm --zero-superblock /dev/sda4

mdadm --zero-superblock /dev/sdb4

mdadm --zero-superblock /dev/sdc4

mdadm --zero-superblock /dev/sdd4
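
You can sanity-check each partition before and after wiping (a quick sketch, one partition shown):

mdadm --examine /dev/sda4    # should say "No md superblock detected" once zeroed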

2/ I create the array

mdadm --create --metadata 0.9 --verbose --assume-clean --level=5 --chunk=64 --raid-devices=4 /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
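
For anyone copying this, my understanding of the flags (worth verifying against your own setup): --metadata 0.9 matches the old-style superblock the ShareSpace firmware expects, --chunk=64 matches the original layout, and --assume-clean skips the initial resync because the existing data and parity are treated as already consistent. If in doubt, read the original values from a surviving member before creating anything:

mdadm --examine /dev/sdb4    # shows the superblock version, RAID level and chunk size of the old array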

3/ After checking that the array is created and in a clean state (cat /proc/mdstat),

I reboot the ShareSpace.

4/ Now I am waiting for the formatting process to finish:

cat /tmp/progress.mke2fs.DataVolume
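
To keep an eye on it without retyping the command (a plain shell loop, since I was not sure this BusyBox build ships watch):

while true; do cat /tmp/progress.mke2fs.DataVolume; sleep 60; done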

UPDATE:

The formatting process took 10-20 minutes (piece of cake).

I had to recreate all the shares (using the web interface), although all the users were still there.

Now I am restoring my backup: ETA 2.5 days.

Conclusions (at least for me):

1/ Changing the disk can be done, but as with any RAID 5 array, some care must be taken.

NEVER, EVER use ALL the available space of the disks (to be honest, I did and still do have this bad habit); one way to enforce a margin with mdadm is sketched after the example below.

An easy-to-understand example:

One array with 4 disks: 1TB, 1TB, 1TB, 2TB (yes, 2).

Of course the array will use 4x 1TB, giving a 3TB usable RAID 5 array.

For several months (if not years) it works.

One day the 2TB disk crashes and is replaced by a smaller one,

let's say a 1000GB disk… which is in fact 0.98TB…

No luck: too small, the array will never accept this disk.
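
If you ever create the array by hand as I did above, one way to leave that safety margin is mdadm's --size option, which caps how much space is used from each member; the value is in KiB per device, and the number below is only an illustration, so pick your own margin:

mdadm --create --metadata 0.9 --level=5 --chunk=64 --raid-devices=4 --size=960000000 /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4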

2/ If you have a backup, consider this seriously: maybe it is easier (and healthier) to recreate from scratch…

3/ During the process I lost the LVM architecture…

…I really don't care, as I use ALLLLLL (yes 5!) of the available space.

4/ During the process it seems I gained 10GB… (maybe from the LVM layer going away), from 2.70TB to 2.71TB.

And last but not least:

5/ IT CAN BE DONE.

If I have brought you some hope, I'll be happy with that.

If you want some advice or more details of this experience, I'll be glad to share.