Read through the entire thread and you will realize you are not alone. I still can’t believe WD has this firmware online and continues to brick people’s units.
Your unit is probably in the same state as most who have had this issue. The drives are actually functioning fine, but the ssh, samba and http services are not starting, rendering the box useless. You can frustrate yourself for a long time trying to get it working again. I haven’t seen anybody post a success at getting the unit back to proper working condition without a complete rebuild: zeroing out the drives and letting the unit reset itself to factory condition. Unfortunately, that trashes your data.
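For reference, “zeroing out” just means wiping the drives on another machine so the unit re-initializes them on boot. A rough sketch only, assuming the drive shows up as /dev/sdX on the temporary computer (this destroys everything on that drive, so identify it carefully first):
$ sudo lsblk                                        # identify the ShareSpace drive before touching anything
$ sudo dd if=/dev/zero of=/dev/sdX bs=1M count=100  # wipe the partition table and the start of the disk
Wiping the first chunk is usually enough for the unit to treat the drive as blank; dropping count=100 zeroes the whole disk, which takes hours but is the surest bet.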
If you have data on the drives that you want to retrieve, you will need a temporary computer and a USB drive or something large enough to copy it all off to. You didn’t indicate which unit you have, or whether you have a RAID 1 or a RAID 5 configuration. I put together a HOWTO to rebuild a RAID 5; someone else did a HOWTO for rebuilding a RAID 1. Both can be found in this forum.
I believe they come set up as RAID 5 as standard when there are 4 drives, but from what you describe it sounds like RAID 1. RAID 1 would give you approximately half of the total drive capacity, which would be 8 TB / 2, or around 4 TB. If it were RAID 5, usable space is individual drive size x (n-1), where n is the total number of drives. On a four-drive RAID 5 system you should get 3/4 of the total space, or around 6 TB.
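If you want to sanity-check which layout you have from the capacity the unit reports, the arithmetic is easy to script. A quick sketch, assuming four equal 2 TB drives (change n and size_tb to match your unit):
$ n=4; size_tb=2
$ echo "RAID 1 usable: $(( n * size_tb / 2 )) TB"    # mirrored, so half the raw capacity
$ echo "RAID 5 usable: $(( (n - 1) * size_tb )) TB"  # one drive's worth of space goes to parity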
Looking at the RAID 1 HowTo, it appears to be instructions for a 2-disk system where the disks are essentially exact copies of each other, and it is pretty straightforward. However, I am not sure how a four-disk RAID 1 works; I imagine you have two pairs of mirrored disks with the data striped across the pairs, so you most likely can’t just take each disk out individually as shown in that tutorial. You will probably have to use the RAID 5 HowTo instead. I went back and looked at the process I wrote and don’t see anything that would cause any hiccups. When you rebuild the array, it should automatically detect that the drives were part of a RAID array, since you don’t actually specify the layout in the process; there is metadata on the drives that tells Ubuntu how they were set up, as long as you make sure they are attached in the proper order. If there is anyone out there who knows more about the inner workings of RAID 1, please sound off!
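If you want to check that metadata yourself before committing to either HowTo, mdadm can read it straight off each drive. Assuming the drives are attached to a Linux machine and the data partition is the fourth one on each disk (as it is in the RAID 5 transcript further down this thread):
$ sudo mdadm --examine /dev/sda4    # repeat for sdb4, sdc4, sdd4
The output shows the RAID level, the number of devices in the array and this disk’s position in it, which tells you exactly which rebuild process applies.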
If you are not really computer savvy, I’d recommend seeking data retrieval services from a pro. If you are good at tinkering, then I believe my HowTo should work.
I hope you make it through this without losing data. Good luck!
Exactly the same thing has happened to me. I waited until now to update the FW.
Did the manual update.
Waited.
No http access.
ICMP was working fine.
I’ve ended up taking one of the disks out (that way, if I screw up, only one copy of the data is lost), then mounted the RAID in degraded mode on a PC booted into Ubuntu 11 off a CD. Worked a treat. Copied the 1 TB to another drive, and now I’ve come back to see if I can get the drives working.
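For anyone wanting to do the same, here is roughly what that looks like from the Ubuntu live session. A sketch only, assuming a mirrored pair with the data on the fourth partition; device names will vary, and on some configurations there is an LVM layer on top, as in the RAID 5 transcript further down this thread:
$ sudo mdadm --assemble --run /dev/md0 /dev/sdb4   # --run starts the array even though it is degraded
$ sudo mkdir -p /mnt/nas
$ sudo mount -o ro /dev/md0 /mnt/nas               # mount read-only so the recovery can't touch the data
Mounting read-only is the important bit: it keeps the surviving copy untouched while you copy everything off.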
Have to say I’ve had way too many problems with the WD SS and I’m looking for another solution, but right now I just want it to WORK!
I’m using my ShareSpace at home. Reading through the release notes and the problems people are having, does this latest firmware update have any improvements for a home user?
If it’s working without issue… don’t try and fix it! :smileyvery-happy:
If you are having problems and want the firmware to fix things… then back up your data and be prepared for a complete rebuild if things go south on the update. When I applied the firmware, it toasted the unit to the point where it was inaccessible, but once it did a full rebuild the firmware was updated and the unit worked fine… just back up your data!
Good luck!
Thanks for the reply. My ShareSpace is running okay, mainly because I haven’t updated the firmware in a long time :smiley: When I first got my SS I had a lot of problems, but since I’ve left it alone it runs slowly but reliably. I had thought of replacing the 1 TB drives with 2 TB ones, but now that the price of hard disks has rocketed, I’ve decided to just leave the SS as is and buy a different NAS.
Like everybody else, I updated the firmware to 2.3.01 and did not get any error messages, but ended up with all 4 LEDs showing amber on my 4x1TB RAID 5 WD ShareSpace and nothing showing on the home network.
Using ssh from a Linux machine, I was able to get into the WD ShareSpace OS, assemble the RAID 5 array and recreate the LVM volume on top of it:
$ ssh admin@192.168.1.130
$ su -
password XXXXX
$ mdadm --assemble --force /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
mdadm: /dev/md2 has been started with 3 drives (out of 4).
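# Note: the array came up degraded (3 of 4 members); --force was needed because
# the members were left out of sync by the failed update. The pvcreate/vgcreate/
# lvcreate steps below recreate only the LVM metadata - the ext3 data on the
# array is untouched. This works because the ShareSpace uses a single logical
# volume spanning the whole array, so an LV recreated with the same extent
# count (714182 here; yours may differ) lands on the same blocks as the original.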
$ pvcreate /dev/md2
No physical volume label read from /dev/md2
Physical volume "/dev/md2" successfully created
$ vgcreate lvmr /dev/md2
Volume group "lvmr" successfully created
$ lvcreate -l 714182 lvmr -n lvm0
Logical volume "lvm0" created
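# With the logical volume recreated, fsck can find the filesystem again; as the
# output below shows, it falls back to a backup superblock and replays the journal.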
$ fsck /dev/lvmr/lvm0
fsck 1.38 (30-Jun-2005)
fsck: WARNING: couldn't open /etc/fstab: No such file or directory
e2fsck 1.38 (30-Jun-2005)
Couldn't find ext2 superblock, trying backup blocks...
ext3 recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
NASRAID: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Filesystem contains large files, but lacks LARGE_FILE flag in superblock.
Fix<y>?
yes
Pass 3: Checking directory connectivity
/lost+found not found. Create<y>?
yes
Pass 4: Checking reference counts
Pass 5: Checking group summary information
...
Free blocks count wrong for group #22316 (32254, counted=29092).
Fix<y>?
yes
Free blocks count wrong (719813722, counted=646135193).
Fix<y>?
yes
...
Free inodes count wrong for group #22316 (16384, counted=16367).
Fix<y>?
yes
Free inodes count wrong (365674484, counted=365312054).
Fix<y>?
yes
NASRAID: *****FILE SYSTEM WAS MODIFIED*****
NASRAID: 362442/365674496 files (6.3% non-contiguous), 85187175/731322368 block
$ mount -t ext3 /dev/lvmr/lvm0 /DataVolume -o rw,noatime
$ exportfs -av
exporting *:/DataVolume/Download
exporting *:/DataVolume/Public
exporting *:/DataVolume/tc
Note that the last part (exportfs -av) is required if you had an NFS share set up on the WD ShareSpace and need to connect to it from a Linux machine to copy off data.
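From the Linux machine doing the copying, mounting the re-exported share is then a single command. A sketch, assuming the unit’s address from the ssh step above and the Public share (adjust both as needed):
$ sudo mkdir -p /mnt/sharespace
$ sudo mount -t nfs 192.168.1.130:/DataVolume/Public /mnt/sharespace
$ rsync -a /mnt/sharespace/ /path/to/local/backup/   # or plain cp -a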
I had a dialogue with WD about my problems; while they apologised, they were not willing to help with the Linux commands required to rebuild the array.
So, after copying my data off the WD ShareSpace, I set it to rebuild the RAID 5 array via the GUI.
I got a message saying it would take 16-17 hours, but in fact it took about four days (the log below reads newest first):
03/20 22:33:27 wd-1-4tb daemon.warn wixEvent[1016]: Volume Status - Volume 'DataVolume' is resyncing, 80% had completed.
03/20 02:49:09 wd-1-4tb daemon.warn wixEvent[1016]: Volume Status - Volume 'DataVolume' is resyncing, 60% had completed.
03/19 07:04:37 wd-1-4tb daemon.warn wixEvent[1016]: Volume Status - Volume 'DataVolume' is resyncing, 40% had completed.
03/18 09:30:06 wd-1-4tb daemon.warn wixEvent[1016]: Volume Status - Volume 'DataVolume' is resyncing, 20% had completed.
03/17 10:06:34 wd-1-4tb daemon.warn wixEvent[1016]: Volume Status - Volume 'DataVolume' is resyncing, 0% had completed.
After the rebuild appeared to complete - I did not see 100% in the log - the power light stopped blinking and stayed solid green, and the disk LEDs switched from amber to green.
On reboot, though, the disk LEDs are amber, the power light is solid green, and the GUI reports the array as failed.
So, I guess the firmware update really did kill my ShareSpace…