Datavolume doesn't exist! message

Thanks Nathan,

However, my post was in the context of my previous posts in this thread - where you’ll find all relevant pieces and understand why it won’t start the rebuild automatically. Apologies for not re-inserting…

But, for the record, the <blah> part is:

~ $ mdadm --assemble -f /dev/md2 /dev/sd[abcd]4
mdadm: cannot open device /dev/sdd4: No such device or address
mdadm: /dev/sdd4 has no superblock - assembly aborted


For additional info:

~ $ pvdisplay
  Incorrect metadata area header checksum
  /dev/sdd3: open failed: No such device or address
  /dev/sdd4: open failed: No such device or address
  Incorrect metadata area header checksum
  /dev/sdd3: open failed: No such device or address
  /dev/sdd4: open failed: No such device or address
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  /dev/sdd3: open failed: No such device or address
  /dev/sdd4: open failed: No such device or address
  --- NEW Physical volume ---
  PV Name               /dev/sda4
  VG Name
  PV Size               5.45 TB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               yFd455-sK3y-Zvcl-oBhn-eGAI-FsmA-3ScJD4

  --- NEW Physical volume ---
  PV Name               /dev/sdb4
  VG Name
  PV Size               5.45 TB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               QnU0sF-Slnc-eBAH-TBHT-GwXp-zPM8-puJCwO

Just by way of an update: I had to use fdisk to create a partition structure on the new disk to match the others. After that, the superblock was still missing for sdd4, so I found out you can use mdadm --create to create a new RAID; it's allegedly smart enough to realise there's an existing RAID configured, and thus the data *may* not be overwritten.

sdd should look like this (this is an 8 TB ShareSpace):

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdd1               1          26      208813+  fd  Linux raid autodetect
/dev/sdd2              27         156     1044225   fd  Linux raid autodetect
/dev/sdd3             157         182      208845   fd  Linux raid autodetect
/dev/sdd4             183 18446744073709527469 18446744073514118086+  fd  Linux raid autodetect

Then I tried:

mdadm --assemble -f /dev/md2 /dev/sd[abcd]4
mdadm: no RAID superblock on /dev/sdd4
mdadm: /dev/sdd4 has no superblock - assembly aborted
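As an aside, before re-creating an array over existing members it's usually worth checking what mdadm can still read from each disk. A minimal sketch, using the device names from the output above; --examine is read-only, so it can't make things worse:

```shell
# Inspect the per-disk RAID superblock on each member (read-only):
#   mdadm --examine /dev/sda4
#   mdadm --examine /dev/sdd4    # expect "No md superblock detected" here
# Comparing the Array UUID, level and device count across the healthy
# disks confirms the layout before any --create is attempted.
```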

Then:

mdadm --create /dev/md2 --verbose --level=5 --raid-devices=4 --spare-devices=0 /dev/sd[abcd]4
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: /dev/sda4 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Fri Apr  3 13:54:17 2009
mdadm: /dev/sdb4 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Fri Apr  3 13:54:17 2009
mdadm: /dev/sdc4 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Fri Apr  3 13:54:17 2009
mdadm: size set to 1952050048K
Continue creating array? y
mdadm: array /dev/md2 started.

Then:

/ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid1 sdc2[2] sdb2[1] sda2[0]
      1044160 blocks [4/3] [UUU_]

md2 : active raid5 sdd4[4] sdc4[2] sdb4[1] sda4[0]
      5856150144 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [>…]  recovery =  4.1% (80921856/1952050048) finish=1865.2min speed=16718K/sec
md0 : active raid1 sdc1[2] sdb1[1] sda1[0]
      208768 blocks [4/3] [UUU_]

unused devices: <none>

…wonder if there will be any data when the “recovery” has finished.  I somehow doubt it, but I’ll let you know whether my stumbling around ultimately yields a positive outcome.

Thanks for that - sometimes these details end up changing, and if you are successful, the extra info might be helpful to the next person who comes along.

The ShareSpace should be able to add the new drive back into the array and rebuild if it is completely blank (all zeros - a quick erase doesn’t count, for some reason).  The mdadm method does need the disk to be partitioned properly, and I’d usually do this using some other command to save the formatting of the array and then apply it to the disk - I can’t recall which yet.
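The partition-copying command Nathan may be thinking of is sfdisk; a minimal sketch, assuming /dev/sda is a healthy array member and /dev/sdd is the blank replacement. This is destructive on the target disk, so double-check the device names first:

```shell
# Dump the partition table of a healthy member, then replay it onto the
# replacement (DESTRUCTIVE on /dev/sdd -- verify device names first!):
#   sfdisk -d /dev/sda > /tmp/sda-layout.txt
#   sfdisk /dev/sdd < /tmp/sda-layout.txt
```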

I’ll be crossing my fingers for you, although if the array status shows three disks as “up”, it looks as though it may have kept the existing configuration.  My favorite command for this sort of thing is ‘watch’, as in

watch -n 60 cat /proc/mdstat

 which will show the RAID array status and update it every 60 seconds (until you press Ctrl-C).

Well, there’s actually some good news! By way of an update.

After the re-sync completed (successfully for the first time!!), and after following a few more steps and a little more mucking around (as per @macwolf on page 4 of this thread), I was able to mount my DataVolume and recover SOME of my data.

/DataVolume was back (although not according to the web admin pages - I could map it!). However, one of my shares appeared in the listing but its contents were NOT actually there.

I’d skipped the fsck step because it was erroring, so after a reboot (and a repeat of @macwolf’s steps), I tried to run fsck, with the following results:

/dev $ fsck.ext3 /dev/lvmr/lvm0
e2fsck 1.38 (30-Jun-2005)
The filesystem size (according to the superblock) is 1463744512 blocks
The physical size of the device is 731472896 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>?

I said “N” to this, and it started scanning the inodes quite happily, until many of these forced me to quit:

Error reading block 731513067 (Invalid argument) while doing inode scan.  Ignore error<y>?
yes

Force rewrite<y>?
yes

A few questions:

  1. How do I resolve this superblock issue?

  2. If I can solve it, will running fsck possibly bring my other data back?

  3. Should I just quit now, “clean” the drives, and completely re-initialise the RAID through the admin pages?!!
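For what it's worth, the usual approach to a damaged ext3 superblock is to try one of its backup copies; a sketch using the device path from the fsck output above. Whether it helps here is doubtful, since the reported device is half the filesystem's expected size:

```shell
# List where the backup superblocks would live (mke2fs -n is a dry run,
# it writes nothing to the device):
#   mke2fs -n /dev/lvmr/lvm0
# Then point fsck at one of the backups, commonly the one at block 32768:
#   fsck.ext3 -b 32768 /dev/lvmr/lvm0
# If the device really is half the filesystem's size, the missing half of
# the volume (and its data) is simply absent -- no superblock fix changes that.
```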

I guess the good news is I seem to have got a large amount of my data back and copied it off the drive…

Couldn’t have done it without this thread and others!

The superblock issue points to an inconsistency with the file system and probably can’t be repaired automatically.  I’m glad you were able to get a good portion of your data back!  I’ll leave it to you to decide when to cut your losses and reinitialize the ShareSpace.

Once you are ready to stop data recovery efforts, however, the best thing you can do is take the drives out of the ShareSpace and write zeros to each entire disk.  You can use any utility for this, but Data Lifeguard Diagnostics for Windows and Disk Utility for OS X both work well.  You have to actually write zeros; a “quick erase” isn’t sufficient.  Once they’re all zeroed out, put them back in the ShareSpace and turn it on.  It will take a day or so, but the ShareSpace will rebuild and reinitialize itself, and you’ll be good as new.
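The zeroing step can also be done from any Linux machine with dd; a minimal sketch, where /dev/sdX is a placeholder for the real disk (identify it with lsblk or fdisk -l first). The runnable part below demonstrates the idea on a scratch file rather than a real device:

```shell
# On a real disk this would be (DESTRUCTIVE -- check the device name!):
#   dd if=/dev/zero of=/dev/sdX bs=1M
# Safe demonstration of the same idea on a 4 MiB scratch file:
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=4 2>/dev/null
# cmp against /dev/zero confirms the file really is all zeros:
cmp -s -n $((4*1024*1024)) /tmp/scratch.img /dev/zero && echo "all zeros"
```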

Hi, I’m Simon from Bangladesh . . .

I’m having a problem accessing my ShareSpace drive . . . All 4 hard disks are showing as enabled, but there is a message saying something like “Datavolume is not accessible” . . .

But I don’t know how to recover my data or get access to the WD ShareSpace drive. This is a huge problem for me because I’m from a Fashion CAD department and all of my patterns are saved there! Now I want them back by hook or by crook. I don’t know how!!! :mansad:

I’m familiar with Windows, but I’m no expert in Linux . . . though I can work with it. Is there any hope of recovering my files (patterns)?

I’m begging everyone for help with this matter!!!

Simon - first you need to establish a little more information - what happened to your NAS? How? What’s its current state? You need to be very specific, because if you’re given the wrong information, you might put yourself in major trouble…

Thanks for the reply,

I don’t know about NAS, but the current status is that my 4 hard disks are showing as being in good condition, yet I can’t access my shared folder. When I try to access it, it says administrative permission is denied. The WD web dashboard shows that the Datavolume has failed. . .

Please Help Guys . . . Plezzzzzzzzzzzzzzzzzzzzzzzzzz

I can’t access my WD drive . . . It shows “Datavolume doesn’t exist”

Ok, sure we’ll all do what we can…

Firstly, can you SSH in to the box? To do this, you’ll need an SSH client such as Putty (google it), and then you have to log in as root/welc0me

Then, I suggest you read @fibrev’s post on page 2 of here.  NOTE: some of this will depend on what’s wrong with your drive, perhaps post the output of pvdisplay, vgdisplay and lvdisplay here.
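For reference, the diagnostic commands mentioned above are all read-only queries; a minimal sketch of what to run once you're in over SSH:

```shell
# All read-only -- safe to run, then post the output here:
#   pvdisplay          # LVM physical volumes (the sd[abcd]4 partitions)
#   vgdisplay          # the volume group built on top of them
#   lvdisplay          # logical volumes (the DataVolume lives on one of these)
#   cat /proc/mdstat   # current state of the md RAID arrays
```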

I’ll qualify this with this statement - if you ABSOLUTELY must have this data back, I suggest you take it to a data recovery professional - I can only make some amateur suggestions based on my own “hacking” experiences.

Hi.

I followed your steps. Thanks a lot for this thread; it really helped me. But after I restarted the device, I can’t access or ping it. What should I do? I hope you can help me with this.

Footleg,

I hope you can help me with this. The device is still not responding, even though all the disks show green lights now. It is also still not visible in the WD Discovery tool. I rebooted the device after I finished the steps I followed from your thread.

And what does the WD support team do in order to support their clients?

Nothing… there are thousands of us clients in this situation (a Google search will give an idea of the catastrophe), and WD continues to sell this “NAS”

Well, the WD support team encourages customers to keep up-to-date backups and assists customers in RMAing failed drives and replacing them so the unit can rebuild itself.

When a catastrophic failure occurs, the only safe thing that can be done is to contact a data recovery service.  Each major failure is different and trying to bring the array up incorrectly usually destroys any remaining data on the drives.  Even in this thread the overlying theme is that customers who try to manually rebuild their array can’t.  This is because it often isn’t possible.

I am having problems using PuTTY to access my ShareSpace.  I can connect just fine and enter admin for the user name; however, it is not accepting welc0me as a password.  Obviously SSH is enabled on the device, and the screen even tells me that welc0me should be the password, but it is not working.  Any ideas?

The superuser account name is “root”, not “admin”.  You know the password, so you shouldn’t have any problems logging in.

Because the superuser account has access to everything in the system, it is an excellent and efficient way to destroy any running Unix or Linux-based system, including your ShareSpace.  Proceed with caution.

Nathan,

Using root at the prompt and inputting the password still gets me an Access denied message.  Any other ideas?

By the way…using “admin” and “admin” as a password does appear to get me into a read-only mode but I cannot execute any commands.  Didn’t know if that would help diagnose or not…

Hello,

I purchased my 4 TB ShareSpace on Monday (22/07/2011), and by Thursday it had failed: 4 drives failed, a red blinking light, and the “datavolume does not exist” message. The user manual was only helpful in telling me what the amber and red lights meant. I have worked in the IT industry for 30 years and have never seen 4 drives fail at once. I assume that is not really the case, but I can’t be bothered attempting a fix when the device lasted less than 4 days.

I have e-mailed the WD support, but after reading the posts here, do not like my chances of a prompt or worthy reply.

I am now more interested in how to clear the data I was copying to the NAS rather than recovery, as I intend to return the device to the reseller for a refund and look at other NASs. I only purchased this device because the My Book Premium II would not work with Win 7 Professional, even after installing the new software that supposedly makes it work. I now have that device attached to an old Win XP machine and it works fine. I’m now copying the data to another drive to transfer to my i7 PC. I should have known.

Does anyone know how I can delete the data on the drives without too much effort? I am not a Linux/Unix user, so I don’t know where to input the commands.

Pull each disk out one by one, insert it into a PC, and run the Data Lifeguard tools to zero the drive. You can also do health checks this way… Good luck…