Datavolume doesn't exist! message

I am experiencing the same sort of problems as Storm8.

I really hope that somebody can help.

Recovery of the data is the most important thing to me at the moment.

Hello,

I also have the same issue with my 4TB Sharespace.

Need some detailed help in getting data off the drives…

Please help.

Thanks

Deepak

I am also having the same problem as Storm8.

I am having the same problem, and it looks like cohiba88 figured out that it is due to the RAID thinking it is unable to rebuild from 2 drives. After testing I found that drives 2 and 3 (out of 0-3) are not active and cannot be rebuilt into the RAID, hence the "assembled from 2 drives - not enough to start" message. So we need to start the array from disks 0 and 1. cohiba88, can you help with that, and maybe write a “Datavolume doesn’t exist! message for dummies” guide? Please help. THANKS.

Hi hmblm1245, I pretty much gave up on running RAID5. I changed the setup to not use RAID5 and just span all the disks. I know this is risky, but it’s less risky than my volume failing every other month and having to restore.

I think my problem has to do with my power setup. Where I am, I have frequent power interruptions lasting about 1-2 minutes, 5-10 times a day. My UPS lasts about 15 minutes. When the interruptions come close together, the UPS does not get a chance to recharge and then cannot sustain an outage of even 20 seconds. So, on the first outage the RAID5 volume breaks because there is an inconsistency in the parity data, and the system rebuilds the drive; on a subsequent outage it then breaks the RAID5 completely. So I end up with a bunch of completely healthy (physically) drives, but no data!!

So my solution was to stripe the data without parity, so that when the power does go out there is no consistency check to fail (I think this is how it works). I tested it by yanking the power, and then doing it multiple times.

The next part was to make the ShareSpace talk to my UPS. It couldn’t, because my UPS did not have a USB port, so I went out and bought an APC SmartUPS, which the ShareSpace CANNOT recognize!!! There is absolutely no documentation about UPS support on the WDC site. So now I just connect the UPS to my Mac, which instantly recognizes it, and I will try to create a script to log into the ShareSpace and shut it down gracefully when there is a low-battery condition.
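If I get the script working, it will probably look something like this. This is a rough, untested sketch: it assumes apcupsd is installed on the Mac (so `apcaccess` is available), that SSH is enabled on the ShareSpace, and the hostname `sharespace` is just a placeholder for the unit's address:

```shell
#!/bin/sh
# Untested sketch: shut the ShareSpace down gracefully when the UPS
# battery runs low. Intended to be run from cron every minute or so.

# Pull the battery charge percentage out of `apcaccess status` output,
# which contains a line like "BCHARGE  : 45.0 Percent".
parse_charge() {
    awk -F': *' '/^BCHARGE/ { print int($2) }'
}

if [ "$1" = "run" ]; then
    charge=$(apcaccess status | parse_charge)
    # Shut the NAS down over SSH once the battery drops below 20%
    if [ -n "$charge" ] && [ "$charge" -lt 20 ]; then
        ssh root@sharespace 'shutdown -h now'
    fi
fi
```

On the ShareSpace side, `shutdown -h now` is a guess at what the busybox firmware accepts; `poweroff` may be needed instead.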

Will I ever get a Sharespace again?  NEVER!!!   I’ll stick to independent USB / Network Drives and partition my data instead of centralizing it, together with a 1:1 backup.

I also had my ShareSpace connected to a UPS, and the power flickered off and on; right after that, it caused this issue. I am guessing the same thing happened. Are there any other options I could try to help get the rest of us poor WD ShareSpace owners’ data back?

I have been messing around with mdadm and found that when running mdadm --detail /dev/md[0123] I get:

/dev/md0:
        Version : 00.90.01
  Creation Time : Mon Mar  2 01:11:15 2009
     Raid Level : raid1
     Array Size : 208768 (203.91 MiB 213.78 MB)
    Device Size : 208768 (203.91 MiB 213.78 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Aug 13 09:53:18 2010
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : 70a4b827:72b11c03:f91b9113:7d13d1a8
         Events : 0.3531138

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
/dev/md1:
        Version : 00.90.01
  Creation Time : Mon Mar  2 01:11:15 2009
     Raid Level : raid1
     Array Size : 1044160 (1019.86 MiB 1069.22 MB)
    Device Size : 1044160 (1019.86 MiB 1069.22 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Thu Aug 12 23:52:46 2010
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : 1858ad4b:ae658c2a:d86952b3:959a615c
         Events : 0.6798

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
mdadm: md device /dev/md2 does not appear to be active.
mdadm: md device /dev/md3 does not appear to be active.

When I try to run the first line of fibreiv’s fix, mdadm -D /dev/md2, I get:

mdadm: md device /dev/md2 does not appear to be active.

same thing that the others are having.

When I run the 2nd line to attempt to assemble the array:

mdadm --assemble /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
mdadm: /dev/md2 assembled from 2 drives - not enough to start the array.

How would I format this to force the array to start, if that is even the right solution?
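From the mdadm man page, I believe the force syntax would be something like the following. I have not dared run it yet: a forced assemble writes to the member superblocks, so anyone trying this should image the disks first if at all possible:

```shell
# DANGEROUS: force assembly even though two members look stale/failed.
# This rewrites superblock state; image the disks before trying it.
mdadm --assemble --force /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4

# If it assembles but refuses to start degraded, --run starts it anyway
mdadm --run /dev/md2
```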

Not sure what this firmware update is. My ShareSpace does not update to it; I am still on the official version 2.1.92. However, I stumbled on the release notes for 2.2.8, which (see the last item quoted below) should fix the RAID 5 issue. Now how do I apply it, or where do I download it?

Resolved full configuration restore causing failed RAID 5

http://support.wdc.com/download/notes/WD_ShareSpace_release_notes_2_2_8_r2-tw2.pdf

These release notes provide information on the latest posting of firmware
for WD ShareSpace Network Storage System.
These release notes provide information on the following:
• Version 2.2.8
• Version 2.1.92
• Version 2.1.3
• Version 2.1.1
• Version 1.2.2
• Version 1.0.5 (Initial Release)
• Firmware Installation Procedure

Version 2.2.8:
Resolved Issues:
• Added USB driver to resolve performance issues for NTFS formatted drives
• Added support for HFS+J formatted USB drives
• Added RAID 10 support
• Added support for Windows Server 2008 ADS
• Added AFP/TimeMachine support
• Updated Twonky Media Server
• Corrected CTorrent to accept torrents with an apostrophe (‘)
• Corrected CTorrent to accept filenames that start with non-alphanumeric characters that were rejected
• Corrected CTorrent to accept torrent files with a back quote (‘) in the filename
• Corrected CTorrent to accept torrent files with spaces in the name
• Corrected BitTorrent label in web UI to CTorrent
• Resolved full configuration restore causing failed RAID 5

Please help! I’m having the same problem (“Data volume doesn’t exist!”) but don’t even understand the 16+ hour fix description. (I’m a photographer, not an IT bod, unfortunately.) Please could you explain the steps as if to an ■■■■■? (e.g. I don’t know what SSH means). (I checked for firmware updates and got the message that it was up to date.) All of my photographs are on there. Yes, I do have this year’s work in two other places, but it’s the historical stuff that’s only on there. RAID seems utterly pointless if you have to have a second RAID array to back up the first!

Best

Calam

I am also facing the same problem as ‘hmblm1245’.

May I know how to solve the above problem?

Thanks

:cry: 

:cry:

My ShareSpace has become even worse because I kept trying to recover my data.

Now the ShareSpace web interface is corrupt. I am unable to log in through the web; I guess the OS interface got messed up.

I tried using PuTTY to telnet to the machine, and I am still able to log in.

Can anyone help me re-apply the firmware using Linux commands?

I guess updating the firmware would solve the above problem.

Sorry, but I don’t know Linux.

:cry:

Thanks

So, thanks to all on this thread; I have found it invaluable in recovering my WD ShareSpace!

However, although I can rebuild the NAS, it doesn’t persist, and I have to do it every time the NAS reboots… sigh.

I tried to create /etc/fstab with an entry for mounting the raid, but it doesn’t work.

I have to do the following every time now:

lvremove /dev/lvmr/lvm0

lvcreate -l 714218 /dev/lvmr -n lvm0

fsck.ext3 /dev/lvmr/lvm0 -y

mount -t ext3 /dev/lvmr/lvm0 /DataVolume -o rw

It takes ages to check the filesystem, but then it mounts easily enough (until the next boot).

Any idea how to persist these settings  !?!?!?!
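In the meantime I have at least wrapped the ritual in a script, so the reboot dance is one command. (Untested beyond my own unit; the extent count 714218 and the lvmr/lvm0 names are specific to my setup, as above.)

```shell
#!/bin/sh
# remount-datavolume.sh: one-shot wrapper for the manual steps above.
# The VG/LV names and extent count match *my* unit only.
set -e
lvremove -f /dev/lvmr/lvm0            # -f skips the confirmation prompt
lvcreate -l 714218 /dev/lvmr -n lvm0
fsck.ext3 -y /dev/lvmr/lvm0           # slow, but required before mounting
mount -t ext3 -o rw /dev/lvmr/lvm0 /DataVolume
```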

what a ■■■■ NAS, I’m sorry, but I regret it.

Really useful  thread, guys. I think I’m nearly done (I hope!), trying to recover from the same problem and same cause (multiple power downs…). I’m a Linux nearly-newbie but have done an intensive course in the last 24 hrs.

Similar to others, 4 x 1TB drives in a RAID5.

I’ve reassembled md2 OK, although I had to force it (mdadm --assemble --force /dev/md2 /dev/sd[abcd]4)

pvcreate /dev/md2 → OK

vgcreate lvmr /dev/md2 → OK

lvcreate -l 714182 lvmr -n lvm0 → OK, although I had to use 714182 instead of the other values suggested in the thread.

fsck.ext3 /dev/lvmr/lvm0 -y → not OK. 

Output below :


"Couldn’t find ext2 superblock, trying backup blocks…

fsck.ext3: while trying to open /dev/lvmr/lvm0

The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:

     e2fsck -b 8193


I’m now out of time and willpower to persist any more; I don’t like the look of the above, and I am even considering professional data recovery (ugh!). Can anyone help?
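One avenue I haven’t fully tried yet, in case it helps anyone in the same spot: e2fsck can be pointed at a backup superblock, and `mke2fs -n` (dry run, writes nothing) will print where the backups should live for a filesystem of that size. Even this is a gamble on a damaged volume, so treat it as a last resort before the professionals:

```shell
# Dry run only: -n makes mke2fs print what it WOULD do, including the
# backup superblock locations, without writing to the device
mke2fs -n /dev/lvmr/lvm0

# Then point fsck at one of the reported backups, e.g. 32768
e2fsck -b 32768 /dev/lvmr/lvm0
```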

I’d be happy to type up my extensive notes for the benefit of others, if I ever actually get this problem solved.

Oh, and /etc/fstab was completely absent. I’ve tried to recreate it, but to no good effect. If I can just get the data recovered then I’ll be happy to factory reset the unit instead…

rofo2010, I took a couple of days to look into my ShareSpace.  There is no /etc/fstab file, nor should there be.  The file system list is actually stored in /etc/auto_mount_list and is used by /etc/init.d/S01auto_mountd to mount your file systems on startup.

If my ShareSpace is anything to go by,

mount -n -o noatime,usrquota /dev/vg0/lv0 /DataVolume

might bring your RAID array up, but you need to be extremely careful as it is very easy to destroy a RAID array and this could prevent you from utilizing a data recovery company in the future.  If you have any doubts, please contact data recovery first.

I have exactly this problem and MUST get my data back.

I am fairly comfortable with Linux shell admin, but have no experience with RAID admin or mdadm.

So it worries me that when I cat /proc/mdstat I get the following:

Personalities : [linear] [raid0] [raid1] [raid5]

md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]

      1044160 blocks [4/4] [UUUU]

md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]

      208768 blocks [4/4] [UUUU]

unused devices:


Can anyone give me some clear advice as to what commands I can use to put my device back to a RAID 5 system?
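From what I have read so far, a safe first step that changes nothing on disk is to examine the component superblocks, which should show what each drive thinks the RAID 5 array looked like (the sd[abcd]4 partitions being where the ShareSpace keeps the data array, going by the other posts in this thread):

```shell
# Read-only: prints each member's superblock (array UUID, raid level,
# device role, event counter) without modifying anything
mdadm --examine /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
```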

Ian

Nathan

My ShareSpace is version:

2.1.96 with MioNet 4.3.1.7 
built on Wed, 13 Jan 2010 18:00:00

My RAID is currently broken, and I need to re-assemble the RAID 5 “DataVolume”. (I am at the need-to-“force” stage, as it says “insufficient disks available”; since no one has actually reported a successful rebuild, I am waiting for more information rather than risk it.)

Do you think it will make matters worse if I update the firmware? For example, could it inadvertently re-format my precious disks during the update?

Ian

Do not under any circumstances update your firmware or make any changes to the ShareSpace!

Unfortunately, unless you really know what you’re doing, it isn’t safe to rebuild a broken RAID array, and even then the data recovery experts only work on copies, and they re-clone from copies after every attempt.  As I understand it, RAID-5 reconstruction is a very manual process.  You may want to go directly to a data recovery company for evaluation.  This will not be cheap, but they are experts for a good reason.

Hi Nathan

Update: the local data recovery guy was not successful. He just tried a few software packages (e.g. DiskInternals), so slowly that I gave up and figured I could do that myself, cheaper. But I’ve had no joy either, and have been unable to reconstruct a meaningful file system. One package (UFS Explorer) did recover some data (so I know it’s definitely there!), but with no meaningful filenames or folder structure.

Your suggestion to mount /dev/vg0/lv0 to /DataVolume makes sense, and has prompted me to take a quick tutorial on LVM. Here’s my output from vgdisplay, lvdisplay and pvdisplay, plus /proc/mdstat and mdadm --detail.

~ $ vgdisplay
  — Volume group —
  VG Name               lvmr2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                255
  Cur LV                1
  Open LV               0
  Max PV                255
  Cur PV                1
  Act PV                1
  VG Size               2.72 TB
  PE Size               4.00 MB
  Total PE              714182
  Alloc PE / Size       714182 / 2.72 TB
  Free  PE / Size       0 / 0
  VG UUID               BTmOyP-Jo0Z-0md0-Vwvv-rceE-063u-Ld3JUx

~ $ lvdisplay
  — Logical volume —
  LV Name                /dev/lvmr2/lvm2
  VG Name                lvmr2
  LV UUID                WdH1O7-3MNP-8S4Y-ZRvr-PaHm-3ojZ-vK36D9
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                2.72 TB
  Current LE             714182
  Segments               1
  Allocation             next free (default)
  Read ahead sectors     0
  Block device           253:0

~ $ pvdisplay
  — Physical volume —
  PV Name               /dev/md2
  VG Name               lvmr2
  PV Size               2.72 TB / not usable 2.00 TB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              714182
  Free PE               0
  Allocated PE          714182
  PV UUID               3TnJyc-8U78-jPX5-XXzc-eTxY-t4wn-oYFzv3

~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid1 sdd2[1] sdc2[0] sdb2[3] sda2[2]
      1044160 blocks [4/4] [UUUU]

md2 : active raid5 sdd4[3] sdc4[2] sdb4[1] sda4[0]
      2925293760 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sdd1[1] sdc1[0] sdb1[3] sda1[2]
      208768 blocks [4/4] [UUUU]

unused devices:
~ $ mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Wed Jun  9 11:51:06 2010
     Raid Level : raid5
     Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
  Used Dev Size : 975097920 (929.93 GiB 998.50 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sun Dec  5 08:24:19 2010
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 1af090f3:8b339e7b:198440bb:3f3884c2
         Events : 0.1075146

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      active sync   /dev/sdb4
       2       8       36        2      active sync   /dev/sdc4
       3       8       52        3      active sync   /dev/sdd4
~ $

From the above I infer that :

  • vg0 and lv0 don’t exist, but lvmr2 and lvm2 do… This is most likely my earlier error.
  • the PV of md2 is a little concerning: why is 2TB “not usable”, and why “Allocatable yes (but full)”? Do I need to recreate md2 or another md?

Part of me is tempted to mount /dev/lvmr2/lvm2 to /DataVolume. Any comments on the above before I bite the bullet? A similar dump of your own system would be helpful and informative, please.
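If I do bite the bullet, my plan is to mount read-only first, so nothing can write to the volume while I check whether the files are intact:

```shell
# Read-only mount: lets me inspect the data without risking any writes
mount -t ext3 -o ro /dev/lvmr2/lvm2 /DataVolume
```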

Still reluctant to spend >$1000 on professional data recovery, when I sense the fix is so close…

cheers

Robin

I don’t mind admitting that this is well out of my sphere of knowledge.  But I do hope that the output from my ShareSpace is helpful:

~ $ vgdisplay
  --- Volume group ---
  VG Name vg0
  System ID             
  Format lvm2
  Metadata Areas 1
  Metadata Sequence No 2
  VG Access read/write
  VG Status resizable
  MAX LV 255
  Cur LV 1
  Open LV 1
  Max PV 255
  Cur PV 1
  Act PV 1
  VG Size 2.72 TB
  PE Size 4.00 MB
  Total PE 714182
  Alloc PE / Size 714182 / 2.72 TB
  Free PE / Size 0 / 0   
  VG UUID X2rY12-ncJL-SeVG-2nTY-wkFY-i3oB-7fIC5x
   
~ $ lvdisplay
  --- Logical volume ---
  LV Name /dev/vg0/lv0
  VG Name vg0
  LV UUID o1R7Fc-koYb-3mMi-JEUG-syim-aYLL-5XVkaO
  LV Write Access read/write
  LV Status available
  # open 1
  LV Size 2.72 TB
  Current LE 714182
  Segments 1
  Allocation next free (default)
  Read ahead sectors 0
  Block device 253:0
   
~ $ pvdisplay
  --- Physical volume ---
  PV Name /dev/md2
  VG Name vg0
  PV Size 2.72 TB / not usable 2.00 TB
  Allocatable yes (but full)
  PE Size (KByte) 4096
  Total PE 714182
  Free PE 0
  Allocated PE 714182
  PV UUID kABig0-d7FO-fIKL-dSOz-2mjw-Kb4y-3wOp0h
   
~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] 
md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
      1044160 blocks [4/4] [UUUU]
      
md2 : active raid5 sdd4[3] sdc4[2] sdb4[1] sda4[0]
      2925293760 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      
md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      208768 blocks [4/4] [UUUU]
      
unused devices: <none>
~ $ mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Wed Apr 28 14:34:46 2010
     Raid Level : raid5
     Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
  Used Dev Size : 975097920 (929.93 GiB 998.50 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Mon Dec 6 12:13:16 2010
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 1918d3cd:1ca8afd7:7fcbff8e:57010521
         Events : 0.22494

    Number Major Minor RaidDevice State
       0 8 4 0 active sync /dev/sda4
       1 8 20 1 active sync /dev/sdb4
       2 8 36 2 active sync /dev/sdc4
       3 8 52 3 active sync /dev/sdd4

Got my DataVolume mounted and visible in Windows and got my files back thanks to this guide. I’m so thankful and relieved, as I thought I’d lost over 10 years of family videos and pictures.

I have noticed that the web interface doesn’t show a usage or size for the DataVolume, and gives a status of Failed:

Volume      Type    Disk Usage  Size     Status
DataVolume  RAID 5  Unknown     Unknown  Failed

Is there still no way to fix this issue permanently?

If not, and I take all the data off, can I do a complete fresh build of the NAS to start afresh and cure all the issues, and then copy the data back on?