“Datavolume doesn’t exist!” message

Mine went something like the below, but in my case it ended up not working as I had a SECOND dud disk (2 out of 4!!), so I needed to prep the spare disk. I suspect you’ll need to do the same by creating partitions on the new disk that MATCH the existing disks - otherwise you won’t be able to add it to the array.
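
Roughly, the idea for matching the partitions is something like this - a sketch only, the device names are assumptions (check cat /proc/partitions first to see which disk is the new one), and I’m not certain the ShareSpace firmware even ships sfdisk; if not, you’ll have to create the partitions by hand with fdisk:

# dump the partition table of a known-good member (assumed /dev/sda here)
~ $ sfdisk -d /dev/sda > /tmp/parts.out
# write the same layout onto the replacement disk (assumed /dev/sdd here)
~ $ sfdisk /dev/sdd < /tmp/parts.out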

See how you go with this (when I paste this, the fsck command may get garbled); note that the recovery % will increase and take a long time:

~ $ mdadm --assemble -f /dev/md2 /dev/sd[abcd]4
mdadm: forcing event count in /dev/sdd4(3) from 4283848 upto 4283850
mdadm: clearing FAULTY flag for device 3 in /dev/md2 for /dev/sdd4
mdadm: /dev/md2 has been started with 3 drives (out of 4) and 1 spare.
~ $ pvcreate /dev/md2
  Incorrect metadata area header checksum
  Can't initialize physical volume "/dev/md2" of volume group "lvmr" without -ff
~ $ vgcreate lvmr /dev/md2
  /dev/lvmr: already exists in filesystem
~ $ lvcreate -l 714329 lvmr -n lvm0
  Incorrect metadata area header checksum
  Logical volume "lvm0" already exists in volume group "lvmr"
~ $ fsck.ext3 /dev/lvmr/lvm0
e2fsck 1.38 (30-Jun-2005)
fsck.ext3: while trying to open /dev/lvmr/lvm0
Possibly non-existent or swap device?
fsck.ext3:
~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
      1044160 blocks [4/4] [UUUU]

md2 : active raid5 sda4[0] sdb4[4] sdd4[3] sdc4[2]
      5854981248 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
      [>…]  recovery =  0.0% (1562368/1951660416) finish=1826.6min speed=17792K/sec
md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      208768 blocks [4/4] [UUUU]

unused devices: <none>
~ $ mdadm -D /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Fri Apr  3 13:54:17 2009
     Raid Level : raid5
     Array Size : 5854981248 (5583.75 GiB 5995.50 GB)
    Device Size : 1951660416 (1861.25 GiB 1998.50 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Thu Apr 28 01:07:51 2011
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : b2c561ae:9acfcff0:2594d787:fb4eb047
         Events : 0.4283850

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       4       8       20        1      spare rebuilding   /dev/sdb4
       2       8       36        2      active sync   /dev/sdc4
       3       8       52        3      active sync   /dev/sdd4
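
One thing worth checking if fsck complains the device doesn’t exist, as it did above: the logical volume may simply not be active, so the /dev/lvmr/lvm0 node never appears. A minimal sketch, assuming the volume group really is called lvmr as in the output:

~ $ vgscan
~ $ vgchange -ay lvmr
# if /dev/lvmr/lvm0 now shows up, retry the filesystem check
~ $ ls -l /dev/lvmr/
~ $ fsck.ext3 /dev/lvmr/lvm0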

I have the same issue with my 4TB ShareSpace.

I upgraded the firmware to 2.3.01, enabled SSH, and was able to get in.

Here’s what I did and the results…

~ $ mdadm --assemble /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
mdadm: /dev/md2 assembled from 1 drive - not enough to start the array.

~ $ mdadm --assemble -f /dev/md2 /dev/sd[abcd]4
mdadm: forcing event count in /dev/sda4(0) from 1814340 upto 1814640
mdadm: forcing event count in /dev/sdc4(2) from 1814340 upto 1814640
mdadm: /dev/md2 has been started with 3 drives (out of 4).

~ $ pvcreate /dev/md2
  No physical volume label read from /dev/md2
  Physical volume "/dev/md2" successfully created

~ $ vgcreate lvmr /dev/md2
  Volume group "lvmr" successfully created

~ $ lvcreate -l 714329 lvmr -n lvm0
  Insufficient free extents (714182) in volume group lvmr: 714329 required

// I'm a bit confused at this point - not sure the above result is on track - so I tried to continue with the fsck.ext3 command…
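
On that extent error: lvcreate is being asked for 714329 extents, but the volume group only reports 714182 free, so no logical volume gets created at all. The 714329 figure comes from the larger 8TB procedure; on a 4TB unit the correct count will be different, and it is worth confirming the numbers before writing anything. A hedged sketch of how to check (the 100%FREE form depends on the LVM version on the box, so treat it as an assumption):

~ $ vgdisplay lvmr
# note the "Total PE" and "Free PE" lines, then size the LV using them as a starting point, e.g.
~ $ lvcreate -l 714182 lvmr -n lvm0
# or, if this LVM build supports it:
~ $ lvcreate -l 100%FREE lvmr -n lvm0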

~ $ fsck.ext3 /dev/lvmr/lvmr0
e2fsck 1.38 (30-Jun-2005)
fsck.ext3: while trying to open /dev/lvmr/lvmr0
The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193

fsck.ext3:

// Then I printed the mdadm detail again…

~ $ mdadm -D /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Wed May 20 08:15:02 2009
     Raid Level : raid5
     Array Size : 2925293760 (2789.78 GiB 2995.50 GB)
  Used Dev Size : 975097920 (929.93 GiB 998.50 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Mon Feb 13 22:49:10 2012
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 0c5faefb:828743ea:4f044dd3:c01c2358
         Events : 0.1814644

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      active sync   /dev/sdb4
       2       8       36        2      active sync   /dev/sdc4
       3       0        0        3      removed
~ $

It doesn’t seem to be rebuilding or anything.

Kindly advise what further steps need to be done.
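
The mdadm -D output above shows device 3 as removed and no spares, so there is nothing for md to rebuild onto. A hedged sketch of how a missing member is normally re-added - assuming the dropped partition is /dev/sdd4 and that the drive itself actually tests healthy (if it is failing, cloning it to a new disk first is safer):

# check the per-device event counters and states before touching anything
~ $ mdadm --examine /dev/sd[abcd]4 | grep -E "Events|State"
# re-add the missing member; md should then start rebuilding onto it
~ $ mdadm /dev/md2 --add /dev/sdd4
~ $ cat /proc/mdstat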

So I came home from vacation in Aruba on the 17th to find my ShareSpace drive down… just as has been described here. I’ve had the unit operating flawlessly for 4 years, and my warranty expired in 11/2011, so I am expecting to pay through the nose. That’s cool - I guess the Aruba attitude hadn’t worn off yet - “One Happy Island”…

I read the threads and the online help with no clear direction. On the 18th I called tech support. Tier 1 support was thorough, but we could not figure out what we needed to do to solve the problem: it could be a drive, it could be the RAID configuration, or the box itself could have gone bad. I was given a case number after paying $14.95 for Tier 2 support.

I was contacted by Tier 2 support at 5 PM Monday (yesterday) and we could not figure it out. I was given instructions to check the drives using the Data Lifeguard Diagnostic utility and to phone right away if I received an error. I got through 2 drives, and an error was found on drive #3. I called Tier 2 right away.

The agent I spoke with has been with WD since 1984 (when a 40 MB HDD was HUGE). We worked together to diagnose the problem and found that the unit was probably bad, yet I still needed to engage a data recovery firm (there were no guarantees).

[deleted - for privacy violation]

Whether or not I get my data back is not the point of this post… I am cautiously optimistic. The point is that WD’s generosity in helping me through my problem was highly unexpected and is met with my deepest gratitude.

I’m going to their website as soon as I get the shipping labels emailed and buy 2 new 6TB drives from WD because of their generosity. Outcome aside, the level of customer service I received makes my $14.95 the best $14.95 I’ve EVER spent!

Thank You WD 

Nice story - hope you get your data back like I did…

Hello everyone.

I also have the “Datavolume doesn’t exist” message on my 8TB ShareSpace (RAID 5). To get my data back I followed the steps suggested by fibreiv.

Here is what I have done so far:

/ $ mdadm -D /dev/md2
mdadm: md device /dev/md2 does not appear to be active.

/ $ mdadm --assemble /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
mdadm: device 1 in /dev/md2 has wrong state in superblock, but /dev/sdb4 seems ok
mdadm: /dev/md2 assembled from 3 drives - not enough to start the array while not clean - consider --force.

/ $ mdadm --assemble --force /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
mdadm: clearing FAULTY flag for device 1 in /dev/md2 for /dev/sdb4
mdadm: /dev/md2 has been started with 3 drives (out of 4).

/ $ pvcreate /dev/md2
  No physical volume label read from /dev/md2
  Physical volume "/dev/md2" successfully created

/ $ vgcreate lvmr /dev/md2
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  Volume group "lvmr" successfully created

/ $ lvcreate -l 714329 lvmr -n lvm0
  Incorrect metadata area header checksum
  Logical volume "lvm0" created

So my current status is the following:

/ $ vgdisplay
  Incorrect metadata area header checksum
  --- Volume group ---
  VG Name lvmr
  System ID
  Format lvm2
  Metadata Areas 1
  Metadata Sequence No 2
  VG Access read/write
  VG Status resizable
  MAX LV 255
  Cur LV 1
  Open LV 0
  Max PV 255
  Cur PV 1
  Act PV 1
  VG Size 5.45 TB
  PE Size 4.00 MB
  Total PE 1429473
  Alloc PE / Size 714329 / 2.72 TB
  Free PE / Size 715144 / 2.73 TB
  VG UUID Eme4PM-vnKr-5Kmr-IyLr-pNDn-3YFA-j3OPHM



/ $ lvdisplay
  Incorrect metadata area header checksum
  --- Logical volume ---
  LV Name /dev/lvmr/lvm0
  VG Name lvmr
  LV UUID eQzAq1-097a-KZPp-xSDM-2wpk-5yUV-JqhEa6
  LV Write Access read/write
  LV Status available
  # open 0
  LV Size 2.72 TB
  Current LE 714329
  Segments 1
  Allocation next free (default)
  Read ahead sectors 0
  Block device 253:0



/ $ pvdisplay
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  --- Physical volume ---
  PV Name /dev/md2
  VG Name lvmr
  PV Size 5.45 TB / not usable 4.00 TB
  Allocatable yes
  PE Size (KByte) 4096
  Total PE 1429473
  Free PE 715144
  Allocated PE 714329
  PV UUID hg9IqT-L6yE-VlTw-pgGp-r6Y6-a11z-4b4jAL

  --- NEW Physical volume ---
  PV Name /dev/sdd4
  VG Name
  PV Size 262149.44 TB
  Allocatable NO
  PE Size (KByte) 0
  Total PE 0
  Free PE 0
  Allocated PE 0
  PV UUID hg9MqT-L2yE-VhTw-pgGp-r6Y6-a11z-4b4jAL



/ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
      1044160 blocks [4/4] [UUUU]

md2 : active raid5 sda4[0] sdd4[3] sdb4[1]
      5855125824 blocks level 5, 64k chunk, algorithm 2 [4/3] [UU_U]

md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      208768 blocks [4/4] [UUUU]

unused devices: <none>



/ $ mdadm -D /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Tue Aug 31 13:32:37 2010
     Raid Level : raid5
     Array Size : 5855125824 (5583.88 GiB 5995.65 GB)
  Used Dev Size : 1951708608 (1861.29 GiB 1998.55 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Sat Mar 17 18:02:30 2012
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 5f11c967:4dc0ff82:d1347bdf:dc09ce2e
         Events : 0.2492987

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      active sync   /dev/sdb4
       2       0        0        2      removed
       3       8       52        3      active sync   /dev/sdd4

Now when I try to run

   fsck.ext3 /dev/lvmr/lvmr0

I get the message some of you also got:

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193

fsck.ext3:

I also tried e2fsck -b 8193 /dev/lvmr/lvmr0, with the same result.

I am stuck now and have no idea how to continue from here. Should I try to just mount the raid? Is there anything else I can try?
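
On the question of just mounting it: once the logical volume exists, a read-only mount attempt is a comparatively low-risk way to see whether the filesystem is usable at all. A minimal sketch, assuming the LV really is /dev/lvmr/lvm0 and the data volume is ext3 like the other ShareSpace units in this thread:

~ $ mkdir -p /mnt/recovery
~ $ mount -t ext3 -o ro /dev/lvmr/lvm0 /mnt/recovery
~ $ ls /mnt/recovery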

Any help is highly appreciated.

Thanks

The command should be:

$ fsck.ext3 /dev/lvmr/lvm0

 …and not /dev/lvmr/lvmr0

Thank you for your answer - I just tried it with

/ $ fsck.ext3 /dev/lvmr/lvm0   and    / $ fsck.ext3 -b 8193 /dev/lvmr/lvm0

The result is the same as before.
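
One more thing that might be worth trying before giving up on fsck: -b 8193 is the backup superblock location for filesystems with 1 KB blocks, while a volume this large will almost certainly have been created with 4 KB blocks, where the first backup lives at block 32768. A sketch only, assuming the data volume was made with default ext3 parameters - mke2fs -n simulates creating a filesystem and just prints where the backup superblocks would be without writing anything, but double-check the -n flag on the unit’s old e2fsprogs before running it:

# list the backup superblock locations for a volume of this size (-n = don't actually create anything)
~ $ mke2fs -n /dev/lvmr/lvm0
# then point fsck at one of the listed backups, e.g. the usual 4 KB-block location:
~ $ fsck.ext3 -b 32768 /dev/lvmr/lvm0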

Hello… 

We’re dead in the water with our 8TB ShareSpace. It had been running in “degraded” status for the past month or so. We came in Monday to a unit with one bad drive - our number 3 drive was flashing yellow. The other three drives seemed to be working. From the interface we could see about a third of our folders, but we couldn’t access any files. We’ve replaced the bad drive, but it didn’t automatically reconstruct the RAID 5. We are able to see the drives through SSH - we can see the three drives that didn’t fail and their partitions. We need help. Any information you could give us would really be appreciated.

The Videolab Guys.
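
In case it helps while you wait for replies: a factory-fresh replacement drive usually won’t trigger a rebuild until it carries the same partition layout as the surviving disks and its partitions have been added back into the md arrays. A very rough sketch only - the device names are assumptions (check cat /proc/partitions to see which disk is the new one), the partition-to-array mapping should be confirmed against cat /proc/mdstat on your unit, and if sfdisk isn’t on the box the partitions have to be created by hand:

# copy the partition layout from a surviving disk (assumed /dev/sda) to the new one (assumed /dev/sdc)
~ $ sfdisk -d /dev/sda | sfdisk /dev/sdc
# add the new partitions back into the arrays listed in /proc/mdstat
~ $ mdadm /dev/md0 --add /dev/sdc1
~ $ mdadm /dev/md1 --add /dev/sdc2
~ $ mdadm /dev/md2 --add /dev/sdc4
~ $ cat /proc/mdstat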

I didn’t see what the dates were, but I thought I would chime in with my somewhat similar troubles: my 4TB ShareSpace just reformatted itself, and I have 3 TB of stuff on there that took countless hours to gather, some of it irreplaceable. The surge protector got switched off and I switched it back on; my 8TB ShareSpace started normally, but my 4TB kept blinking. I read the LED chart, which stated that I am most likely screwed, as it was rebuilding. I hope so much that it is only rebuilding one drive, but it seems to be rebuilding the whole RAID - 2000 minutes to go.

SO MY BOTTOM line is: don’t keep your data on a ShareSpace as your backup. They are not dependable enough. THANK GOD I took all my work files off and put them on the RAID in my computer (which works flawlessly) and have that backed up to a spare drive and to the CLOUD :o) I am going to be so pissed, and I am going to move to a Drobo or Netgear storage server instead; the ShareSpace is way too slow as well. Support doesn’t care, per my other previous issue… Time to go someplace else. Can’t depend on WD. I’ve been with WD since 1995 :o)