Datavolume doesn't exist! message

Well, isn’t this an interesting thread!

Can’t say I’d had many problems with our 8TB unit running RAID5, except that every month or two it would tell me one of the drives was missing, even though SMART always showed the disk as good. Clean it, pop it back in and it would rebuild with no problem.
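
For anyone wanting to do the same sort of check from a shell rather than the web UI, smartctl reports the same information (assuming smartctl is available on the box, or the disk is temporarily attached to another Linux machine):

~ $ smartctl -H /dev/sdb    # overall health verdict: PASSED or FAILED
~ $ smartctl -A /dev/sdb    # full attribute table: reallocated/pending sector counts etc.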

However, after the last occurrence and a discussion with WDC support, we decided to RMA the drive. The new 2TB disk duly arrived, I popped it in and it started to rebuild. THEN, shock horror, after the 60% complete message it all went to poop, and now I find myself with the “doesn’t exist” issue, albeit under slightly different circumstances to what others are seeing here.
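
For what it’s worth, before forcing an assemble it pays to check what each member disk thinks of the array; the per-disk superblocks record the event counters and last update times, so you can tell whether the members are only slightly out of step:

~ $ mdadm --examine /dev/sd[abcd]4    # per-disk superblock: event counter, array state, last update time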

After reading this and other threads carefully, I started down this path:

~ $ mdadm --assemble -f /dev/md2 /dev/sd[abcd]4
mdadm: forcing event count in /dev/sdd4(3) from 4283834 upto 4283840
mdadm: clearing FAULTY flag for device 3 in /dev/md2 for /dev/sdd4
mdadm: /dev/md2 has been started with 3 drives (out of 4) and 1 spare.
~ $ pvcreate /dev/md2
  No physical volume label read from /dev/md2
  Physical volume "/dev/md2" successfully created
~ $ vgcreate lvmr /dev/md2
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  Volume group "lvmr" successfully created
~ $ lvcreate -l 714329 lvmr -n lvm0
  Incorrect metadata area header checksum
  Logical volume "lvm0" created
~ $ fsck.ext3 /dev/lvmr/lvmr0
e2fsck 1.38 (30-Jun-2005)
fsck.ext3: while trying to open /dev/lvmr/lvmr0

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

fsck.ext3:
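
(I’ve since noticed the logical volume created above is lvm0, so the device node should presumably be /dev/lvmr/lvm0 rather than /dev/lvmr/lvmr0, which won’t have helped.) If the rebuild ever finishes cleanly I’ll have another go at fsck against a backup superblock, roughly along these lines; the block number below is only a guess at the usual defaults, and mke2fs -n will print where the backups would normally sit without writing anything:

~ $ mke2fs -n /dev/lvmr/lvm0         # dry run only: lists the expected backup superblock locations
~ $ e2fsck -b 32768 /dev/lvmr/lvm0   # retry the check against one of the backup superblocks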

Despite the fsck not looking good, the array does at least seem to be “recovering” something, according to the mdadm detail output:

~ $ mdadm -D /dev/md2
/dev/md2:
        Version : 00.90.01
  Creation Time : Fri Apr  3 13:54:17 2009
     Raid Level : raid5
     Array Size : 5854981248 (5583.75 GiB 5995.50 GB)
    Device Size : 1951660416 (1861.25 GiB 1998.50 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Apr 27 11:41:01 2011
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 13% complete

           UUID : b2c561ae:9acfcff0:2594d787:fb4eb047
         Events : 0.4283848

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       4       8       20        1      spare rebuilding   /dev/sdb4
       2       8       36        2      active sync   /dev/sdc4
       3       8       52        3      active sync   /dev/sdd4
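
In the meantime I’m keeping an eye on the rebuild via /proc/mdstat, which gives a rather more readable progress line than mdadm -D (watch may or may not be in the box’s busybox, but plain cat works regardless):

~ $ cat /proc/mdstat                # shows a [===>...] style progress bar, speed and finish estimate
~ $ watch -n 60 cat /proc/mdstat    # refresh it every minute, if watch is available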

I guess I’ll wait and see what happens, but I’m not very optimistic right now.

If anyone has anything to contribute to my situation, please feel free to do so! :neutral_face:

Thanks in advance.
