ShareSpace LVM gone, but the system still works?

Weird issue. In brief:

  • The ShareSpace serves content just fine
  • /shares is empty apart from the USB drive
  • LVM shows nothing: no PVs, VGs, or LVs
  • The MD arrays are fine
  • The GUI shows no shares, and complains that the system is in an incorrect state if you try to add one
  • The system has rebooted with no LVM volumes

I discovered this when I noticed my backup had been failing for the past month: the source directory under /shares/ was gone.

pvscan is empty:

[root@WDShareSpace ~]# pvscan
  No matching physical volumes found

/etc/lvm has only the .cache file, and empty archive and backup directories.
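One thing worth checking is whether any trace of an LVM physical volume is left on the MD device. This is just a diagnostic sketch, assuming the data VG would have sat on /dev/md2 and that dd, strings, and pvck are present on the firmware:

# The LVM2 label ("LABELONE") is normally written in the second 512-byte sector of a PV
dd if=/dev/md2 bs=512 skip=1 count=1 2>/dev/null | strings | grep LABELONE

# pvck, if present, reports whether it can find the label and metadata areas
pvck /dev/md2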

MD is happy:


[root@WDShareSpace ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Jul 19 17:45:19 2010
     Raid Level : raid1
     Array Size : 208768 (203.91 MiB 213.78 MB)
  Used Dev Size : 208768 (203.91 MiB 213.78 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Mar 13 09:48:29 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : 61b75ab5:079556fe:6a2d51bf:39960281
         Events : 0.343817

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
[root@WDShareSpace ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Mon Dec 10 22:00:27 2012
     Raid Level : raid1
     Array Size : 1044160 (1019.86 MiB 1069.22 MB)
  Used Dev Size : 1044160 (1019.86 MiB 1069.22 MB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Wed Mar 13 09:40:44 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : b9f31595:1e6b62d7:cf02841f:7ccc6541
         Events : 0.65272

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
[root@WDShareSpace ~]# mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Mon Jul 19 17:57:04 2010
     Raid Level : raid5
     Array Size : 2925438336 (2789.92 GiB 2995.65 GB)
  Used Dev Size : 975146112 (929.97 GiB 998.55 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Mar 13 08:36:37 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 78036f01:5d790fea:0b175f1f:d5460cdd
         Events : 0.3657262

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      active sync   /dev/sdb4
       2       8       36        2      active sync   /dev/sdc4
       3       8       52        3      active sync   /dev/sdd4


The messages log has these entries from the last two reboots I did:

rcS: Failed to activate LV on data volume.
rcS: Failed to activate LV on extend volume.

I'm not sure why the LAST reboot resulted in an empty /shares if this had already happened before.

The error messages come from this block in rcS:


echo -n "Data volume LV activate : "
vgchange -a y vg0 >/dev/null 2>&1
RETVAL=$?
if [ "$RETVAL" == 0 ]; then
  echo "OK"
else
  rm -rf /dev/vg0
  rm -f /dev/mapper/vg0-lv0
  echo "Fail to activate LV"
  echo "rcS: Failed to activate LV on data volume." >> /var/log/messages
fi

echo -n "Extend volume LV activate : "
vgchange -a y vg1 >/dev/null 2>&1
RETVAL=$?
if [ "$RETVAL" == 0 ]; then
  echo "OK"
else
  rm -rf /dev/vg1
  rm -f /dev/mapper/vg1-lv1
  echo "Fail to activate LV"
  echo "rcS: Failed to activate LV on extend volume." >> /var/log/messages
fi
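So when vgchange fails, the boot script just deletes the device nodes and carries on. If any VG metadata were still on disk, retrying the activation by hand with verbose output ought to show why it fails. A rough sketch, assuming the stock LVM tools on the box and the volume group name vg0 from the script above:

# Rescan for physical volumes and volume groups, then try activating vg0 verbosely
pvscan
vgscan --mknodes
vgchange -a y -v vg0

# If activation succeeds, the LV nodes should reappear under /dev/vg0 and /dev/mapper
ls -l /dev/vg0 /dev/mapper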

The GUI shows no shares, and says the system is in an incorrect state if I try to add one.

Samba shares out of /DataVolume, which is the MD volume.

I changed my rsync script to use /DataVolume instead of the directory under /shares, so I have a backup again.
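For anyone doing the same, the change is just pointing rsync's source at the share's real location on the MD volume. A minimal example with made-up paths and a hypothetical destination host:

# Source path and destination are examples only, not my actual setup
rsync -av --delete /DataVolume/Public/ backuphost:/backups/sharespace/Public/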

What is the function of LVM on this system? Obviously it does something, or there wouldn't be errors in the GUI.

What the heck kind of Frankenstein’s Monster did WD create?

I did replace a failed drive back in December. When it rebuilt the array, did it undo LVM and forget to put it back? I looked through the PHP code, and sure enough, tons of LVM commands, but again: what is the function of LVM on the WD ShareSpace?

Also, I don't see any way to get the UUID. I'll have to check whether vol_id is in the package system and install it.
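In the meantime, if tune2fs or blkid happens to be on the firmware (an assumption, I haven't checked), the filesystem UUID can be read straight from the ext3 superblock on the MD device:

# Print the filesystem UUID from the ext3 superblock
tune2fs -l /dev/md2 | grep UUID

# blkid, if installed, reports the UUID for any block device
blkid /dev/md2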

Hello, I'm in the same situation after a factory reset.

Everything is working but without LVM.

rcS: Failed to activate LV on data volume.
rcS: Failed to activate LV on extend volume.
Mar 25 21:15:09 WDShareSpace syslog.info syslogd started: BusyBox v1.1.1
Mar 25 21:15:12 WDShareSpace daemon.info wixEvent[1041]: HDD Status - HDD 4 is found.
Mar 25 21:15:15 WDShareSpace daemon.info wixEvent[1041]: HDD Status - HDD 3 is found.
Mar 25 21:15:20 WDShareSpace daemon.info wixEvent[1041]: HDD Status - HDD 2 is found.
Mar 25 21:15:24 WDShareSpace daemon.info wixEvent[1041]: HDD Status - HDD 1 is found.
Mar 25 21:15:40 WDShareSpace daemon.info wixEvent[1041]: USB Status - USB device inserted on port 2.
Mar 25 21:15:41 WDShareSpace daemon.warn wixEvent[1041]: Network Link - NIC 1 link is down.
Mar 25 21:15:45 WDShareSpace daemon.info wixEvent[1041]: Network Link - NIC 1 link is up 1000 Mbps full duplex.
Mar 25 21:15:45 WDShareSpace daemon.info wixEvent[1041]: System Startup - System startup.
Mar 25 21:15:45 WDShareSpace daemon.info wixEvent[1041]: Network IP Address - NIC 1 got IP address 192.168.15.2 from DHCP server.

~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
1044160 blocks [4/4] [UUUU]

md2 : active raid5 sdd4[3] sdc4[2] sdb4[1] sda4[0]
5855125824 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
208768 blocks [4/4] [UUUU]

unused devices: <none>

~ $ pvdisplay

~ $
~ $ vgdisplay

No volume groups found
~ $
~ $ lvdisplay
No volume groups found
~ $

~ $ mount
/dev/root on /old type cramfs (ro)
proc on /old/proc type proc (rw,nodiratime)
/dev/md0 on / type ext3 (rw,noatime)
proc on /proc type proc (rw,nodiratime)
/proc/bus/usb/ on /proc/bus/usb type usbfs (rw)
/dev/pts on /dev/pts type devpts (rw)
trusteesfs on /trustees type trusteesfs (rw)
/dev/md2 on /DataVolume type ext3 (rw,noatime)
/dev/ram0 on /mnt/ram type tmpfs (rw)
/dev/md2 on /shares/music type ext3 (rw,noatime)
/dev/md2 on /shares/photo type ext3 (rw,noatime)
/dev/md2 on /shares/Public type ext3 (rw,noatime)
/dev/md2 on /shares/movie type ext3 (rw,noatime)