DataVolume doesn't exist! message

Nathan,

Any update on what I can try for a password to get into the system? welc0me does not work.

Sorry I took so long to reply.  I had to get hold of a ShareSpace I could test with (once again, thanks to WDJeremy on this one).

The ‘admin’ user is just a standard user and does not have administrative rights when using SSH.  This means you can look around and access the ShareSpace but not change any system files.  Unfortunately, this also means that it isn’t possible to change the superuser password without actually logging into that account first.

If ‘root’:‘welc0me’ doesn’t work, then there’s not much you can do.  If everything else is working properly, then there’s nothing to worry about.  If you are committed to accessing your ShareSpace’s running OS, then I would recommend backing up the data on the ShareSpace, erasing every drive in it, and letting it rebuild over a day or two.  Once it rebuilds, you should be able to copy your data back and SSH in using PuTTY.
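For what it’s worth, once the unit is back up, one quick way to test the login from a Windows machine is PuTTY’s command-line companion plink (or the PuTTY GUI itself: host name = the ShareSpace’s IP address, port 22, connection type SSH).  The IP address below is only a placeholder for whatever address your unit actually has:

C:\> plink -ssh root@192.168.1.100

When prompted, enter the password - welc0me on stock firmware, assuming the rebuild has reset it to the default.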

I have a 1.5 TB My Book World and I am getting the same message. When I log in and go to Status, I get the following:

System Information
Device Name MyBookWorld
Version 01.02.06 with MioNet 4.3.1.13
built on Thu Oct 21 10:19:41 CST 2010
Date & Time Fri, 05 Aug 2011 13:29:26
System Uptime 0 day, 0:28
IP Address 192.168.5.187
DataVolume Usage Failed

Can I please get a step-by-step guide on how to correct this issue? I am using Windows XP and downloaded PuTTY for this, but I am not the most knowledgeable and cannot get PuTTY to log in.

By the way, this is a single-disk, not a multiple-disk, World Book.

Last 5000 System Log Entries
 08/05 14:28:57  MyBookWorld daemon.alert wixEvent[3311]: Volume Status - Volume 'DataVolume' doesn't exist.
 08/05 14:28:57  MyBookWorld daemon.warn wixEvent[3311]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
 08/05 14:28:09  MyBookWorld daemon.info init: Starting pid 3968, console /dev/ttyS0: '/sbin/getty'
 08/05 14:28:09  MyBookWorld daemon.info init: Starting pid 3967, console /dev/null: '/bin/touch'
 08/05 14:28:09  MyBookWorld daemon.info wixEvent[3311]: System Startup - System startup.
 08/05 14:28:09  MyBookWorld syslog.info miocrawler: === Walking directory done.
 08/05 14:28:09  MyBookWorld syslog.info miocrawler: === mc_trans_updater_init() ...Done.
 08/05 14:28:08  MyBookWorld syslog.info miocrawler: mc_trans_updater_init() ...
 08/05 14:28:08  MyBookWorld syslog.info miocrawler: === inotify init done.
 08/05 14:28:08  MyBookWorld syslog.info miocrawler: === mcUtilsInit() Done.
 08/05 14:28:06  MyBookWorld syslog.info miocrawler: mcUtilsInit() Creating free queue pool
 08/05 14:28:05  MyBookWorld syslog.info miocrawler: === mc_db_init ...Done.
 08/05 14:28:05  MyBookWorld syslog.info miocrawler: ++++++++ database exists: ret = 0
 08/05 14:28:05  MyBookWorld syslog.info miocrawler: mc_db_init ...
 08/05 14:28:05  MyBookWorld syslog.info miocrawler: +++++++++++++++ START OF ./miocrawler at 2011:08:05 - 14:28:04 [Version 01.09.00.96] ++++++++++++++
 08/05 14:27:59  MyBookWorld daemon.info wixEvent[3311]: Network IP Address - NIC 1 use static IP address 192.168.5.187
 08/05 14:27:59  MyBookWorld daemon.info wixEvent[3311]: Network Link - NIC 1 link is up 1000 Mbps full duplex.
 08/05 14:27:52  MyBookWorld daemon.warn wixEvent[3311]: Network Link - NIC 1 link is down.
 08/05 14:27:52  MyBookWorld syslog.info syslogd started: BusyBox v1.1.1
 08/05 14:26:09  MyBookWorld syslog.info System log daemon exiting.
 08/05 14:26:08  MyBookWorld daemon.info init: Starting pid 4278, console /dev/null: '/usr/bin/killall'
 08/05 14:26:04  MyBookWorld daemon.warn wixEvent[3322]: System Reboot - System will reboot.
 08/05 14:23:53  MyBookWorld daemon.alert wixEvent[3322]: Volume Status - Volume 'DataVolume' doesn't exist.
 08/05 14:23:52  MyBookWorld daemon.warn wixEvent[3322]: Media Server - Media Server cannot find the path to one or more of the default folders: /Public/Shared Music, /Public/Shared Pictures or /Public/Shared Videos. Please verify that these folders have not been removed or that the names have not been changed.
 08/05 14:23:06  MyBookWorld daemon.info wixEvent[3322]: System Startup - System startup.

Hi all,

Thanks for the great posts.  Unfortunately, I too have encountered this error, and after reading the posts (wish I had before I purchased!) I am looking for a way to get my data off; then I am going to send this crappy device back.

I have tried to follow the instructions, but ran into some difficulties.

First, here is what mdadm --examine showed:

~ $ mdadm --examine /dev/sd[abcd]4
mdadm: No md superblock detected on /dev/sda4.
/dev/sdb4:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 3cb68146:6a9c54c6:1dc5ce53:4bda1c76
  Creation Time : Fri Jun 24 18:30:30 2011
     Raid Level : raid5
  Used Dev Size : 975146112 (929.97 GiB 998.55 GB)
     Array Size : 2925438336 (2789.92 GiB 2995.65 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Tue Aug 16 18:58:45 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 909dce20 - correct
         Events : 496939

         Layout : left-symmetric
     Chunk Size : 64K

      Number Major Minor RaidDevice State
this 1 8 20 1 active sync /dev/sdb4

   0 0 8 4 0 active sync
   1 1 8 20 1 active sync /dev/sdb4
   2 2 8 36 2 active sync /dev/sdc4
   3 3 8 52 3 active sync /dev/sdd4
/dev/sdc4:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 3cb68146:6a9c54c6:1dc5ce53:4bda1c76
  Creation Time : Fri Jun 24 18:30:30 2011
     Raid Level : raid5
  Used Dev Size : 975146112 (929.97 GiB 998.55 GB)
     Array Size : 2925438336 (2789.92 GiB 2995.65 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Tue Aug 16 18:58:45 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 909dce32 - correct
         Events : 496939

         Layout : left-symmetric
     Chunk Size : 64K

      Number Major Minor RaidDevice State
this 2 8 36 2 active sync /dev/sdc4

   0 0 8 4 0 active sync
   1 1 8 20 1 active sync /dev/sdb4
   2 2 8 36 2 active sync /dev/sdc4
   3 3 8 52 3 active sync /dev/sdd4
/dev/sdd4:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 3cb68146:6a9c54c6:1dc5ce53:4bda1c76
  Creation Time : Fri Jun 24 18:30:30 2011
     Raid Level : raid5
  Used Dev Size : 975146112 (929.97 GiB 998.55 GB)
     Array Size : 2925438336 (2789.92 GiB 2995.65 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 2

    Update Time : Tue Aug 16 18:58:45 2011
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 90963919 - correct
         Events : 496940

         Layout : left-symmetric
     Chunk Size : 64K

      Number Major Minor RaidDevice State
this 3 8 52 3 active sync /dev/sdd4

   0 0 8 4 0 active sync
   1 1 8 20 1 active sync /dev/sdb4
   2 2 8 36 2 active sync /dev/sdc4
   3 3 8 52 3 active sync /dev/sdd4

This is what happened when I tried to assemble:

~ $ mdadm --assemble /dev/md2 --force /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
mdadm: no recogniseable superblock on /dev/sda4
mdadm: /dev/sda4 has no superblock - assembly aborted

So then I thought I would try to assemble without the ‘bad’ one:

~ $ mdadm --assemble /dev/md2 --force /dev/sdb4 /dev/sdc4 /dev/sdd4
mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument

But it wouldn’t run. :(

This is what the cat /proc/mdstat output showed:

~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
      1044160 blocks [4/4] [UUUU]

md2 : inactive sdb4[1] sdd4[3] sdc4[2]
      2926038720 blocks
md0 : active raid1 sdd1[2] sdc1[3] sdb1[1] sda1[0]
      208768 blocks [4/4] [UUUU]

unused devices: <none>

And the mdadm -D showed:

~ $ mdadm -D /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Fri Jun 24 18:30:30 2011
     Raid Level : raid5
  Used Dev Size : 975146112 (929.97 GiB 998.55 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Aug 16 18:58:45 2011
          State : active, degraded, Not Started
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 3cb68146:6a9c54c6:1dc5ce53:4bda1c76
         Events : 0.496939

    Number Major Minor RaidDevice State
       0 0 0 0 removed
       1 8 20 1 active sync /dev/sdb4
       2 8 36 2 active sync /dev/sdc4
       3 8 52 3 active sync /dev/sdd4

 So I tried to add the ‘bad’ one:

~ $ mdadm /dev/md2 --re-add /dev/sda4
mdadm: hot add failed for /dev/sda4: No such device

 With no luck.

pvdisplay shows nothing (it just goes back to a command prompt).

If I try to do a pvcreate, it fails.

Here is the dmesg output:

ufsd: module license 'Commercial product' taints kernel.
ufsd: driver loaded at bf003000 built on Oct 29 2009 11:35:46
UFSD version 7.07 (Oct 29 2009, 11:15:24)
NTFS read/write support included
Hfs+/HfsX read/write support included
$Id: ufsdvfs.c,v 1.235.2.1 2009/09/24 13:04:55 shura Exp $ (LBD=ON)
Attempt to mount non-MTD device "/dev/md2" as JFFS2
FAT: Unrecognized mount option "usrquota" or missing value
Trustees: Building new trustee hash
Trustees: Added element to trustee hash: j 2, name : /Public
Trustees: Added element to trustee hash: j 13, name : /Download
Trustees: Added element to trustee hash: j 10, name : /Chris
Trustees: Added element to trustee hash: j 19, name : /Tressa
Trustees: Added element to trustee hash: j 0, name : /Natasha
Trustees: Added element to trustee hash: j 14, name : /Hunter
Trustees: Added element to trustee hash: j 5, name : /.timemachine
Trustees: Added element to trustee hash: j 1, name : /shares
Trustees: Added element to trustee hash: j 11, name : /Configuration
md: bind<sdb1>
RAID1 conf printout:
 --- wd:1 rd:4
 disk 0, wo:0, o:1, dev:sda1
 disk 1, wo:1, o:1, dev:sdb1
..............................<6>md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwith (but not more than 200000 KB/sec) for reconstruction.
md: using 128k window, over a total of 208768 blocks.
md: bind<sdc1>
md: bind<sdd1>
md: md0: sync done.
RAID1 conf printout:
 --- wd:2 rd:4
 disk 0, wo:0, o:1, dev:sda1
 disk 1, wo:0, o:1, dev:sdb1
RAID1 conf printout:
 --- wd:2 rd:4
 disk 0, wo:0, o:1, dev:sda1
 disk 1, wo:0, o:1, dev:sdb1
 disk 2, wo:1, o:1, dev:sdd1
RAID1 conf printout:
 --- wd:2 rd:4
 disk 0, wo:0, o:1, dev:sda1
 disk 1, wo:0, o:1, dev:sdb1
 disk 2, wo:1, o:1, dev:sdd1
 disk 3, wo:1, o:1, dev:sdc1
..............................<6>md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwith (but not more than 200000 KB/sec) for reconstruction.
md: using 128k window, over a total of 208768 blocks.
md: md0: sync done.
RAID1 conf printout:
 --- wd:4 rd:4
 disk 0, wo:0, o:1, dev:sda1
 disk 1, wo:0, o:1, dev:sdb1
 disk 2, wo:0, o:1, dev:sdd1
 disk 3, wo:0, o:1, dev:sdc1
md: md2 stopped.
md: bind<sda4>
md: bind<sdc4>
md: bind<sdd4>
md: bind<sdb4>
md: md127 stopped.
md: md127 stopped.
md: md2 stopped.
md: unbind<sdb4>
md: export_rdev(sdb4)
md: unbind<sdd4>
md: export_rdev(sdd4)
md: unbind<sdc4>
md: export_rdev(sdc4)
md: unbind<sda4>
md: export_rdev(sda4)
md: md2 stopped.
md: md2 stopped.
md: md2 stopped.
md: bind<sdc4>
md: bind<sdd4>
md: bind<sdb4>
md: md2: raid array is not clean -- starting background reconstruction
raid5: device sdb4 operational as raid disk 1
raid5: device sdd4 operational as raid disk 3
raid5: device sdc4 operational as raid disk 2
raid5: cannot start dirty degraded array for md2
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
raid5: failed to run raid set md2
md: pers->run() failed ...
md: md2 stopped.
md: unbind<sdb4>
md: export_rdev(sdb4)
md: unbind<sdd4>
md: export_rdev(sdd4)
md: unbind<sdc4>
md: export_rdev(sdc4)
md: md2 stopped.
md: md2 stopped.
md: md2 stopped.
md: bind<sdc4>
md: bind<sdd4>
md: bind<sdb4>
md: md2 stopped.
md: unbind<sdb4>
md: export_rdev(sdb4)
md: unbind<sdd4>
md: export_rdev(sdd4)
md: unbind<sdc4>
md: export_rdev(sdc4)
md: bind<sdc4>
md: bind<sdd4>
md: bind<sdb4>
md: md2: raid array is not clean -- starting background reconstruction
raid5: device sdb4 operational as raid disk 1
raid5: device sdd4 operational as raid disk 3
raid5: device sdc4 operational as raid disk 2
raid5: cannot start dirty degraded array for md2
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
raid5: failed to run raid set md2
md: pers->run() failed ...
md: md2 stopped.
md: unbind<sdb4>
md: export_rdev(sdb4)
md: unbind<sdd4>
md: export_rdev(sdd4)
md: unbind<sdc4>
md: export_rdev(sdc4)
md: md2 stopped.
md: bind<sdc4>
md: bind<sdd4>
md: bind<sdb4>
md: md2: raid array is not clean -- starting background reconstruction
raid5: device sdb4 operational as raid disk 1
raid5: device sdd4 operational as raid disk 3
raid5: device sdc4 operational as raid disk 2
raid5: cannot start dirty degraded array for md2
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
raid5: failed to run raid set md2
md: pers->run() failed ...
md: md2 stopped.
md: unbind<sdb4>
md: export_rdev(sdb4)
md: unbind<sdd4>
md: export_rdev(sdd4)
md: unbind<sdc4>
md: export_rdev(sdc4)
md: md2 stopped.
md: bind<sdc4>
md: bind<sdd4>
md: bind<sdb4>
md: md2: raid array is not clean -- starting background reconstruction
raid5: device sdb4 operational as raid disk 1
raid5: device sdd4 operational as raid disk 3
raid5: device sdc4 operational as raid disk 2
raid5: cannot start dirty degraded array for md2
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
raid5: failed to run raid set md2
md: pers->run() failed ...
md: md2 stopped.
md: unbind<sdb4>
md: export_rdev(sdb4)
md: unbind<sdd4>
md: export_rdev(sdd4)
md: unbind<sdc4>
md: export_rdev(sdc4)
md: md2 stopped.
md: md2 stopped.
md: bind<sdc4>
md: bind<sdd4>
md: bind<sdb4>
md: md2: raid array is not clean -- starting background reconstruction
raid5: device sdb4 operational as raid disk 1
raid5: device sdd4 operational as raid disk 3
raid5: device sdc4 operational as raid disk 2
raid5: cannot start dirty degraded array for md2
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
raid5: failed to run raid set md2
md: pers->run() failed ...
md: md2 stopped.
md: unbind<sdb4>
md: export_rdev(sdb4)
md: unbind<sdd4>
md: export_rdev(sdd4)
md: unbind<sdc4>
md: export_rdev(sdc4)
md: md2 stopped.
md: bind<sdc4>
md: bind<sdd4>
md: bind<sdb4>
md: md2: raid array is not clean -- starting background reconstruction
raid5: device sdb4 operational as raid disk 1
raid5: device sdd4 operational as raid disk 3
raid5: device sdc4 operational as raid disk 2
raid5: cannot start dirty degraded array for md2
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
raid5: failed to run raid set md2
md: pers->run() failed ...
raid5: device sdb4 operational as raid disk 1
raid5: device sdd4 operational as raid disk 3
raid5: device sdc4 operational as raid disk 2
raid5: cannot start dirty degraded array for md2
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
raid5: failed to run raid set md2
md: pers->run() failed ...
md: md2 stopped.
md: unbind<sdb4>
md: export_rdev(sdb4)
md: unbind<sdd4>
md: export_rdev(sdd4)
md: unbind<sdc4>
md: export_rdev(sdc4)
md: md2 stopped.
md: md2 stopped.
md: bind<sdc4>
md: bind<sdd4>
md: bind<sdb4>
md: md2: raid array is not clean -- starting background reconstruction
raid5: device sdb4 operational as raid disk 1
raid5: device sdd4 operational as raid disk 3
raid5: device sdc4 operational as raid disk 2
raid5: cannot start dirty degraded array for md2
RAID5 conf printout:
 --- rd:4 wd:3 fd:1
 disk 1, o:1, dev:sdb4
 disk 2, o:1, dev:sdc4
 disk 3, o:1, dev:sdd4
raid5: failed to run raid set md2
md: pers->run() failed ...

I don’t understand why I cannot get the array up with 3 disks and get my data off.  Isn’t RAID 5 supposed to let you recover if you lose a disk?

Please help!
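(For anyone else hitting this wall: the dmesg lines above - “raid5: cannot start dirty degraded array for md2” - are the real reason the assemble fails.  By default the kernel refuses to start an array that is both degraded and not marked clean, which is exactly the state a 4-disk RAID 5 ends up in when one member drops out mid-write.  A commonly suggested workaround, untested here and only worth trying once you accept the small risk of inconsistent data, is to stop the half-assembled array and force it to run:

~ $ mdadm --stop /dev/md2
~ $ mdadm --assemble --force --run /dev/md2 /dev/sdb4 /dev/sdc4 /dev/sdd4

If the kernel still refuses, some people also set the md start_dirty_degraded parameter first, assuming that path exists on this firmware’s kernel:

~ $ echo 1 > /sys/module/md_mod/parameters/start_dirty_degraded

If the array comes up, mount it read-only and copy the data off before doing anything else.)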

Hi,

I just got this error too.

I’m not using a ShareSpace but a My Book World Edition 2.

I have SSH access and tried searching for my files via WinSCP.

There is no RAID 5, just a stripe (RAID 0).

Is there any way to save my data?

Hi,

I hope you have already solved your problem.
I had the same problem and did not find a solution in the community, so I’ll post the solution I found to help anyone who runs into the same thing.

To view the “Filesystem volume name”, log in over SSH and run:
$ tune2fs -l /dev/md2

This shows the superblock, including the volume label.
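(If you only want to check the current label rather than read the whole superblock dump, something like this should also work, assuming grep is available on the box:)

$ tune2fs -l /dev/md2 | grep -i 'volume name'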

To assign the volume name, do:
$ tune2fs -L DataVolume /dev/md2

Restart the system:
$ reboot

The “Volume ‘DataVolume’ doesn’t exist” error should be gone.

Hi,

I’m experiencing a similar problem; I got the same “DataVolume doesn’t exist!” error,

but I can’t get it to work with your solutions.

I’ve tried the commands on the 2nd page by fibreiv with no result; the solution by orlmor didn’t work for me either.

Solution by fibreiv:

~ $ mdadm -D /dev/md2
mdadm: md device /dev/md2 does not appear to be active.

If I issue the next commands, they give errors.

Solution by orlmor:

tune2fs: No valid superblock on /dev/md2

Also, if I issue:

~ $ pvdisplay
~ $
no output

~ $ vgdisplay
  No volume groups found

~ $ lvdisplay
  No volume groups found
~ $

Any ideas on how to solve this?

…you can’t just re-assemble the disks, you have to recreate - follow the steps for pvcreate etc. as well.

This is my result:

/ # mdadm -D /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Fri Nov 19 18:38:35 2010
Raid Level : raid0
Array Size : 3900549760 (3719.85 GiB 3994.16 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Fri Dec 30 18:52:05 2011
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Chunk Size : 64K

UUID : c89d95b9:dce5f7b9:23acd969:c1d9f3b5
Events : 0.27

Number Major Minor RaidDevice State
0 8 4 0 active sync /dev/sda4
1 8 20 1 active sync /dev/sdb4

I still cannot access my data.

DataVolume Usage Failed

Did you ever resolve this issue (queimporta)? I have the same problem, and the two suggested resolutions yielded the same results you mentioned…

Trying to save family pictures and home videos. I thought RAID 5 was safe… Not happy with WD products. I had similar issues with a World Book previously (that one wasn’t RAID).

Thanks all for sharing…

@slg23 it would be useful to know which steps you’ve followed and what happened at each.

Which suggested steps did you follow?

I have the exact same problem, down to the sequence.

rik_t81 wrote:

Got my DataVolume mounted and visible in Windows and got my files back thanks to this guide. I’m so thankful and relieved, as I thought I’d lost over 10 years of family videos and pictures.

…that doesn’t help much - can you post your results and command outputs? What happened at each step? What is your system config? Firmware version and so on?

If your data really means that much to you, the overwhelming advice is to take it to a professional - speaking mainly for myself, we’re mostly amateurs at this stuff who have had varying degrees of success.
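As a rough starting point - device names differ between the one-, two- and four-bay models, so treat this only as a sketch - the outputs people usually need to see here are:

~ $ cat /proc/mdstat
~ $ mdadm -D /dev/md2
~ $ mdadm --examine /dev/sd[abcd]4
~ $ dmesg | tail -n 100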

Hi all,

Any help would be most appreciated!  I have the same issue (DataVolume doesn’t exist) and I’ve tried hard to follow the suggestions, but I know nothing about Linux.  I have 4x2 TB drives in a RAID 5 array.

~ $ mdadm --assemble /dev/md2 /dev/sda4 /dev/sdb4 /dev/sdc4 /dev/sdd4
mdadm: /dev/md2 assembled from 3 drives - not enough to start the array while not clean - consider --force.
~ $ mdadm --assemble -f /dev/md2 /dev/sd[abcd]4
mdadm: /dev/md2 has been started with 3 drives (out of 4).

~ $ mdadm --examine /dev/sd[abcd]4

Number Major Minor RaidDevice State
this 3 8 52 3 active sync /dev/sdd4

0 0 8 4 0 active sync /dev/sda4
1 1 8 20 1 active sync /dev/sdb4
2 2 0 0 2 faulty removed
3 3 8 52 3 active sync /dev/sdd4

So I determined /dev/sdc4 was the issue.  I then ran

~ $ smartctl -i /dev/sdc4

to determine the serial number and swapped out that drive with a brand new drive.

I was hoping the WD would be smart enough to rebuild itself at power-up, but sadly not.

So I then tried…

~ $ mdadm --assemble -f /dev/md2 /dev/sd[abcd]4
mdadm: cannot open device /dev/sdc4: No such device or address
mdadm: /dev/sdc4 has no superblock - assembly aborted

and now I have no idea what to do!  Can someone please assist?

Thanks.

…try to assemble without drive c, i.e.

mdadm --assemble -f /dev/md2 /dev/sd[abd]4

Then you can add the disk later.
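Roughly, once the degraded array is up and your data is safe, adding the replacement back would look something like the lines below. This is only a sketch: it assumes sfdisk is available on the firmware to clone the partition layout from a known-good disk, and that the new disk really is sdc - double-check the device names before running anything.

~ $ sfdisk -d /dev/sda | sfdisk /dev/sdc
~ $ mdadm /dev/md2 --add /dev/sdc4

The array should then start rebuilding onto the new disk; you can watch progress with cat /proc/mdstat.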

Hi,

The last thing I’d run was

~ $ mdadm --assemble -f /dev/md2 /dev/sd[abcd]4

mdadm: cannot open device /dev/sdc4: No such device or address

mdadm: /dev/sdc4 has no superblock - assembly aborted

Do you mean I should remove the new drive and run the same command, rebuild using 3 drives, then add the fourth back in?

Thanks.

…just try and rebuild as you are - i.e. just [abd]

3 disks should be enough to rebuild the array.

Thanks Footleg, easy enough!

I’m at work at the moment but will try this evening and let you know how it goes.

Cheers.

…no worries - good luck. I was able to rebuild mine with 3 disks, mount the volume, and get my data off. Then I zeroed each disk in turn, popped them back in and let the WD rebuild itself clean.
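For anyone following along, “mount the volume” on the four-bay units generally means activating the LVM volume that sits on top of md2 and mounting it (on some two-bay models the ext3 filesystem sits directly on md2 instead). A rough sketch - the volume group and logical volume names below are placeholders, lvdisplay will show the real ones:

~ $ vgscan
~ $ vgchange -ay
~ $ lvdisplay
~ $ mount -t ext3 -o ro /dev/<vg-name>/<lv-name> /DataVolume

Mounting read-only first is just a precaution while you copy the data off over the network.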

Hi Footleg (and everyone else),

This is probably a dumb question, but before I continue, does this look correct?  I’m really hoping to save my data if at all possible!

~ $ mdadm --assemble -f /dev/md2 /dev/sd[abd]4
mdadm: /dev/md2 has been started with 3 drives (out of 4).
~ $ pvcreate /dev/md2
/dev/sdc3: open failed: No such device or address
Can’t initialize physical volume “/dev/md2” of volume group “vg0” without -ff
~ $ pvcreate -ff /dev/md2
/dev/sdc3: open failed: No such device or address
Really INITIALIZE physical volume “/dev/md2” of volume group “vg0” [y/n]?

Assuming I continue from here, any chance you can confirm my next steps?  I’m thinking…

pvcreate /dev/md2

vgcreate lvmr /dev/md2

lvcreate -l 714329 lvmr -n lvm0  (but not 714329, as I have 4x2 TB, so I’d need to find the correct value)

mkfs.ext3 /dev/lvmr/lvm0

mount -t ext3 /dev/lvmr/lvm0 /DataVolume -o rw,noatime

and then hopefully I can at least get my data off?

Thanks again for your help!