[GUIDE] Debrick a MyBookLive DUO

The first thing I did when I got my MBLD 6TB was make it RAID1.

My question: Does anyone out here know of a program that will read the data partition directly and allow copying/recovery of files directly to another drive via Windows?

If not, why isn’t there such a program?

For the last 8 hours, I’ve been trying to mount one of the MBLD drives using an Ubuntu boot disk (9.04 first, then 13.04) with no luck. At least 13.04 let me get networking up and gdisk installed, but I can’t find the right verbiage to mount a GPT partition from a failed RAID 1 (which I can only assume is software RAID). Therefore, I can’t recover anything!
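For the record, the sort of incantation I was hunting for looks roughly like this. This is only a sketch: the device name (/dev/sdb) and partition number (4) are assumptions, so verify yours with lsblk or gdisk -l first.

```shell
# Install mdadm in the live session (needs network access)
sudo apt-get install -y mdadm

# Assemble the data array in degraded mode from the single member.
# /dev/sdb4 is an assumption; confirm the device with lsblk first.
sudo mdadm --assemble --run /dev/md127 /dev/sdb4

# Try a read-only mount. Note: the MBL data volume is ext4 with a
# 64 KiB block size, which a stock x86 kernel cannot mount, so this
# may still fail with a "bad superblock" error; debugfs (discussed
# later in the thread) can read such a volume anyway.
sudo mkdir -p /mnt/data
sudo mount -o ro /dev/md127 /mnt/data
```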

Why don’t I want to go through the GUIDE 1 steps?

Because I don’t ever want to put this incredibly under-engineered piece of **bleep** back into operation.

What REALLY ticks me off is that WD has the resources to make such a program, but they won’t. Or maybe they can’t, and maybe I don’t want it anyway, because they obviously can’t engineer something that works any better than this piece of **bleep**.

I have had nothing but trouble with it since I bought it, and all I want is to drop-kick it to the freaking moon. I will never ever waste my money on another WD product, especially since I see SO many problems out here and WD does NOTHING! WD support is completely and utterly useless, as proven by the number of people out here trying to find fixes on their own. WD doesn’t even seem to be the least bit embarrassed by this fact, and that is truly pathetic.

Hi nfodiz,

I will outline my situation as best I can from memory.

  1. I have an MBL Duo 4TB (2x2TB) which was set up in RAID 1.

  2. I obtained two 3 TB RED drives to put into the MBLD

  3. I first followed “[GUIDE] Debrick MyBookLive v2.0”, Guide 2 (3 TB image), on one of the drives.

  4. Somewhere around here I realized that I was working from a guide for the single-drive MBL. Since I was already in progress, I went ahead with the image and dropped the drive back into the MBLD case by itself.

  5. Ran Quick Factory Restore and MBLD rebooted with the single drive functioning correctly: Status Good

  6. At this point I wasn’t sure what I could get away with, so I tried installing the unallocated Drive B and running a Quick Restore. I don’t remember the exact response here, except that the unallocated drive did not come online.

  7. Pulled Drive B and installed the 3 TB image on it. Reinstalled it in the MBLD, ran Quick Restore.

  8. Unit rebooted and came up with a green light. Logged into the Dashboard and tried to run the Firmware Update that was being offered. This failed, with a message that I should install the second drive.

  9. At this point, the status bar in the Dashboard was showing green, with 3 GB used out of 3 TB. The Storage section of the Dashboard was not present.

  10. Decided that the 3 TB image had locked in the single drive mode and I had to start over.

  11. Went to the MBL Duo Guide 2

  12. With the drive connected via SATA to a PC booted from the System Rescue CD, deleted all partitions with GParted.

  13. “mdadm -S /dev/md0” reported “stopped /dev/md0”. The next two commands reported that the targets did not exist.

  14. I now rebooted and tried the command sequence again to see if I had messed something up, but the results were the same. Ran the script. Errors were reported regarding a non-standard number of (I think) blocks; I can’t be sure of the exact messages at this point. However, in the end, the script reported successful completion.

  15. Repeated sequence with Drive B, with same results.

  16. Put both drives back in case, booted unit, ran quick restore.

  17. SSH’d into the drive, ran mkswap, got a missing-device complaint, ran mdadm --create, etc.

  18. After reboot, ran the swapon -s command. Got the headers (Filename, Type, etc.) but no actual data. No SMART complaint.

  19. After reboot, unit was reporting good status, but Storage tab is still missing.

  20. Attempted the Firmware Update, but it failed again, saying both drives must be installed (they were). Note: the MBLD offered the update several times in the course of these efforts, but I refused it until the end.
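For anyone following along, steps 17 and 18 boil down to recreating the small swap mirror by hand. A sketch, assuming the 513 MB swap partitions are sda3 and sdb3 (that matches the partition table in the console output below, but verify on your own unit):

```shell
# Recreate the swap mirror from the two 513 MB partitions.
# sda3/sdb3 are assumptions; confirm with: cat /proc/partitions
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

mkswap /dev/md2    # write a swap signature on the new array
swapon /dev/md2    # enable it immediately (a reboot also works)
swapon -s          # /dev/md2 should now appear under Filename
```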

(screenshot: firmware_update02.PNG)

This is pretty much the current situation. If I have to, I can put the original, unaltered drives back in and use the unit that way, but it seems that I’ve just missed some fine point in setting up the RED drives. It is working, but only as a single-drive unit. Any suggestions would be greatly appreciated.

Thanks!

EDIT: Oh yeah. Here’s the output of the mount command:

MyBookLive:~# mount
/dev/md0 on / type ext3 (rw,noatime,nodiratime,barrier=1)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755,size=5M)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,size=5M)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
tmpfs on /tmp type tmpfs (rw,size=100M)
/var/log on /var/log.hdd type none (rw,bind)
ramlog-tmpfs on /var/log type tmpfs (rw,size=20M)
/dev/sda4 on /DataVolume type ext4 (rw,noatime,nodiratime)
/DataVolume/cache on /CacheVolume type none (rw,bind)
/DataVolume/shares on /shares type none (rw,bind)
/DataVolume/shares on /nfs type none (rw,bind)
none on /sys/kernel/security type securityfs (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
MyBookLive:~#

Another EDIT:

@nfodiz: I’ve been reading through your exchange with Herokolco and printing out important parts so I can follow the sequence while I’m booted from the Rescue CD. I’m going to have another go at it with the added information once I think I’ve got the latest conclusions down.

Another thought: Something I read in the thread made me remember that I made a Config file and stored it on my computer. I’m going to see if I can restore the ‘Current Config’ and get any good results. Otherwise, I’ll run through the script procedure again.

Well, the Config restore did not seem to make a significant difference. I still don’t have the Storage section in Dashboard. I’m running a Factory quick restore just to make sure.

EDIT01 : Factory Quick Restore made no difference. I still have no Storage tab. MBL Duo does pass quick diagnostic.

EDIT02: Completed the script procedure again. Did not wipe the existing partitions in parted beforehand. Console output is inserted below. Same complaints about blocks being too large, but it proceeded as before.

When drives were loaded back into MBL Duo and booted up, Dashboard came up with the Storage tab. Woohoo! Major sense of relief for that. Ran Quick factory restore. Drives are now correctly recognized by model number and size. They came up in Spanned mode.

SSH:

swapon -s returns:

MyBookLiveDuo:~# swapon -s
Filename                                Type            Size    Used    Priority

Running:

MyBookLiveDuo:~# mkswap /dev/md2
mkswap: /dev/md2: warning: don’t erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 500608 KiB
no label, UUID=a5e736b5-7762-439b-bf66-4fd4691df6a7

Now I get:

MyBookLiveDuo:~# swapon -s
Filename                                Type            Size    Used    Priority
/dev/md2                                partition       500544  3584    -1

It appears that everything is now in order! I am going to run the firmware update and then reset to RAID 1.

Update was successful. Array is now rebuilding!

@nfodiz: KUDOS and MANY THANKS for all your work on this Guide! Once I got all the details in order in my mind, things worked out GREAT! I would mark it as Accepted Solution if I could find the way to do it. In any case, BRAVO!

Console output from script running:

root@sysresccd /root % mkdir /mnt/usb

root@sysresccd /root % mount -t vfat /dev/sdb1 /mnt/usb

root@sysresccd /root % cd /mnt/usb

root@sysresccd /mnt/usb % mdadm -S /dev/md0

mdadm: stopped /dev/md0

root@sysresccd /mnt/usb % mdadm -S /dev/md2

mdadm: error opening /dev/md2: No such file or directory

root@sysresccd /mnt/usb % mdadm -S /dev/md3

mdadm: error opening /dev/md3: No such file or directory

root@sysresccd /mnt/usb % ./debrick.sh rootfs.img /dev/sda destroy

********************** DISK           **********************

script will use the following disk:

Model: ATA WDC WD30EFRX-68A (scsi)

Disk /dev/sda: 3001GB

Sector size (logical/physical): 512B/4096B

Partition Table: gpt

Disk Flags:

Number  Start   End     Size    File system  Name     Flags

 3      15.7MB  528MB   513MB                primary  raid

 1      528MB   2576MB  2048MB  ext3         primary  raid

 2      2576MB  4624MB  2048MB  ext3         primary  raid

 4      4624MB  3001GB  2996GB  ext4         primary  raid

 is this REALLY the disk you want? [y] y

********************** IMAGE          **********************

********************** IMPLEMENTATION **********************

everything is now prepared!

device:       /dev/sda

image_img:    rootfs.img

destroy:      true

this is the point of no return, continue? [y] y

32+0 records in

32+0 records out

33554432 bytes (34 MB) copied, 0.248524 s, 135 MB/s

32+0 records in

32+0 records out

33554432 bytes (34 MB) copied, 0.23149 s, 145 MB/s

32+0 records in

32+0 records out

33554432 bytes (34 MB) copied, 0.243844 s, 138 MB/s

32+0 records in

32+0 records out

33554432 bytes (34 MB) copied, 0.24067 s, 139 MB/s

Testing with pattern 0x00: done                                                

Reading and comparing: done                                                 

GNU Parted 3.1

Using /dev/sda

Welcome to GNU Parted! Type ‘help’ to view a list of commands.

(parted) mklabel gpt                                                     

(parted) mkpart primary 528M  2576M                                       

(parted) mkpart primary 2576M 4624M                                      

(parted) mkpart primary 16M 528M                                         

(parted) mkpart primary 4624M -1M                                        

(parted) set 1 raid on                                                    

(parted) set 2 raid on                                                   

(parted) set 3 raid on                                                   

(parted) set 4 raid on                                                    

(parted) quit                                                            

Information: You may need to update /etc/fstab.

Warning: blocksize 65536 not usable on most systems.                     

mke2fs 1.42.7 (21-Jan-2013)

mkfs.ext4: 65536-byte blocks too big for system (max 4096)

Proceed anyway? (y,n)

Warning: 65536-byte blocks too big for system (max 4096), forced to continue

Filesystem label=

OS type: Linux

Block size=65536 (log=6)

Fragment size=65536 (log=6)

Stride=0 blocks, Stripe width=0 blocks

45565440 inodes, 45714840 blocks

0 blocks (0.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=133677056

698 block groups

65528 blocks per group, 65528 fragments per group

65280 inodes per group

Superblock backups stored on blocks:

      65528, 196584, 327640, 458696, 589752, 1638200, 1769256, 3210872,

      5307768, 8191000, 15923304, 22476104, 40955000

Allocating group tables: done                           

Writing inode tables: done                           

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done  

destroying was done, would you like to continue with installation? [y]

mdadm: size set to 1999808K

mdadm: array /dev/md0 started.

mke2fs 1.42.7 (21-Jan-2013)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

125184 inodes, 499952 blocks

24997 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=515899392

16 block groups

32768 blocks per group, 32768 fragments per group

7824 inodes per group

Superblock backups stored on blocks:

      32768, 98304, 163840, 229376, 294912

Checking for bad blocks (read-only test): done (0/0/0 errors)

Allocating group tables: done                           

Writing inode tables: done                           

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

mdadm: added /dev/sda2

synchronize raid… done

copying image to disk…

3999616+0 records in

3999616+0 records out

2047803392 bytes (2.0 GB) copied, 92.9394 s, 22.0 MB/s

mdadm: stopped /dev/md0

all done! device should be debricked!

Rebuild took ~8 hours. Drive was essentially empty.

The mount command now returns:

MyBookLiveDuo:~# mount
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755,size=5M)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,size=5M)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
rootfs on / type rootfs (rw,noatime,nodiratime,barrier=1)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
tmpfs on /tmp type tmpfs (rw,size=100M)
/var/log on /var/log.hdd type none (rw,bind)
ramlog-tmpfs on /var/log type tmpfs (rw,size=20M)
none on /sys/kernel/security type securityfs (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
//192.168.1.100/Safe Point on /media/WDSAFE/75cf216d43c35eea64fc2cca520736c2 type cifs (rw)
/dev/md3 on /DataVolume type ext4 (rw,noatime)
/DataVolume/shares on /shares type none (rw,bind)
/DataVolume/shares on /nfs type none (rw,bind)
/DataVolume/cache on /CacheVolume type none (rw,bind)
MyBookLiveDuo:~#

I didn’t catch this thread until today but I’m glad you figured it out.


Thanks again for the outstanding Guides!

I am attempting to recover my MBLD from the dreaded yellow light. I followed Guide 1 and still cannot get past it. My MBLD is in RAID 0, and I am wondering if I need to tweak something to get the debricking process to work right.

When I run the debrick script, the bad-blocks check ends up showing something like (100/0/0 errors) — I forget the exact count. Does that mean it found 100+ errors?

I really appreciate any input anyone can give me!

Most likely the hard drive in question is about to bite the dust. (The three numbers badblocks prints are the read/write/corruption error counts, so a nonzero first number means failed reads.)

http://en.wikipedia.org/wiki/Badblocks


Dang! I just bought the thing a couple of months ago too. So everything I have on there is pretty much gone now, correct?

Once I put it back together and plug it in, the yellow light stays on, but I can get to the login screen by typing the MBLD IP into my browser. It prompts me for the admin password, and I am not sure what the default password is. I tried my previous password, but that doesn’t seem to work. Any ideas what it is?

If the MBLD had a factory restore done, the password should be blank; that is, NO password at all.

About your data: since you have a setup that can run the script procedure, you might try using that to attempt data recovery. If the Rescue CD you used for the script does not help, you could try some of the utilities on Hiren’s BootCD. You can get an ISO image to burn the CD here:

http://www.hirensbootcd.org/download/

Be aware that it is several hundred MB.

The CD can boot into mini-XP, DOS, or a Linux Live environment. I suggest the last one, in default mode, to start out. There are menus to get booted up and to select utilities once you do. This disc has an amazing number of recovery and other apps on it. I was able to recover a lot of stuff from a nearly totally failed 2 TB Green drive, so I have very positive feelings about this approach.


I should add that it took me more than one run of the script procedure to get the new drives working. Fortunately for me, but not for you, I did not care about the contents, and furthermore, I still had the original drives unaltered.

I’ve just read in another thread about complications with encrypted drives, too. I have to get going now, but I’ll pick this up when I can get back to it later today.


Quick question before I try this out. Do I need to have both drives connected to recover the data?

Just guessing, but I suppose you would. I’m afraid I probably had a mental lapse regarding RAID 0, swapping it with RAID 1 in my head. With a RAID 0 member missing, it’s hard to imagine a scenario in which you could get a RAID controller to even recognize the drives as being associated.

HOWEVER, it also occurs to me that MBLDs DON’T DO RAID 0. The drives are SPANNED, which is similar to JBOD: data is written to drive A until it is full, and then writing moves on to drive B. That suggests you ought to be able to hook up drive A and attempt recovery on it by itself. The same goes for drive B, if you had that much data on the array. If ANY recovery is possible at all, only a file which crossed the “span”, so that part was on A and part on B, might have problems. Even that might be overcome if you could trick HJSplit or similar into rejoining the fragments.
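If you want to try it from a Linux boot: the mount output earlier in the thread shows the spanned data volume as an mdadm device (/dev/md3), so a sketch, with the member partition names assumed, would be:

```shell
# Assumed: the big data partitions are sda4 (drive A) and sdb4
# (drive B); a spanned MBLD volume is an md concatenated array.
mdadm --assemble --run /dev/md3 /dev/sda4 /dev/sdb4

# Read-only mount attempt. The 64 KiB ext4 block size may still
# prevent a normal mount on a PC, in which case debugfs can read
# the volume instead.
mkdir -p /mnt/data
mount -o ro /dev/md3 /mnt/data
```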


I am currently booted into Hiren’s BootCD. I tried, with only drive A connected, to mount the partition with the data, but I got errors. So I tried to reassemble the RAID, but it complained about not having all of the disks. I have now added drive B, so both drives are connected to my computer, but I am still unable to get them to mount properly. I am not very good with Linux, so if anyone knows any tricks to getting this to work, I would greatly appreciate it.

In the Linux boot, lower right corner, there is an icon which pops up a long menu of programs. TestDisk is in there, among others. That said, I have not had much luck with it.

Did you actually get the A drive to mount? I had problems with that on my failing Green drive when attempting it with the drive-mounting app; I forget the name, but I think it is in the quick-launch bar at the lower left. However, up on the desktop, upper-leftish, was an icon called something like “Drive Status”. Running this and clicking on the problem drive gave errors briefly at first, but then the drive mounted. At that point I was able to copy files and folders to another drive via the file manager.


I got the array to start rebuilding… or so it says. Not sure how to check on the status. I used mdadm --assemble to join the two big data partitions from each drive as /dev/md0. Still no luck getting that to mount.
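For checking on an assembly or rebuild, the kernel’s md status file is the easiest place to look:

```shell
cat /proc/mdstat          # lists each md device, its member disks,
                          # and a progress bar while a resync runs
mdadm --detail /dev/md0   # state (clean/degraded) and rebuild percent
watch -n 60 cat /proc/mdstat   # re-poll once a minute
```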

Under the Disk Health application, Disk A is showing errors.

Sometimes just having to sit back and wait is the hardest part of all.


So far, after assembling the RAID array (even though it’s not really RAID), I found this post: http://mybookworld.wikidot.com/forum/t-375874/pull-data-from-removed-hard-drive#post-1253716

Using the debugfs command and specifying the block size, I am able to see the root directory of my drive, but that’s as far as I can get. When I try to navigate or dump my files, I get an error stating “File not found by ext2_lookup”. Now, from what I understand, the MBLD uses an ext4 filesystem, which I have no idea how to get around. Any ideas?

Edit: So I decided to try to pull the entire “Shares” folder, which was at the root, and it looks like it’s working! I ran debugfs in catastrophic mode this time, too. I hope this works…
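For anyone repeating this, the rough shape of the debugfs run (device name assumed to be the array assembled earlier; the 64 KiB block size comes from the mkfs output quoted above):

```shell
# -c enables catastrophic mode (skip reading possibly-damaged group
# descriptors); -b 65536 tells debugfs the MBL's 64 KiB block size.
debugfs -c -b 65536 /dev/md0

# Then, at the debugfs: prompt:
#   ls                            list the root directory
#   rdump /shares /mnt/recovery   recursively copy the shares tree
#                                 to an already-mounted destination
#   quit
```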


I can only offer best wishes. You have certainly been adaptable and persistent. I’m pretty much a cookbook operator, following recipes/instructions in Linux as it were, so it’s hard for me to say much that would help. It does seem that you have some encouraging possibilities. I hope they pan out.

The Kudos are really to your credit at this point!

I think I got pretty much everything backed up. I will proceed with a full wipe and reinstall later today. Thanks for putting me on the right track, Kieren!
