My error started with "failed to mount"... so I removed the case and now just want to fix the partitions

Hello to all,

Based on feedback given in this forum to other users, the consensus answer for possibly recovering/restoring the partition structure of a 3 TB WD Red NAS HD/My Cloud drive seems to be to restore from a raw image to one or more partitions. I have spent a few weeks running various Windows and Linux based recovery tools, just to understand whether any data is still present on my drive. Well, there is over 800 GB of something, but I have not seen a single one of the files I expect to be there.

Below is a screenshot from gparted. Is it obvious to anyone whether restoring to the two Linux-raid partitions (/dev/sdc1 and /dev/sdc2) is the recommended path? It's what I want to do; I just want to do it correctly. I am working primarily in Windows, but can follow and execute Linux commands, with a little help from my friends.

I have two or three forum pages bookmarked for reference, but last night, as I was attempting to download the image file, I couldn’t find an active, trustworthy-looking link.

The one file I did download was a Debian package. Is there a path for doing this in a Windows environment? If not, would any distro/live CD that picks up my USB drive be sufficient for working on the two broken partitions?

Thanks for the advice…looking forward to attempting this recovery.

Cheers,
Whalon

The best way to do what you're attempting (searching the contents of the drive and possibly “unbricking” it) is to use a Linux Live boot disc/CD or a Linux OS running on the PC. Windows does not have the proper drivers to access and open Linux-formatted partitions/drives. While there is software one can install to read/write Linux drives from Windows, it's best to work with the My Cloud bare drive from within the Linux environment. Ubuntu (http://www.ubuntu.com/download) is one popular Live version that one can download, burn to a CD/DVD, and boot the PC into the Linux Live environment.
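For example, once booted into the Live environment, the bare drive and its partitions can usually be identified with a couple of standard commands (the device names they report, such as /dev/sdb, vary per machine, so check them before acting on anything):

lsblk -f        # list block devices with filesystem types and labels
sudo parted -l  # print each disk's partition table, including GPT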

If one wants to “unbrick” their My Cloud hard drive (which may cause the loss of all user data on the drive) there are numerous unbricking methods listed in this subforum. One can use the search feature (magnifying glass icon upper right) to find those threads.

Good evening Bennor:

I used an Ubuntu 12.04 live CD to boot up and connected the SATA drive. I attempted to mount the drive with sudo mount /dev/sdb1 /mnt/usb -t auto and ran into an error: “unknown filesystem type ‘linux_raid_member’”.

I started with sudo fdisk -l, and received the error message about fdisk not supporting GPT.

Then I ran sudo parted -l and received the error “can’t have a partition outside the disk”.

I did run e2fsck /dev/sdb1 and it quickly performed Pass 1 through Pass 5… I’m not convinced this did anything.

At this point, I am just looking to get it mounted, possibly checked, and then will plan my next step.

You may need to install mdadm after booting with the Live CD.

sudo apt-get install mdadm

Then try to mount the drive.
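As a rough sketch, assuming the array assembles as /dev/md0 (on many live systems it shows up as /dev/md127 instead):

sudo apt-get install mdadm    # RAID management tools
sudo mdadm --assemble --scan  # try to auto-detect and start any RAID members
cat /proc/mdstat              # confirm which md device was assembled
sudo mkdir -p /mnt/usb
sudo mount /dev/md0 /mnt/usb  # substitute /dev/md127 if that is what appeared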

The following link may provide some pointers on how to deal with the “unknown filesystem type ‘linux_raid_member’” error.

Hard Disk Mounting - unknown filesystem type ‘linux_raid_member’

Bennor: That page you suggested was very helpful. I had to upgrade my Ubuntu Live CD USB stick to 14.04 and figure out how much scratch space I needed to upgrade all the packages. mdadm, gdisk, and the others are finally all together at my fingertips!

So here is the latest.

  1. Can’t mount yet with sudo mount /dev/sdc /mnt/usb -t auto

    This returns mount: /dev/sdc already mounted or /mnt/usb busy (see the notes after this list).

  2. For mdadm, I tried sudo mdadm --assemble --scan

    This returns mdadm: No arrays found in config file or automatically. While setting up mdadm, I chose no
    configuration, as mentioned in the article. Should I revisit these settings?
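Two general notes here (standard mdadm/util-linux behavior, not specific to any guide): the “already mounted or busy” message usually means the kernel has already auto-assembled the RAID members into an md device, and an array can also be assembled by naming its members explicitly rather than relying on --scan:

lsblk                                                # shows whether sdc1/sdc2 are already claimed by an md device
sudo mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdc2   # explicit assembly; member names taken from the gparted screenshot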

Lastly, when running sudo gdisk /dev/sdc I get this info:

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Warning! Secondary partition table overlaps the last partition by 4294966385 blocks! You will need to delete this partition or resize it in another utility.

Command (? for help):

All along I have been riding on the hope that the partition table is corrupt, not the drive itself. Am I looking at editing the partition table and offsetting a value by the number of blocks mentioned above? For what it’s worth, 4294966385 is just 911 short of 2^32, which makes me suspect a 32-bit wraparound in a size field somewhere in the table rather than a partition genuinely that far off the end of the disk.

Here is the output from gdisk, printing the partition table

Bennor: I did some more searching around the site and found a post/procedure you provided, with brilliant detail on copying img files to partitions on the drive. Based on the details I have provided, is this the best avenue for me to attempt?

You recommend starting with Step 13

From “MyCloud HD replacement again”.

You really seem to know your stuff here, so thank you very much for your guidance.
-W

The method I used to set up a new hard drive for use with the My Cloud enclosure involves removing the existing partitions and creating new ones, which will destroy any existing data on the hard drive.

It may work to just push the “img” files to their appropriate partitions, as various unbricking procedures indicate, and possibly not lose the existing user data on a properly partitioned My Cloud. Note that any sort of procedure to fix your My Cloud is done at your own risk!

Generally it is recommended to perform a reset or system restore after reassembling the hard drive with the My Cloud back plane to fix any issues the firmware may have with displaying the correct hard drive size.

Can you shed any light on the partition error I posted earlier? It seems that, since a specific number of blocks is being referenced, I should be able to use this info to my advantage somehow.

Warning! Secondary partition table overlaps the last partition by 4294966385 blocks! You will need to delete this partition or resize it in another utility.

I found a post where the author of gdisk indicates the error refers to the last partition, in my case #8. The guidance there was to delete it.
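In gdisk, that deletion would look roughly like the following. These are commands typed at the gdisk prompt, and nothing is written to disk until the w command, so the table can be inspected safely up to that point:

sudo gdisk /dev/sdc
# then, at the "Command (? for help):" prompt:
#   p   print the current partition table
#   d   delete a partition (enter 8 when asked which one)
#   v   verify the table for remaining problems
#   w   write the changes to disk (destructive; only when certain)
#   q   quit without saving anything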

Regarding data recovery, I already consider this drive a loss, so anything from here is a bonus. Whether I get the data back or just end up learning a little more Linux for next time, it’s still a win for me.

Also, when I downloaded the archive with the img files, config.img was not there, only kernel.img and rootfs.img. Should I look elsewhere for the missing file?

Cannot shed any light on the specific partition error you’re experiencing; I’m not all that well versed in fixing partition issues on Linux partitions. According to the following link, one could try to resize the partition using gparted or a similar Linux partitioning tool. But of course changing the partition may result in data loss.

http://askubuntu.com/questions/150378/how-to-fix-mbr-partition-prior-to-ubuntu-installation-a-partition-overlaps-gpt

Which “archive” did you download? The “original_v3.04.01-230.tar.gz” file (from the directions I used and which you linked to previously) that I just downloaded, and which I extracted using 7-Zip on a Windows 7 PC, has all three “img” files.
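For what it’s worth, on the Linux side the same archive unpacks with tar, and the image checksums can then be compared against whatever the download bundles:

tar -xzvf original_v3.04.01-230.tar.gz   # extracts kernel.img, rootfs.img, config.img
md5sum kernel.img rootfs.img config.img  # compare against the values in the bundled md5 file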

When I circled back to my downloads folder, I now see what you see: 3 IMGs and the md5. Good to go there…

So, referring back to the screenshot in post #7, in your opinion do you agree that partition #3 seems to be the issue? I am interpreting the whole “can’t have a partition outside the disk” error to mean that sectors 1-30719 may need to be added to partition #3?

I have no opinion on your particular error, as I’m not very knowledgeable about Linux partitions and how to fix errors within them. The link I provided indicates one possible way to deal with the error you’re seeing.

If you are not going to try to save any user data, one could always just nuke (delete/remove) all of the partitions and start from scratch, as one would with the new-drive method I used or the various other unbricking methods that can be found in this subforum by searching for “unbrick”.
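If one does go that route, wiping the existing partition structures is a one-liner with sgdisk (part of the gdisk package). This destroys everything on the disk, and /dev/sdc here is only an example, so triple-check the device name first:

sudo sgdisk --zap-all /dev/sdc   # wipes the GPT and MBR partition structures; all data is lost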

I read you loud and clear, Bennor. Just trying to proceed cautiously… I’m trying my hardest not to put myself in a position where I mess up and find out later I could have done something different.

Going to review the material you’ve presented and decide what to do… thank you for your candid responses.

I think I am out of space on my Live CD USB stick. When attempting to install mdadm, I ran sudo apt-get update and got the “unable to synchronize mmap” error. I researched this to mean out of space in some way. I then ran df -h, which shows that /dev/loop1 is 100% full at 989M.
Here is the output:
ubuntu@ubuntu:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G   12K  3.9G   1% /dev
tmpfs           790M  1.5M  788M   1% /run
/dev/sdb1        16G  3.1G   13G  21% /cdrom
/dev/loop1      989M  989M     0 100% /rofs
/cow            2.0G  1.3G  640M  67% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
tmpfs           3.9G  1.1M  3.9G   1% /tmp
none            5.0M     0  5.0M   0% /run/lock
none            3.9G   80K  3.9G   1% /run/shm
none            100M   40K  100M   1% /run/user
/dev/md127      1.9G  729M  1.1G  41% /media/ubuntu/dea09609-1dfc-4e3f-bc27-08e8988ceaae
/dev/sda1       150G  137G   13G  92% /media/ubuntu/BA3C89A83C89606D
/dev/loop0      2.0G  1.3G  640M  67% /media/ubuntu/c5213a83-af28-ff4a-8ea0-4bc54e0621ed
/dev/sdd1       7.6G  7.2G  379M  96% /mnt/usb2

/dev/sdb1 refers to the primary partition of the flash drive I am using for the Live CD. It has plenty of space. Would I need to resize /dev/sdb1 first to allocate free space, then resize /dev/loop1 to take in what was allocated? Is there something else I should do instead?

I also deleted /var/cache, as that looked like low-hanging fruit…but it didn’t help my cause.
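For anyone hitting the same wall, note that /dev/loop1 mounted on /rofs is the read-only compressed system image, which reports 100% use by design; the space that matters for installing packages is the /cow overlay mounted on /. A standard way to see what is actually consuming it:

sudo du -xh --max-depth=1 / | sort -h | tail   # largest directories on the overlay filesystem only (-x stays on one filesystem)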

Bennor:
I made it through all the steps in the unbrick guide, but had some issues that I had to feel my way through.
At Step 13, when I start up the SW RAID, I do not have a /dev/md0, but instead /dev/md127. I first tried:

sudo mdadm -A /dev/md0 /dev/sdc1 /dev/sdc2
mdadm: /dev/sdc1 is busy - skipping
mdadm: /dev/sdc2 is busy - skipping

but this led nowhere. When I replaced md0 with md127, I was able to start/stop the SW RAID and completed the six dd steps, moving the 3 img files to their respective locations. All transfers were successful except for the transfer to the /dev/sdc2 partition.
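For reference, the six transfers have this general shape; the target partitions below are placeholders, since the real numbers must come from the guide being followed:

sudo dd if=kernel.img of=/dev/sdcN   # N = kernel partition number from the guide
sudo dd if=config.img of=/dev/sdcN   # N = config partition number from the guide
sudo dd if=rootfs.img of=/dev/md127  # rootfs goes to the assembled RAID device (md0 in the guide)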

So, in your opinion, should I attempt to create /dev/md0 and proceed, or does /dev/md127 serve the same purpose? GParted does list both SW RAID partitions as /dev/md127.

What happens if you run mdadm --stop /dev/md* at step 12 rather than sudo mdadm --stop /dev/md0?

Edit to add: Further, if one followed the directions I used, they should have run the following line at Step 10 to initially create the RAID (all one line of code/text):
sudo mdadm --create /dev/md0 --level=1 --metadata=0.9 --raid-devices=2 /dev/sdb1 /dev/sdb2

Note: Substitute the correct “sdx” value for your configuration.
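Once created, the array’s state can be confirmed before moving on (standard mdadm checks, not part of the original directions):

sudo mdadm --detail /dev/md0   # confirms RAID level, member devices, and sync state
cat /proc/mdstat               # shows the initial mirror resync progress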

Once I got my Ubuntu issues worked out, I started directly at Step 13, so as to avoid intentionally losing data.

I just went back to execute the command you mentioned, but first had to umount /dev/md127. I did not have to do this the first time, so I just throw that out there as a sign of progress, hopefully.

mdadm --stop /dev/md127 returns
mdadm: stopped /dev/md127

The presence of this /dev/md127 device is really throwing me off. I am ready to execute Step 10, if that still makes the most sense?

My guess is that the reason you are seeing md127 is that the partitions/RAID were not destroyed and rebuilt prior to Step 13. When your Linux OS auto-assembles the RAID, it names it md127. The directions I used do indicate, at Step 12, the following if one doesn’t have /dev/md0:

If /dev/md0 is not found, type the following to find the RAID mount point:
sudo ls /dev
or
sudo ls /dev | grep md
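Another standard check (not from the original directions): /proc/mdstat lists whatever arrays the kernel has already assembled, and the RAID superblock can be read directly off a member partition to show which array it belongs to:

cat /proc/mdstat                 # arrays the kernel has assembled, e.g. md127
sudo mdadm --examine /dev/sdc1   # superblock on a member partition; device name assumed from the earlier posts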

Never been so happy to see that neon blue light! After completing the procedure, I had to perform both the 4-sec and 40-sec hard resets before having access to the Dashboard. Between doing the 4-sec and the 40-sec, there was a firmware update I was forced to process.

I did end up creating a new /dev/md0 RAID array (Step 10) and completed all steps through Step 15. I did not have to proceed to Step 16.

Thank you x 3 Bennor.

You are an awesome individual.

WH