How to unbrick a totally dead My Cloud?

Hi, all,

I got a My Cloud 4TB several days ago. It was working very well after I installed Transmission, Aria2 and FatRat. When I installed the amule-daemon (http://packages.debian.org/wheezy/net/amule-daemon), it removed almost all of the My Cloud system :cry: and it became a total brick!

I followed the guides below to unbrick it, but it is still a dead brick; most of the guides are for the MBL.

  1. http://community.wd.com/t5/WD-My-Cloud/Guide-to-direct-backup-unbrick-disassemble-process-video-like/td-p/628611

  2. http://community.wd.com/t5/WD-My-Cloud/How-to-unbrick-a-totally-dead-WD-My-Cloud/m-p/632129#M2347

  3. http://community.wd.com/t5/My-Book-Live/GUIDE-How-to-unbrick-a-totally-dead-MBL/m-p/435724

  4. http://community.wd.com/t5/My-Book-Live/GUIDE-Debrick-MyBookLive-v2-0/td-p/567582

I pulled out the hard disk, connected it to a computer running SystemRescueCd, and ran the following two options:

  1. dd if=/mnt/usb/rootfs.img of=/dev/sda bs=1M

  2. ./debrick.sh rootfs.img /dev/sda destroy

Both of them were useless! :angry:

Can anyone help me?

Why did you use a script made to debrick an MBL on a MyCloud?

If the partition layout of the MyCloud is correct, you only need to write the firmware OS image (the decompressed 2 GB file) over each RAID partition (2 GB each).

When the write process finishes successfully, you only need to shut down, reconnect the HDD to the mini board, attach the power supply and the Ethernet cable to the router, and wait for the blue light.

It should work if it is a soft brick and the partition layout is not messed up.
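For example, with the disk attached to a rescue PC where it shows up as /dev/sdb (only an assumption, check with parted first), the write would be something like:

# dd if=rootfs.img of=/dev/sdb1 bs=1M
# dd if=rootfs.img of=/dev/sdb2 bs=1M
# sync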

Hi, sammarbella,

Thanks for your reply.

It’s too late; by the time I saw your solution, I had already rebuilt the partitions.

So, any other suggestions?

I saw in the NFODIZ thread that the MBL has 4 partitions, and in one method he rebuilds the partition table through a script.

I don’t have the knowledge to repartition the disk of the MyCloud, which has 8 partitions and empty space at the beginning.

My disk (3TB one):

WDMyCloud:~# parted
GNU Parted 2.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA WDC WD30EFRX-68E (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system     Name     Flags
 3      15.7MB  528MB   513MB   linux-swap(v1)  primary
 1      528MB   2576MB  2048MB  ext3            primary  raid
 2      2576MB  4624MB  2048MB  ext3            primary  raid
 5      4624MB  4724MB  99.6MB                  primary
 6      4724MB  4824MB  101MB                   primary
 7      4824MB  4826MB  1049kB                  primary
 8      4826MB  4828MB  2097kB                  primary
 4      4828MB  3001GB  2996GB  ext4            primary

(parted)

Surely there are members of this forum who can help rebuild the partition table; maybe NFODIZ could adapt his script to the My Cloud case.

Hi, sammarbella,

Could you log in to your Cloud and check whether the scripts below exist, or ones like them? (A quick search sketch follows the list.)

  1. freshInstall.sh

  2. partitionDisk.sh

  3. disk-param.sh

  4. swap.c
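A quick way to look for them all at once from a shell might be (just a sketch using a standard find):

# find / \( -name freshInstall.sh -o -name partitionDisk.sh -o -name disk-param.sh -o -name swap.c \) 2>/dev/null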

As you said, the partitions are different between the MBL and the My Cloud.

Maybe once I fix the partition issue of the My Cloud, copy the system files into it, mount the disk, etc., the Cloud will be unbricked.

If you have any ideas, let me know.

Thanks!

When I mounted the rootfs.img on Ubuntu, I found the system.conf file; maybe somebody knows how to use it:

## Image parameters
build_image="true"
build_container="false"
system_boot_type="drive"
rootfs_raid="true"
main_partition_image="project-control"
installer_image="installer-control"
master_package_name="sq"
container_id=wd.container
group_id=wd.group
supergroup_id=wd.supergroup
PROJECT_GROUP="nas_v2_5X"

## hardware mapping
LED_HW_CONTROL_LOCATION="/sys/class/leds/system_led/color"

## Note, for gen-1.0, apis are inside the component. For 2.1 and later, apis are all together within separate component
REST_API_ROOT="/var/www/rest-api"
DOCUMENT_ROOT="/var/www/htdocs"
##rest_api_generation="gen-1.0"
rest_api_generation="gen-2.1"

## system paths
WDCOMP_DIR="/etc/wdcomp.d"
reformatDataVolume=/etc/.reformat_data_volume
updateInProgress=/etc/.updateInProgress
freshInstall=/etc/.fresh_install
upgradeMountPath=/mnt/rootfs
pkg_upgrades_dir=/CacheVolume/upgrade

## Drive parameters
driveConnectorType="ahci"
DVC_DRIVE_COUNT=1
DVC_PART_COUNT=4
DVC_PARTED_CMDS_ONE_DRIVE='mklabel gpt mkpart primary 528M 2576M mkpart primary 2576M 4624M mkpart primary 16M 528M mkpart primary 4624M -1M set 1 raid on set 2 raid on quit'
DVC_PARTED_CMDS_TWO_DRIVE='mklabel gpt mkpart primary 528M 2576M mkpart primary 2576M 4624M mkpart primary 16M 528M mkpart primary 4624M -1M set 1 raid on set 2 raid on set 3 raid on set 4 raid on quit'
dataVolumeDevice="/dev/sda4"
swapDevice="/dev/sda3"
rootfsDevice="/dev/md0"
backgroundPattern=0xE5
dataVolumeFormat="ext4"
dataVolumeLazyInitMountOpt="init_itable=10"
dataVolumeSystemReservedPercent="2"
cacheVolumeDevice=""
sharesDevice=""
bootDevice=""
config_dir=/etc/nas/config
install_param="${config_dir}/disk-param.conf"

The others should be useless for unbricking the My Cloud.

I think the entries below should be helpful for me, but how do I use them?

reformatDataVolume=/etc/.reformat_data_volume

freshInstall=/etc/.fresh_install

install_param="${config_dir}/disk-param.conf"
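My guess (only an assumption, I have not confirmed it) is that the first two are marker files the firmware checks at boot rather than scripts, so they would be created with something like:

# touch /etc/.fresh_install
# touch /etc/.reformat_data_volume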

Hi, sammarbella,

Based on the system.conf file, I saw the script below:

mklabel gpt mkpart primary 528M 2576M mkpart primary 2576M 4624M mkpart primary 16M 528M mkpart primary 4624M -1M set 1 raid on set 2 raid on quit

Is this the partitioning rule? If yes, how do I use it?

Thanks.

I split the above script into several parts and ran them as below:

root@y-System-Product-Name:/# parted
GNU Parted 2.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) select /dev/sdb                                                  
Using /dev/sdb                                                             
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you
want to continue?
Yes/No? y                                                                 
(parted) mkpart primary 528M 2576M                                       
(parted) mkpart primary 2576M 4624M                                       
(parted) mkpart primary 16M 528M                                          
(parted) mkpart primary 4624M -1M                                         
(parted) set 1 raid on                                                    
(parted) set 2 raid on                                                    
(parted) quit                                                             
Information: You may need to update /etc/fstab.
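The same commands can probably also be given to parted in one shot instead of typing them interactively, something like this (a sketch, again assuming the disk is /dev/sdb):

# parted -s /dev/sdb -- mklabel gpt mkpart primary 528M 2576M mkpart primary 2576M 4624M mkpart primary 16M 528M mkpart primary 4624M -1M set 1 raid on set 2 raid on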

 

Using the script, I got 4 partitions, so I think /dev/sdb5, 6, 7 and 8 should be built by the Debian system that runs on the My Cloud.

Next step, I want to copy the system files into the related partitions.

:smiley:Can anybody tell me how to do this step?

You need to download the firmware image for the MyCloud, extract it, and use “dd” to write the two RAID partitions.

Find the rootfs.img file in the zip:

root@pogo1:/shares/tunes/WD# ls
.  ..  WDMyCloud-030104-139-20131028.zip
root@pogo1:/shares/tunes/WD# unzip *
Archive: WDMyCloud-030104-139-20131028.zip
inflating: sq-030104-139-20131028.deb

root@pogo1:/shares/tunes/WD# dpkg -x *deb .

root@pogo1:/shares/tunes/WD# ls
.  ..  CacheVolume  WDMyCloud-030104-139-20131028.zip  sq-030104-139-20131028.deb
root@pogo1:/shares/tunes/WD# cd CacheVolume
root@pogo1:/shares/tunes/WD/CacheVolume# ls
.  ..  upgrade
root@pogo1:/shares/tunes/WD/CacheVolume# cd upgrade
root@pogo1:/shares/tunes/WD/CacheVolume/upgrade# ls
.  ..  postinst_all  postinst_body  postinst_container  postinst_footer  postinst_header  postinst_image  rootfs.img  rootfs.md5
root@pogo1:/shares/tunes/WD/CacheVolume/upgrade# ls -al *img
-rw-r--r-- 1 root root 2047803392 Oct 28 17:26 rootfs.img
root@pogo1:/shares/tunes/WD/CacheVolume/upgrade#

 

Now, use dd to WRITE it to the two RAID partitions (assuming sdb):

 

# dd if=./rootfs.img of=/dev/sdb1

# dd if=./rootfs.img of=/dev/sdb2
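It is probably also worth checking the image against the rootfs.md5 that sits next to it in the upgrade folder, and running sync once dd finishes, roughly:

# md5sum rootfs.img
# cat rootfs.md5
# sync

(Compare the two hashes by hand; I am not sure rootfs.md5 is in the exact format that md5sum -c expects.)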

 


cnlinya wrote:

Next step, I want to copy the system files into the related partitions.

:smiley:Can anybody tell me how to do this step?


Writing the system OS image like JRman described is the easiest and fastest method.

If you are a GUI guy (like me), you can write it using a GUI tool, like I said in the other thread: download the firmware, unpack the system OS file rootfs.img, and use the disk utility in your Ubuntu (I did it in Lubuntu, but it should be the same).

Download http://download.wdc.com/nas/WDMyCloud-030104-139-20131028.zip

Unzip WDMyCloud-030104-139-20131028.zip to get sq-030104-139-20131028.deb.

Extract sq-030104-139-20131028.deb to get data.tar.

Extract data.tar to get a folder named CacheVolume.

Inside CacheVolume there is another folder named upgrade.

Inside upgrade there is a bunch of files; the one needed is rootfs.img.

Open the disk utility in Ubuntu (disk, disks, gnome-disks, several names… just Alt-F2 “gnome-disks”), select the first RAID partition on the MyCloud HDD, and click on the gears icon (the one in the middle of the utility screen),

then click on “Restore Disk Image”, browse using the combo box, and select rootfs.img.

Click Open and let the write process finish, repeat these steps with the second RAID partition, close the utility, shut the system down, reconnect the HDD to the mini board, and attach the Ethernet cable to the router and the power supply.

If the partition layout is OK and the hardware is OK, this should recover a soft brick on the MyCloud.


Hi, JRman,

Thanks for your reply.

So I just need to dd the rootfs.img to sdb1 and sdb2, with no other operations? I will try it later.

Do I need to copy the bootmd0.scr file to My Cloud? If yes, where can I get it?

And do I need to do the sync & swap operations?

Let me know your advice.

Thanks.

Hi, sammarbella,

Thanks for your reply.

I will try your method later, because I am not at home:smiley: And I will post the result here.

Thanks for your help!

Did you see my last post on page 1?

No more operations are needed if the partition layout and hardware are OK.

P.S. Cross-posting…:smiley:

Hi, sammarbella,

Yes, I saw it. Thanks for your help, I will have a try when I at home.

I wish everything will be ok.

Hello,
Do I have to open my WD My Cloud, or is it possible to do this through the USB port on the back side? I do not want to lose my warranty. As I read several posts, I think my Cloud is bricked because I changed DHCP to a static IP, and now I have only the green light flashing.

Thank you for your reply.

Hi, AtaSmrk,

Sorry, I do not know how to unbrick the MyCloud through the USB port. I am a newbie, too.

Good luck.

Hi, sammarbella,

I did “dd if=./rootfs.img of=/dev/sdb1” and “dd if=./rootfs.img of=/dev/sdb2”, but it was useless.

I think I have to create a /dev/md0 to hold the rootfs.img, because I saw rootfsDevice=“/dev/md0” in the system config file.

I’m wondering if you could do me a favor and post the files below here:

  1. freshInstall=/etc/.fresh_install

  2. reformatDataVolume=/etc/.reformat_data_volume

  3. install_param=“${config_dir}/disk-param.conf”

Thanks a lot.

 

I think rootfsDevice=“/dev/md0” refers to a software RAID array managed by mdadm.

 

md0 is a RAID array of type 1 (mirrored), composed of 2 partitions, in our case sdb1 and sdb2.

 

http://en.wikipedia.org/wiki/Mdadm

 

Creating an array


mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sda1 /dev/sdb1

Create a RAID 1 (mirror) array from two partitions. If the partitions differ in size, the array is the size of the smaller partition. You can create a RAID 1 array with more than two devices. This gives you multiple copies. Whilst there is little extra safety in this, it makes sense when you are creating a RAID 5 array for most of your disk space and using RAID 1 only for a small /boot partition. Using the same partitioning for all member drives keeps things simple.
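Adapted to our case it would probably look like this (just a sketch, assuming the My Cloud disk shows up as /dev/sdb on the rescue PC, so the two 2 GB RAID members are sdb1 and sdb2):

# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdb2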

I didn’t find the files you asked for, but I found this:

 

/etc/mdadm/mdadm.conf

 

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Thu, 30 Aug 2012 16:25:22 -0700
# by mkconf 3.1.4-1+8efb9d1

 

/proc/partitions

 

major minor  #blocks  name

   8        0 2930266584 sda
   8        1    1999872 sda1
   8        2    1999872 sda2
   8        3     500736 sda3
   8        4 2925551616 sda4
   8        5      97280 sda5
   8        6      98304 sda6
   8        7       1024 sda7
   8        8       2048 sda8
   9        1    1999808 md1

Hi, sammarbella,

I found a script named “masterInstall.sh” under “/usr/local/sbin”; maybe it can help me :smiley:

Do you know this script?

Thanks.

Hi cnlinya, have you managed to fix yours? I’m in the same boat! I just restored both partitions with dd, but I haven’t tested it yet since I’m not at home and am doing it remotely.