Is there an 8TB img here and step-by-step instructions?

If you are trying to use an 8TB drive then this is the entry you want.

/dev/sdb
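
If you want to double-check that /dev/sdb really is the 8TB drive before you start partitioning, listing the block devices from the Ubuntu live session will confirm it. The device name can vary from system to system, so match on the reported size and model rather than assuming /dev/sdb:

sudo fdisk -l
lsblk -o NAME,SIZE,MODEL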

OK, here is where it stops:

Partition number? 0
Error: Partition doesn’t exist.
(parted) mklabel gpt
Warning: Partition(s) on /dev/sdb are being used.
Ignore/Cancel? i
Warning: The existing disk label on /dev/sdb will be destroyed and all data on
this disk will be lost. Do you want to continue?
Yes/No? y
(parted) mkpart primary 528M 2576M
(parted) mkpart primary 2576M 4624M
(parted) mkpart primary 16M 528M
(parted) mkpart primary 4828M 100%
(parted) mkpart primary 4624M 4724M
(parted) mkpart primary 4724M 4824M
(parted) mkpart primary 4824M 4826M
(parted) mkpart primary 4826M 4828M
(parted) set 1 raid on
(parted) set 2 raid on
(parted) quit
Information: You may need to update /etc/fstab.

rickt1962@rickt1962-Aspire-5517:~$ sudo mkfs -t ext4 /dev/sdb4
mke2fs 1.42.9 (4-Feb-2014)
/dev/sdb4 is apparently in use by the system; will not make a filesystem here!
rickt1962@rickt1962-Aspire-5517:~$

I found that when I get that message after doing the partitioning, I have to reboot Linux. As indicated above, I typically rerun apt-get update && apt-get install mdadm after the reboot. Double-check with fdisk -l post reboot to ensure the 8TB drive hasn’t moved from the /dev/sdb location.
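
For reference, the post-reboot sequence looks roughly like this from the Ubuntu live environment (the parted print at the end is just an extra sanity check, not part of the original directions, to confirm the new partition table survived the reboot; adjust /dev/sdb if the drive came back under a different name):

sudo apt-get update && sudo apt-get install mdadm
sudo fdisk -l
sudo parted /dev/sdb print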

Thank you for helping with this!!! I will reboot tomorrow :)

I have a (working) 2nd gen 6TB that I want to upgrade to an 8TB. Once I start the process, if I run into problems and want to just put the 6TB drive back in, would it still work, or am I pretty much committed once I start?

As long as you didn’t do anything to the 6TB disk, it will work when you put it back.

Which process? If you haven’t started yet, make sure to follow the directions for the second gen v2.x single bay My Cloud units, as their unbricking process is different from the one used on the first gen v4.x single bay My Cloud units.

As long as you didn’t make any changes to that 6TB drive, you should be able to swap it back into the My Cloud drive enclosure and boot it up without issue. Once you format, repartition, or remove firmware files from that 6TB drive, it may no longer boot properly.
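
If you want a little extra insurance before pulling the 6TB drive, one option (not part of the unbrick directions, just a precaution) is to dump its partition table to a file from a Linux live session so you have a record of the original layout:

sudo sfdisk -d /dev/sdX > mycloud-6tb-partitions.txt

Replace /dev/sdX with whatever the 6TB drive shows up as. This only records the partition layout; it does not back up the firmware partitions or your data, so the real safety net is simply not writing anything to that drive.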

Hi - I have a 4TB WDMyCloud. I recently got a great deal on the 10TB easystore, and I am considering upgrading the HDD of the WDMyCloud to 10TB. A couple of clarifications:

  1. If I understand your post correctly, the instructions work with any drive size, right?
  2. There are mentions of an end sector number that depends on the size of the disk. Would you know what it is for 10TB?

Please advise.

I took an 8TB EasyStore drive and put it into a first gen single bay My Cloud following Fox_Exe’s directions. My experience is detailed in this post:

I did not make any changes to the directions, and as far as I know I did not change the “end sector” number to match the 8TB WD Red drive. I followed Fox_Exe’s directions exactly, using the values for the partitions indicated in the directions. The My Cloud firmware should resize the user data partition automatically to fill the available space when you reboot the My Cloud after performing the various steps. I used a Linux Ubuntu boot CD to perform those steps.
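
If you want to verify the resize actually happened after that first boot, the Capacity shown on the Dashboard is the easiest check. With SSH enabled you could also look at the data volume directly, something along these lines (assuming the user data partition mounts at /DataVolume on the first gen firmware; check your unit’s actual mount point):

df -h /DataVolume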

The second gen single bay My Cloud has a slightly different set of steps to that of the first gen My Cloud and uses different firmware.

Thank you for your prompt response. I have two more clarifications:

  1. How do I tell whether my WD My Cloud is Gen 1 or Gen 2? The part number is WDBCTL0040HWT-00 and the current firmware is v04.05.00.327. Does this help?
  2. I just opened my 10TB easystore and, to my disappointment, it was not a Red drive. The details are: SATA / 256MB cache, WD100EMAZ-00WJTAO. The Red drives are the “NAS drives”. Do you have any point of view on whether I should go ahead with the switch?

You have a first gen single bay My Cloud.
First gen uses v4.x firmware and the P/N ends in “-00”.
Second gen uses v2.x firmware and the P/N ends in “-10”.

It’s possible the WD100EMAZ-00WJTAO might be a Red drive just with a different label. But who knows. You can use any 3.5 inch SATA hard drive in the My Cloud enclosure. I’ve experimented with a 250GB drive, a WD 500GB Green, a WD 1TB Blue, and now a WD 8TB Red drive in my first gen enclosure over the last few years, with no problems on any of them. The 10TB drive should work.

Thank you. I will try out the switchover during the weekend. Fingers crossed! :)

I’ve been going through the fox_exe process and it seems to complete perfectly with no errors, but my 8TB drive doesn’t seem to boot up. I get the solid LED, but it stays solid for about 45 seconds, flashes off for about 1 second, and then seems to repeat that cycle.

I can use the other methods to flash the 4TB image, but the drive never updates to full capacity that way, so I thought I’d try this one.

How long should the first-time boot normally take? With the 4TB image it’s only about 5 minutes, so I wouldn’t expect the 8TB to take much longer. Any advice on what I could be missing?

-han

What generation is your single bay My Cloud (not the My Cloud Home, as that is an entirely different device)? The directions for the single bay/single drive first gen are different from those for the second gen.

When one unbricks and boots for the first time, they may get a red front LED. See if you can access the My Cloud Dashboard. If so, perform a System Only reset via the Dashboard > Settings > Utilities page. That typically clears the red LED and fixes the 0K issue in the Capacity section.

Drive size shouldn’t matter, as I’ve unbricked mechanical drives from 250GB up to 12TB for use in a first gen single bay My Cloud using Fox_exe’s directions. I haven’t had any luck getting any of my SSD drives to work, however. I do use Ubuntu (either a boot disc/USB flash drive or a VM install of Ubuntu) to perform the unbrick steps. If unbricking a first gen, try using the v3.x firmware file to unbrick rather than the v4.x file. Using the v4.x file sometimes (in my experience) generates an out of space error when pushing one of the IMG files to one of the partitions during the process.
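
For anyone unfamiliar with what “pushing one of the IMG files” means: the first gen directions have you write the extracted firmware image files onto specific small partitions, typically with dd. A rough illustration is below, but the filename and partition number here are only placeholders, so use whatever names and targets Fox_exe’s directions actually specify rather than copying this verbatim:

sudo dd if=kernel.img of=/dev/sdb5   # placeholder file/partition - follow the directions for the real ones
sync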

Thanks for the info, Bennor. This is actually about the 4th time I’ve replaced the drive in this old 1st gen unit. The first two times were a rather lengthy manual process (1TB original drive) that backed up and then restored all the data to the new drive via a terminal on a Linux boot CD. Then I used the .img method, which is quick and worked great once I hit 3TB. But it seems a bit of a waste to put in a 4TB, and I had a couple of 8TB drives ‘laying around,’ so I wanted to use one of those.
I finally figured out the process through trial and error. I had to leave off the 2nd raid command or the 3rd one wouldn’t work. After skipping that one step it finally did boot with the red light, and I was able to run a quick restore to get it back to my green (blue) light and show the full 7.9TB.

You can’t know how ecstatic I am about getting this process down. I repeated it with a spare 4TB drive and it worked perfectly, so I feel like I can use this box forever, as long as the hardware itself keeps up.