New Drive Capacity Not Showing

I’ve rebooted the unit and ran the diagnostic. It still shows X of 2TB used

You can’t just go swapping drives like that.

You'll have to run the Debrick guide found in this forum to get the added space.

Thanks for the reply.

But I must say, that seems like an awfully over-the-top and complex way to deal with upgrading drives.

The data has moved and mirrored and it all works; all it has to do is report how much space there is.

“Make sure your thumbdrive containing the two files is plugged in, your MyBookLive hard drive is hooked up to your computer via e-sata to sata or sata to sata cable and you disconnect any other hard drives to ensure you don’t accidentally delete data on those drives”

I mean what’s all this about? So I have to strip my system down, pull harddrives out of the NAS enclosure… **bleep**

Biomech wrote:

But I must say, that seems like an awfully over-the-top and complex way to deal with upgrading drives.

The drives in the Duo were never meant to be “upgraded.”

Nothing states that and I wouldn’t have bought it if it did.
This should be a very very very simple process and is completely unacceptable.

You mirrored a 2TB disk onto a 4TB disk. That means that only 2TB of the 4TB disk are available. The rest is unallocated.

You need to increase the size of the DataVolume partition on your 4TB disk so that there is no unallocated space left on the 4TB disk. It's a bit Linux intensive, so research it properly before doing it.
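A quick way to see that unallocated space for yourself, if you're curious (the `/dev/sdb` device name is an assumption — check the `lsblk` output for your actual 4TB drive first):

```shell
# List attached disks so you can identify the 4TB drive
lsblk -o NAME,SIZE,TYPE,MODEL

# Print the partition table, including any free (unallocated) space
parted /dev/sdb unit GB print free
```

On a 4TB disk cloned from a 2TB one, `print free` should show a large "Free Space" entry after partition 4.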

Easier is to just follow the instructions you've been given already and use the Debrick guide. You're not the first person to do this, so read the guides of people who have been so kind to give you step-by-step instructions. Use Guide 2.

Thank you for the explanation and reasoning, that makes more sense now. However, I still think it should be far more straightforward.

I’ll try the debrick guide, but can I ask, do I need to run the guide on both drives or just one?

I'll have to pull them out of the MBL and do it from a laptop. I have a SATA-USB adapter I use for plugging in external drives.

You should remove the partitions on both disks so that they are in a factory-clean state. Then do a debrick on one disk, put that disk in MBLD slot A, then put the other disk in slot B. Let the MBLD mirror A over to B.
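If it helps, a minimal way to wipe a disk back to a clean state from a Linux live environment looks something like this (the `/dev/sdb` name is an assumption — verify with `lsblk` before running anything, because wiping the wrong device destroys its data):

```shell
# Confirm which device node is the 4TB drive before touching anything
lsblk -o NAME,SIZE,TYPE,MODEL

# Remove all filesystem and RAID signatures plus the partition table
wipefs --all /dev/sdb

# Write a fresh, empty GPT label so the disk looks factory clean
parted -s /dev/sdb mklabel gpt
```

Both commands need root, and since the drives are connected via a SATA-USB adapter, double-check that the adapter's device (and not an internal disk) is the one you name.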

I'm sure it doesn't, but I like to triple check - removing the partitions isn't going to destroy the data, is it?

Of course removing a partition will also destroy the data. Did you read the guide I linked to? It clearly said the process is for NEW, clean disks - no data. If you want to keep your data then you have to modify the existing partitions using Linux. I honestly think you are a bit in over your head on this one.

As I said, upgrading to larger capacity drives should NOT be this complex

OK, how about this for a plan, to save some time: if I debrick the new drives, leave them clear, and then add an old one in to rebuild the data, it's going to mirror again and end up with 2TB instead of 4TB, right?

So what if I pull the 4TB drives out, clear them and remove all partitions, then put both empty 4TB drives into the MBL chassis - that's going to give me 2x 4TB, right?

If so, I'd then create the shares and copy the data from the old 2TB drive (either via USB on the back of the MBL or via the PC/router, whichever is quicker) onto the new empty 4TB drive in the MBL, which would RAID1 it as it copied, right?

Does that make sense? I’m happy leaving things to run over night etc. The rebuild before took 7 hours per disk so no big issue with that.

(There’s 1.3TB of data in total that needs to be reinstated)

Biomech wrote:

As I said, upgrading to larger capacity drives should NOT be this complex

It’s not complex on the drives that support capacity increase.  As has already been mentioned, this drive does not, so it should be no surprise that a workaround is complex.

As to your other questions: no, that won't work. If you just clear the partitions, you have bricked your drive, as the OS is also wiped when you clear the partitions. That's what the Debrick guide undoes.

1. Remove the partitions on both 4TB disks to get them to a 'factory' state.

2. Use the Debrick guide for new drives on one of the 4TB disks.

3. Insert the debricked disk into the MBLD enclosure and boot it up.

4. After confirming it works, power down the MBLD and insert the other unpartitioned 4TB disk into slot B of the MBLD.

5. Attach the 2TB disk via a powered USB cable to the USB port on the MBLD and let the MBLD mount the external disk.

6. Using SSH, copy the files from the DataVolume of the 2TB disk to the new DataVolume partition of the RAID1 4TB.
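For the final SSH copy, something like this should work (the paths are assumptions — the USB share mount point varies by firmware, so check `df -h` on the MBLD first; and if `rsync` isn't available on the firmware, `cp -a` does the job too):

```shell
# Find where the MBLD mounted the external 2TB disk
df -h

# Copy everything, preserving permissions and timestamps, with progress shown
# (source and destination paths below are examples, not gospel)
rsync -avh --progress /var/media/USB_Drive/ /DataVolume/shares/
```

With 1.3TB to move over USB, expect this to run for many hours, so a `screen` or `nohup` session is worth considering so the copy survives a dropped SSH connection.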

The only way to avoid all this is to take your existing situation and change the size of the DataVolume partition to capture all the unallocated space. But that takes some Linux skills.

Thanks, that’s the kind of thing I was thinking. 

I’ve retained the old drives which have all the data on so worst case scenario I can put them back in.

But I’ll certainly give this a go - thank you

I've run the Debrick Guide 2 but it isn't working. When I put the two drives into the enclosure, the red light stays solid. With the old/other drives it does work.

I have dashboard and ssh access.

This is the result of the guide; as you can see there are some errors here, and you're going to know a lot more than me about what it's doing and why.

root@sysresccd /mnt/usb % ./ rootfs.img /dev/sdb destroy

********************** DISK **********************

script will use the following disk:

Model: ATA WDC WD40EFRX-68W (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
3 15.7MB 528MB 513MB primary raid
1 528MB 2576MB 2048MB ext3 primary raid
2 2576MB 4624MB 2048MB ext3 primary raid
4 4624MB 2000GB 1996GB ext4 primary raid

is this REALLY the disk you want? [y] y

********************** IMAGE **********************

********************** IMPLEMENTATION **********************

everything is now prepared!
device: /dev/sdb
image_img: rootfs.img
destroy: true

this is the point of no return, continue? [y] y

dd: error writing ‘/dev/sdb1’: No space left on device
10+0 records in
9+0 records out
10371072 bytes (10 MB) copied, 0.00516922 s, 2.0 GB/s
dd: error writing ‘/dev/sdb2’: No space left on device
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000549151 s, 0.0 kB/s
dd: error writing ‘/dev/sdb3’: No space left on device
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000553634 s, 0.0 kB/s
dd: error writing ‘/dev/sdb4’: No space left on device
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000543612 s, 0.0 kB/s
Testing with pattern 0x00: done
Reading and comparing: done
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 528M 2576M
(parted) mkpart primary 2576M 4624M
(parted) mkpart primary 16M 528M
(parted) mkpart primary 4624M -1M
(parted) set 1 raid on
(parted) set 2 raid on
(parted) set 3 raid on
(parted) set 4 raid on
(parted) quit
Information: You may need to update /etc/fstab.

Warning: blocksize 65536 not usable on most systems.
mke2fs 1.42.13 (17-May-2015)
mkfs.ext4: Device size reported to be zero. Invalid partition specified, or
partition table wasn’t reread after running fdisk, due to
a modified partition being busy and in use. You may need to reboot
to re-read your partition table.

destroying was done, would you like to continue with installation? [y] y

mdadm: /dev/sdb1 is not a block device
mdadm: Cannot find /dev/md0: No such file or directory
mke2fs 1.42.13 (17-May-2015)
The file /dev/md0 does not exist and no size was specified.
mdadm: error opening /dev/md0: No such file or directory

synchronize raid… mdadm: Cannot find /dev/md0: No such file or directory

copying image to disk…
dd: writing to ‘/dev/md0’: No space left on device
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.00186105 s, 0.0 kB/s
mount: /dev/md0 is not a block device (maybe try `-o loop’?)
cp: cannot stat ‘/mnt/md0/usr/local/share/bootmd0.scr’: No such file or directory
./ line 361: /mnt/md0/etc/nas/service_startup/ssh: No such file or directory
umount: /mnt/md0: not mounted
mdadm: /dev/md0 does not appear to be an md device
BLKGETSIZE: Inappropriate ioctl for device
BLKGETSIZE: Inappropriate ioctl for device

all done! device should be debricked!
root@sysresccd /mnt/usb %

Both drives are detected in the dashboard and reported as passing the test. However, the left column says "failed". I also get alerts: "Drive B Failed" and "DataVolume failed to mount".

I’ve created the swap as instructed and using parted the partition table looks correct.

Factory reset, quick and full. No luck.
Removed drive B; it's gone from the details list, but the dashboard still shows A and B as failed.
The option to edit the mode is greyed out.

Any ideas?

This is the structure of Drive A and B, they are identical

My old drives work perfectly, although they do spend 7-8 hours rebuilding the RAID every time if you just take them out and put them straight back in.

Hi @Biomech. The scripts you are using may not be stable enough with your device configuration. Luckily, I just tried an easier way to upgrade the capacity and succeeded; you may want to have a look.

In your case you can start by putting the old disks back. I hope manually executing the parted commands is not too difficult for you.

Hi @Tang_Jianyu,

What commands did you do in Parted?
When I put the new 4TB drives in, it just cloned the old 2TB drives, so it thought there was only 2TB.

You can try the commands below; just make sure the parameters match your setup:

  • select current editing disk:

(parted) select /dev/sda

  • check partition table

(parted) print free

you should see the partition below in the list

4 4624MB 2000GB 1996GB primary raid

  • resize

(parted) resize 4 4624M 4001G

  • now quit

(parted) quit

  • Do the same on sdb

Reboot and run factory reset.
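One caveat with the `resize` command: parted 3.x dropped the old filesystem-aware `resize`, so on newer versions you grow the partition with `resizepart` and then grow the RAID array and filesystem separately. A rough sketch, assuming DataVolume sits on an md RAID device (the `/dev/md3` name is a guess — check `cat /proc/mdstat` for the real one):

```shell
# Grow partition 4 to the end of each disk (parted 3.x syntax)
parted /dev/sda resizepart 4 100%
parted /dev/sdb resizepart 4 100%

# Let the RAID1 array expand into the enlarged partitions
mdadm --grow /dev/md3 --size=max

# Grow the ext4 filesystem to fill the array
resize2fs /dev/md3
```

If the filesystem is unmounted when you run `resize2fs`, run `e2fsck -f` on it first; online growth of a mounted ext4 volume is also supported.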

Thanks, I'll give that a go this week and see what happens 🙂