How to grow the RAID array size?

Hello

One of my 2x2TB drives failed. I run the device in RAID 1 (mirror). I took the opportunity to replace both HDs in my MBWE II (WhiteLight) with bigger ones. It has already synced all the data onto the new HDs, and it recognizes the new HD sizes (5TB each). Now, the problem is that it shows both drives in the RAID array, but it displays the old size (1.87TB) instead of the new one (5TB).

What should I do to fix this?

Thanks
Gilson

You would have to enable SSH on your unit, then carefully read these websites on how to resize (grow/expand) your RAID 1 volume:

http://zackreed.me/articles/64-increasing-the-size-of-mdadm-raid1-disks

https://raid.wiki.kernel.org/index.php/Growing
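In rough outline, the procedure from those links boils down to the sequence below. This is only a sketch: it assumes your data volume is /dev/md2 with an ext3 filesystem mounted at /DataVolume (the usual layout on these units, but verify with mount and cat /proc/mdstat first), and the filesystem must be unmounted before e2fsck touches it.

cat /proc/mdstat                  # confirm the array name and its current size
mdadm --detail /dev/md2           # confirm both new partitions are active and in sync
umount /DataVolume                # the filesystem must not be mounted during the check
mdadm --grow /dev/md2 --size=max  # expand the array to fill the new partitions
e2fsck -f /dev/md2                # check the filesystem before resizing it
resize2fs /dev/md2                # grow the filesystem to fill the array
e2fsck -f /dev/md2                # re-check afterwards
mount /DataVolume                 # remount the volume (or reboot the unit)

Read both links in full before running anything; the order matters.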

Just to make sure I got this right: since the GUI shows two drives of 5TB each, the only thing I need to do is resize the RAID 1 array, correct?

It looks like I just need to run the following commands through SSH as root:

mdadm --grow /dev/md2 --size=max

e2fsck -f /dev/md2

resize2fs /dev/md2

e2fsck -f /dev/md2

Please confirm. Thanks

One last question: I have already synced all the data. I won't lose my data by running those commands, will I? Thanks

The best thing to do is back up your data first. Nothing is guaranteed. You should not lose your data, but the only way to be sure is to have a backup.
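If you have a USB disk attached to the unit, a backup over SSH could look something like this. It is a sketch only: /dev/sdc1, the mount point, and the availability of rsync are assumptions, so adjust to what your unit actually has (cp -a works if rsync is missing).

mkdir -p /mnt/usbbackup
mount /dev/sdc1 /mnt/usbbackup         # assumed USB disk; check dmesg or fdisk -l for the real device
rsync -a /DataVolume/ /mnt/usbbackup/  # copy the whole data share onto the USB disk
umount /mnt/usbbackup

One more caution: make sure the RAID volume is unmounted before you run e2fsck on it. Running fsck against a mounted filesystem can itself corrupt data.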

Awesome

I first ran the command:
~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1]
md2 : active raid1 sda4[2] sdb4[3]
1950277343 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
256896 blocks [2/2] [UU]

md3 : active raid1 sdb3[1] sda3[0]
987904 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
1959872 blocks [2/2] [UU]

unused devices: <none>

So, the next commands I should run are:
mdadm --grow /dev/md2 --size=max
e2fsck -f /dev/md2
resize2fs /dev/md2
e2fsck -f /dev/md2

Would you confirm?

Thanks

I also ran the following command, just to make sure you can help me with the right commands:
~ # fdisk -l
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sda: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 1 0+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)
You must set cylinders.
You can do this from the extra functions menu.

Disk /dev/sdb: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 1 0+ ee EFI GPT
Partition 1 has different physical/logical beginnings (non-Linux?):
phys=(0, 0, 1) logical=(0, 0, 2)
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(267349, 89, 4)

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 121602 976762583 7 HPFS/NTFS
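A side note on that fdisk output: the fdisk build on these units predates GPT support, which is why both 5TB drives report 0 MB with a single protective 'EFI GPT' entry. That is cosmetic, not a problem. If parted happens to be installed (it may not be on stock firmware), it can read GPT and show the real partition layout:

parted /dev/sda unit GB print   # prints the actual GPT partitions and sizes

Your /dev/sdc is a separate 1TB NTFS disk (presumably USB) and is not part of the array.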

I ran the commands and this is what I got:

~ # mdadm --grow /dev/md2 --size=max
mdadm: component size of /dev/md2 has been set to 585566047K
~ # e2fsck -f /dev/md2
e2fsck 1.38 (30-Jun-2005)
e2fsck: while determining whether /dev/md2 is mounted.
Couldn’t find ext2 superblock, trying backup blocks…
e2fsck: while trying to open /dev/md2

The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193

e2fsck:
~ # resize2fs /dev/md2
resize2fs 1.41.4 (27-Jan-2009)
ext2fs_check_mount_point: Input/output error while determining whether /dev/md2 is mounted.
~ #
~ # e2fsck -f /dev/md2
e2fsck 1.38 (30-Jun-2005)
e2fsck: while determining whether /dev/md2 is mounted.
Couldn’t find ext2 superblock, trying backup blocks…
e2fsck: while trying to open /dev/md2

The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193

Now the device shows an error in the RAID. The status is 'failed to mount'.
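One thing stands out in that output: mdadm set the component size to 585566047K, which is smaller than the 1950277343 blocks md2 had before (compare your /proc/mdstat above), so the grow actually shrank the array underneath the filesystem. 585566047K is almost exactly a 5TB data partition minus 4TiB, which suggests the unit's old 32-bit mdadm/kernel wrapped the size calculation; that would also explain why the superblock can no longer be found. If that is what happened, and nothing has been written to the array since, setting the component size back to its original value may bring the filesystem back. This is a hedged recovery sketch, not a guaranteed fix; do not run mkfs, and do not let fsck "repair" anything until the size is restored:

mdadm --grow /dev/md2 --size=1950277343   # restore the pre-grow size taken from /proc/mdstat
e2fsck -n /dev/md2                        # -n = read-only check; change nothing yet
mount -o ro /dev/md2 /DataVolume          # assumed mount point; mount read-only first

If the filesystem comes back, copy your data off before attempting any grow again.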

It’s great that you successfully replaced the drives in your My Book World Edition (MBWE) II NAS with larger ones. However, it seems like the NAS is still reporting the old drive sizes instead of the new ones. This could be due to a few reasons:

  1. Partition Resize: The NAS might not have automatically resized the RAID partitions to take advantage of the new drive sizes.
  2. Metadata Update: The metadata that describes the RAID array might not have been updated to reflect the new drive sizes.

To fix this issue and ensure your RAID array is properly utilizing the full capacity of the new drives, you might need to perform some manual steps. Keep in mind that these steps could vary based on the specific firmware and software version of your NAS. Here’s a general approach you can consider:

  1. Backup: Before you proceed with any changes, make sure you have a complete backup of your important data. This is a precaution in case something goes wrong during the process.
  2. Access the NAS Interface:
  • Access the web interface or management console of your My Book World Edition NAS.
  • Navigate to the RAID configuration settings.
  3. Resize Partitions (see the command sketch at the end of this post):
  • Look for an option to resize or expand the RAID partitions. This might be located in the RAID management section.
  • If you find an option to resize partitions, select the RAID array and follow the instructions to expand the partitions to the new drive sizes.
  4. Update Metadata:
  • Some NAS systems require you to update or synchronize the RAID metadata after resizing partitions. Look for an option to update or synchronize the RAID metadata.
  5. Reboot: After making these changes, consider rebooting the NAS to ensure that the new drive sizes are recognized correctly.
  6. Verify and Monitor:
  • After the reboot, verify if the new drive sizes are now correctly displayed in the RAID array.
  • Monitor the system for any errors or warnings during the process.
  7. Resync (if needed):
  • Depending on your NAS’s behavior, it might need to perform a resynchronization of data to ensure data integrity across the new drives.

Please note that these steps are provided as a general guideline. Your specific NAS model and firmware might have slightly different steps and options. If you’re unsure or concerned about making changes to your RAID array, it’s a good idea to consult the official user manual or contact the manufacturer’s support for guidance specific to your device.
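On the WhiteLight specifically, the web interface exposes no partition-resize or metadata option, so steps 3 and 4 above translate to the mdadm --grow / resize2fs sequence over SSH discussed earlier in the thread. Afterwards, the result can be verified from the shell (a sketch, assuming the data array is /dev/md2 mounted at /DataVolume):

mdadm --detail /dev/md2 | grep -i size   # array size and per-device size after the grow
df -h /DataVolume                        # filesystem size as the OS now sees it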

It’s great that you’ve upgraded the hard drives in your MBWE II (WhiteLight) and have successfully synced the data. However, it seems that the RAID array is not recognizing the full capacity of the new 5TB drives. This is a common issue when upgrading drives in a RAID array, as the RAID controller might still be using the old drives’ configuration.

Here are the steps you can follow to address this issue:

  1. Backup Important Data: Before proceeding, ensure that you have backups of all important data on your RAID array. While this process shouldn’t result in data loss, it’s always a good practice to have backups in case of unexpected issues.
  2. Access RAID Configuration: To modify the RAID configuration and recognize the full capacity of the new drives, you’ll need to access the RAID controller’s configuration utility. The specific steps to do this can vary depending on your RAID controller. It’s usually done during the boot process by pressing a key (e.g., Ctrl+R, Ctrl+I, Ctrl+M) when prompted. Consult your RAID controller’s documentation or your system’s manual for the exact key combination and steps.
  3. Check RAID Status: Once you’re in the RAID configuration utility, check the status of your RAID array. It should show both of your new 5TB drives but might still be configured with the old 1.87TB size. If so, you’ll need to reconfigure the RAID array.
  4. Reconfigure RAID Array: Follow the on-screen instructions to reconfigure the RAID array. You’ll likely need to delete the existing RAID array and create a new one using the full capacity of the new drives (see the command sketch after this list). Be extremely careful during this step, as it can result in data loss if not done correctly. Make sure you are working with the correct drives.
  5. Initialize and Rebuild: After reconfiguring the RAID array, the RAID controller may prompt you to initialize and rebuild the array. This process can take some time as it will copy data from one drive to the other to create a mirrored setup. Allow it to complete.
  6. Verify Capacity: Once the RAID array has been reconfigured and rebuilt, check if it recognizes the full capacity of the new 5TB drives. It should now display the correct size.
  7. Restore Data: If everything looks good, you can start restoring your data from your backups onto the newly configured RAID array.

Keep in mind that these steps may vary depending on the specific RAID controller and system configuration you have, so it’s crucial to consult your system’s manual and the RAID controller’s documentation for precise instructions. Additionally, consider seeking assistance from a knowledgeable technician if you’re not comfortable with these procedures, as data loss can occur if not done correctly.
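For what it is worth, the "RAID controller" on this NAS is Linux software RAID (mdadm), so there is no boot-time utility to press a key for; the delete-and-recreate route in steps 4 and 5 would instead be done over SSH, roughly as below. This is only a sketch and it is destructive: it wipes the data volume, so run it only against a verified backup. The device names /dev/sda4 and /dev/sdb4 come from the /proc/mdstat output earlier; the mount point and backup path are assumptions.

mdadm --stop /dev/md2                                                   # stop the existing array
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4  # recreate the mirror on the full partitions
mkfs.ext3 /dev/md2                                                      # new filesystem; destroys old data
mount /dev/md2 /DataVolume                                              # assumed mount point
rsync -a /mnt/usbbackup/ /DataVolume/                                   # restore from the backup (hypothetical path)

One caveat: if the unit's old 32-bit mdadm/kernel really cannot address components above roughly 2TiB (see the size wrap-around discussed above), recreating the array may not get past that limit either, and the full 5TB may simply not be usable on this device.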