RAID1 expansion 4TB -> 12TB

Hi,

I’ve just replaced the RAID1 4TB drives with WD Purple 12TB drives, one at a time with a rebuild in between, and the auto-rebuild rebuilt them at 4TB without failure.
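
For reference, the box’s auto-rebuild handles all of this itself; on a generic Linux system the manual mdadm equivalent of a one-at-a-time swap would look roughly like this (a sketch, not what I actually ran):

mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2   # retire the old disk's member
# physically swap the drive and partition it, then:
mdadm /dev/md1 --add /dev/sda2                       # add it back and let the mirror rebuild
cat /proc/mdstat                                     # watch the rebuild progress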

In Change RAID Mode I selected the expansion option, however it still only saw 4TB (and now it’s doing a pointless sync).
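
A quick way to sanity-check what size md actually reports (a sketch, I didn’t capture this at the time):

mdadm --detail /dev/md1 | grep -i 'array size'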

On the disks I can see it has put the extra 8TB into sd[ab]3:

root@MyCloudEX2Ultra root # cat /proc/partitions 
major minor  #blocks  name

   7        0     101904 loop0
   7        1  104857600 loop1
   7        2    2097152 loop2
  31        0       5120 mtdblock0
  31        1       5120 mtdblock1
  31        2       5120 mtdblock2
  31        3     189440 mtdblock3
  31        4      15360 mtdblock4
  31        5      20480 mtdblock5
  31        6      10240 mtdblock6
  31        7      10240 mtdblock7
   8        0 11718885376 sda
   8        1    2097152 sda1
   8        2 3907226562 sda2
   8        3 7808511983 sda3
   8        4    1048576 sda4
   8       16 11718885376 sdb
   8       17    2097152 sdb1
   8       18 3902822400 sdb2
   8       19 7812916207 sdb3
   8       20    1048576 sdb4
   9        0    2097088 md0
   9        1 3902822264 md1
 253        0  104857600 dm-0
root@MyCloudEX2Ultra root # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md1 : active raid1 sda2[2] sdb2[3]
      3902822264 blocks super 1.0 [2/1] [_U]
      [==>..................]  recovery = 12.9% (505546432/3902822264) finish=275.3min speed=205606K/sec
      bitmap: 1/1 pages [32KB], 262144KB chunk

md0 : active raid1 sda1[0] sdb1[1]
      2097088 blocks [2/2] [UU]
      bitmap: 0/16 pages [0KB], 8KB chunk

unused devices: <none>
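
To put numbers on that (the /proc/partitions figures are 1K blocks): sd[ab]2 at ~3907226562 blocks is the original ~4TB data partition that md1 sits on, sd[ab]3 at ~7808511983 blocks is the new ~8TB of space, and the whole disk at 11718885376 blocks is the full 12TB.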

From this point it seemed it would be easy to delete sd[ab][23], create new larger partitions covering the whole space, and recreate the RAID on them.

For those in this situation, gdisk is the tool you need (an fdisk alternative with the same command-line interface, which strikes me as at least 30 years old (p, d, n, w)).
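
The keystrokes per disk boil down to something like this (a rough sketch, not a verbatim transcript):

gdisk /dev/sda
  p    # print the current table
  d    # delete partition 3 (the ~8TB sd[ab]3)
  d    # delete partition 2 (the old ~4TB sd[ab]2)
  n    # new partition 2, accepting the default first/last sectors to take all free space
  t    # set partition 2's type code back to 0700 (Microsoft basic data)
  w    # write the table and exit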

So I deleted sd[ab][23] and created a new sd[ab]2 spanning the complete size (accepting the default start/end sector values). I manually set the partition type code back to what it was previously, so it looked like:

Command (? for help): p
Disk /dev/sda: 23437770752 sectors, 10.9 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 1F584082-BA29-49F7-90AF-8D7C56E8501A
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 23437770718
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         4196351   2.0 GiB     0700  Microsoft basic data
   2         6293504     23437770718   10.9 TiB    0700  Microsoft basic data
   4         4196352         6293503   1024.0 MiB  0700  Microsoft basic data

After this, /proc/partitions was still showing the old sizes.
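
In theory you can ask the kernel to re-read the partition table without a reboot (blockdev/partprobe are the usual tools, though I haven’t checked what the EX2 Ultra firmware actually ships), but with md1 still holding sd[ab]2 open it will most likely just refuse with a busy error:

blockdev --rereadpt /dev/sda
blockdev --rereadpt /dev/sdb
# or, if present: partprobe /dev/sda /dev/sdb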

When the current resync completes I plan on rebooting to allow the new sizes to be picked up, then running mdadm --grow /dev/md1 --size=max to enlarge the RAID, and then, if it doesn’t happen automatically, resize2fs /dev/md1 to increase the filesystem size.
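
In other words, roughly (a sketch of the plan, not run yet):

cat /proc/partitions                  # confirm the kernel now sees the ~12TB sd[ab]2
mdadm --grow /dev/md1 --size=max      # grow the mirror to fill the partitions
resize2fs /dev/md1                    # grow the ext filesystem to fill the array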

(This originally started as a “where is fdisk” post, but I resolved that with gdisk and carried on as a howto.) I definitely would have preferred the 12TB to be autodetected/grown automatically, however.

[ 42.205213] md: md1 stopped.
[ 42.215647] md: bind
[ 42.218544] md/raid1:md1: active with 0 out of 2 mirrors
[ 42.222593] md1: failed to create bitmap (-5)
[ 42.229936] md: md1 stopped.

Well, it seems I did something incorrect.

sdb3 appears to still be there. I deleted that and sdb2, then recreated sdb2 at the same size as sda2.
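
Again with gdisk, this time reusing sda2’s start/end sectors from the table above (rough sketch of the keystrokes):

gdisk /dev/sdb
  d    # delete partition 3
  d    # delete partition 2
  n    # new partition 2: first sector 6293504, last sector 23437770718, type 0700
  w    # write the table and exit

Then I recreated the array: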

mdadm --create /dev/md1 /dev/sda2 /dev/sdb2 --level=1 --raid-devices=2
mdadm: Defaulting to version 1.-1 metadata
mdadm: /dev/sda2 appears to contain an ext2fs file system
    size=-392162304K  mtime=Mon Sep  7 08:04:11 2020
mdadm: /dev/sdb2 appears to contain an ext2fs file system
    size=-392162304K  mtime=Mon Sep  7 08:04:11 2020
Continue creating array? y
mdadm: array /dev/md1 started.
root@MyCloudEX2Ultra root # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md1 : active raid1 sdb2[1] sda2[0]
      11715738468 blocks super 1.0 [2/2] [UU]
      [>....................]  resync =  0.0% (1000448/11715738468) finish=975.7min speed=200089K/sec
      
md0 : active raid1 sdb1[1] sda1[0]
      2097088 blocks [2/2] [UU]
      bitmap: 0/16 pages [0KB], 8KB chunk