Mine went something like the below, though in my case it ended up not working because I had a SECOND dud disk (2 out of 4!!). I also had to prep the spare disk first, and I suspect you'll need to do the same: create partitions on the new disk that MATCH the existing disks, otherwise you won't be able to add it to the array.
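One way to get matching partitions is to replay a healthy member's partition table onto the new disk with sfdisk. A sketch only: /dev/sda (good member) and /dev/sdd (replacement) are assumptions, double-check the device names on your box, because the replay step overwrites the target's partition table.

```shell
# ASSUMPTION: /dev/sda is a known-good array member, /dev/sdd is the new disk.
# sfdisk -d dumps a partition table in a format sfdisk can replay.
sfdisk -d /dev/sda > sda.parts   # dump the good disk's layout to a file
cat sda.parts                    # eyeball it before applying
sfdisk /dev/sdd < sda.parts      # DESTRUCTIVE: write the same layout to the new disk
sfdisk -l /dev/sdd               # confirm the partitions now match
```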
See how you go with this (when I paste this, the fsck command may get garbled), and note that the recovery % will increase and it will take a long time:
~ $ mdadm --assemble -f /dev/md2 /dev/sd[abcd]4
mdadm: forcing event count in /dev/sdd4(3) from 4283848 upto 4283850
mdadm: clearing FAULTY flag for device 3 in /dev/md2 for /dev/sdd4
mdadm: /dev/md2 has been started with 3 drives (out of 4) and 1 spare.
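Before forcing the assemble it's worth comparing the event counts on each member yourself; -f is only needed when they disagree, as they did for sdd4 above. A quick check (standard mdadm, same device names as above):

```shell
# Print each member's device name and event counter. Members that are only
# slightly behind (like sdd4's 4283848 vs 4283850 above) are the ones mdadm
# can usually be forced to take back cleanly.
mdadm --examine /dev/sd[abcd]4 | grep -E '^/dev/|Events'
```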
~ $ pvcreate /dev/md2
Incorrect metadata area header checksum
Can't initialize physical volume "/dev/md2" of volume group "lvmr" without -ff
~ $ vgcreate lvmr /dev/md2
/dev/lvmr: already exists in filesystem
~ $ lvcreate -l 714329 lvmr -n lvm0
Incorrect metadata area header checksum
Logical volume "lvm0" already exists in volume group "lvmr"
~ $ fsck.ext3 /dev/lvmr/lvm0
e2fsck 1.38 (30-Jun-2005)
fsck.ext3: while trying to open /dev/lvmr/lvm0
Possibly non-existent or swap device?
fsck.ext3:
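Looking back, the pvcreate/vgcreate/lvcreate attempts weren't needed, since the LVM metadata had survived on the array (hence all the "already exists" complaints), and fsck then failed simply because the logical volume wasn't active. If you hit the same wall, activating the existing volume group (standard LVM commands, not something I ran in the paste above) should make the /dev/lvmr/lvm0 node appear:

```shell
vgscan                     # re-read LVM metadata from the assembled array
vgchange -ay lvmr          # activate the lvmr VG so /dev/lvmr/lvm0 appears
fsck.ext3 /dev/lvmr/lvm0   # then retry the filesystem check
```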
~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5]
md1 : active raid1 sdd2[3] sdc2[2] sdb2[1] sda2[0]
1044160 blocks [4/4] [UUUU]
md2 : active raid5 sda4[0] sdb4[4] sdd4[3] sdc4[2]
5854981248 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU]
[>....................]  recovery =  0.0% (1562368/1951660416) finish=1826.6min speed=17792K/sec
md0 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
208768 blocks [4/4] [UUUU]
unused devices: <none>
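The [4/3] [U_UU] notation above is the quick health check: 4 slots, 3 active, with the underscore marking the missing member. If you want to pull those counts out in a script, something like this works (plain shell, using the md2 line above as sample input):

```shell
# Sample status line copied from the mdstat output above.
line='5854981248 blocks level 5, 64k chunk, algorithm 2 [4/3] [U_UU]'

# Extract "total active" from the [total/active] field.
counts=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9]*\)\/\([0-9]*\)\].*/\1 \2/p')
echo "$counts"   # prints "4 3"; the array is degraded whenever the two differ
```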
~ $ mdadm -D /dev/md2
/dev/md2:
Version : 00.90.01
Creation Time : Fri Apr 3 13:54:17 2009
Raid Level : raid5
Array Size : 5854981248 (5583.75 GiB 5995.50 GB)
Device Size : 1951660416 (1861.25 GiB 1998.50 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Apr 28 01:07:51 2011
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 0% complete
UUID : b2c561ae:9acfcff0:2594d787:fb4eb047
Events : 0.4283850
Number Major Minor RaidDevice State
0 8 4 0 active sync /dev/sda4
4 8 20 1 spare rebuilding /dev/sdb4
2 8 36 2 active sync /dev/sdc4
3 8 52 3 active sync /dev/sdd4
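Rather than re-running cat /proc/mdstat by hand while the rebuild crawls along, you can let mdadm block until it finishes (again a sketch, adjust the device name to your array):

```shell
mdadm --wait /dev/md2              # returns once the resync/recovery completes
mdadm -D /dev/md2 | grep 'State :' # should now report clean rather than degraded
```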