How to reset a RAID1 volume's degraded flag? (or expansion status)

Hi all,

First: I posted this with the “My Cloud OS3” tag, but I have a Gen1 Mirror still running OS 2.x, in case that matters.

Here’s my issue:
I wanted to expand the capacity of my 3TB RAID1 Volume.
So I started the procedure from the RAID management screen, using the “expand” option.

Thing is, it prompts you to swap disk #1 for a new, larger one, which I did. But unluckily, while rebuilding from the existing old disk #2, it turned out that drive had a problem I wasn’t aware of
(I didn’t think of running a complete disk check before expanding, since the NAS had never reported anything wrong. Anyway…).

Now I’m stuck in a loop: it seems the expansion process somehow flagged old disk #1 as degraded, and I can’t get rid of that status. I was hoping to restart the whole procedure from scratch, maybe by first switching the positions of the old disks so the good one ends up in slot #2.

I did try swapping their positions, but the “degraded” status follows the drive, so that doesn’t work. It must be flagged directly on the HDD somehow.
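
If I had to guess, that flag lives in the md superblock written on the drive itself. Something like this over SSH should show what’s recorded on each member (assuming mdadm is available on the stock firmware, and that sda2/sdb2 are really the data partitions; those names are just my guess for the Mirror Gen1):

```
# Show the RAID superblock stored on each member partition
mdadm --examine /dev/sda2
mdadm --examine /dev/sdb2
```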

And since the OS has started the expansion procedure, it won’t get out of it.

I even restored a previous configuration backup, one that indeed is not in “expansion mode”, but it still sees disk #1 as degraded…

Does anyone have an idea how I could just get disk #1 back to “healthy”? (SSH command line?)
Obviously I don’t want to “rebuild” disk #1 from the OS (with the restored config that doesn’t know about the expansion), because that would wipe it and try to re-mirror from #2, which has a real problem…

Hope that was clear enough.

OK, I just SSH’d in and looked at the RAID device details, and I don’t really get what it did.
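
For reference, this is roughly how I inspected things (standard mdadm commands, but the device names are from memory, so double-check on your own box):

```
# Overview of all md arrays and their current sync/rebuild state
cat /proc/mdstat

# Full details of each array: state, Array Size, Used Dev Size, members
mdadm --detail /dev/md0
mdadm --detail /dev/md1
```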

It seems md1 (on the right-hand side) is the historical volume (created in 2015), with the Array Size matching the old drive capacity and the Used Dev Size matching the new one (detected on the first expansion attempt).

I have no idea what md0 is (left, created today).

Drive #1 was indeed removed from the md1 array.
So I just re-added it (partition sda2, presuming the partition indexes should match between mirrored drives), and the WD OS saw the volume as healthy again (with the restored config, of course; the “current” one was still in expansion mode, asking me to remove disk #1).
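
In case someone needs the exact step, the re-add was essentially this (assuming the good old drive is sda and the data array is md1; check with mdadm --detail / --examine first, because adding the wrong member would trigger a rebuild from the faulty drive):

```
# Add the removed partition back into the mirror; mdadm may slot it
# straight back in if the metadata still matches, otherwise it rebuilds
# it from the other active member
mdadm --manage /dev/md1 --add /dev/sda2

# Watch the array come back to a clean 2-of-2 state
cat /proc/mdstat
mdadm --detail /dev/md1
```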

Now I can restart the whole process, but this time I’ll run a complete disk test first, and maybe I’ll just hot-swap the faulty disk #2 for one of the newer (and larger) drives, then disk #1, and only then launch the expansion.
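
For that complete test I’ll probably run SMART long self-tests over SSH rather than the dashboard scan; something like this, assuming smartctl is actually shipped with the firmware (I’m not sure it is on every OS 2.x / OS3 build):

```
# Start a full surface self-test on each drive (this takes several hours)
smartctl -t long /dev/sda
smartctl -t long /dev/sdb

# Once finished, check the results and the reallocated/pending sector counters
smartctl -a /dev/sda
smartctl -a /dev/sdb
```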

Maybe there’s a way to get out of the “expansion in progress” state through SSH, but I don’t know how.

Hope this helps if anyone else is also stuck mid-expansion, but the catch is that you need a config backup to fall back on.