Drive replacement leaves RAID status Degraded

I've got rather confused with my PR4100 (4x WD Red 4TB, RAID5).

I had a drive failure (Disk 2) and replaced it with a brand new drive. I left it for a few days and didn't see any issues, but what confuses me is that the RAID status is Degraded, with a red light on Disk 2 on the front of the PR4100.

The screen says the drive status is healthy, the Web UI says drive 2 is healthy like all the rest, and the SMART data is exactly the same for all drives.

I am at a loss as to what is going on. One thing is apparent: it does not want to rebuild the RAID even though auto-rebuild is on. I also tried disabling cloud services and apps.
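For anyone hitting the same wall, the kernel's own view of the arrays is the quickest way to confirm whether a rebuild is actually running (a sketch, run over SSH on the NAS; /dev/md1 being the RAID5 data array is an assumption, confirm it against the mdstat output):

```shell
# List all md arrays; a "recovery = X.X%" line appears under an
# array while it is rebuilding -- if that line is absent, nothing
# is being rebuilt no matter what the Web UI says
cat /proc/mdstat

# Detailed state of the data array (assumed to be /dev/md1 here),
# including which member slots show as "removed"
mdadm -D /dev/md1
```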


After looking into manually rebuilding the RAID5 via SSH, I have a solution.

First, I put the faulty drive back in the PR4100 and removed it from the RAID:

mdadm --manage /dev/md1 -r /dev/sdb2
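One note for anyone following along: mdadm will normally only remove a member that is already marked failed or spare, so on some setups you may need to fail it first (a sketch using the same device names as above):

```shell
# Mark the bay-2 member as failed, then remove it from the array;
# -r alone is refused if the member is still active
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md1 --remove /dev/sdb2
```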

Then I removed the drive from the PR4100, connected the new drive to my PC, and cleaned it with DISKPART on Windows; for some reason it had been initialised with GPT:

> list disk
> select disk 0
> clean
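If you'd rather not move the drive to a Windows box, the same wipe can be done from any Linux machine instead (an alternative sketch; /dev/sdX is a placeholder for whatever the new drive enumerates as, triple-check it before running, as this is destructive):

```shell
# Placeholder -- set this to the real device node of the NEW drive
DISK=/dev/sdX

# Erase all filesystem, RAID and partition-table signatures from
# the disk, equivalent in effect to DISKPART's "clean"
if [ -b "$DISK" ]; then
    wipefs -a "$DISK"
fi
```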

After that I put the new drive in bay 2 of the PR4100 and waited for the system light to stop flashing.

I connected via SSH and ran these commands:

ls /dev/sd* # found sdb which is the drive in bay 2
mdadm -D /dev/md1 # Raid5
mdadm -D /dev/md0 # Raid1

I'm not sure why there are both a RAID1 and a RAID5 on there. Anyway, after that I ran the command

mdadm --manage /dev/md1 -a /dev/sdb

which adds it to the RAID5, and now I am getting this:

I am not sure this is the exact proper way of doing it, but it appears to be working, and in the Web UI I can see that Volume_1 is rebuilding too.
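The rebuild can also be followed from the SSH session rather than the Web UI (a sketch; watch may not be available on every firmware build, in which case re-running cat /proc/mdstat by hand works just as well):

```shell
# Refresh the array status every 5 seconds; the recovery line
# shows the rebuild percentage and an ETA while it is running
watch -n 5 cat /proc/mdstat

# Or ask mdadm directly for the rebuild progress of the data array
mdadm -D /dev/md1 | grep -i 'rebuild status'
```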



A multi-bay Western Digital My Cloud Network Attached Storage device is preconfigured in either RAID 1 or RAID 5 mode to provide redundancy against data loss.

If one of the drives in a RAID array fails, and the RAID volume is not configured for Auto-Rebuild, then a manual RAID rebuild must be performed in order to introduce a replacement drive and bring the RAID volume back to a “healthy” state.

I do know this. The problem is, and I am not the only one who has had this issue, it just did not auto-rebuild even with auto-rebuild enabled, which is why I took the approach above, and it worked. When I said

I am not sure on the exact proper way of doing this

it is because, when doing ls /dev/sd*, it shows sdX sdX1 sdX2 sdX3 sdX4 for drives 1, 3 and 4, but for drive 2 it now shows only sdb with no partitions.

Because it did not want to auto-rebuild, I had to take the drastic action of forcing it to, and the result is:

Again, I have no idea why there is a RAID1 on there, as I only ever configured a RAID5.

@CAProjects, I know it’s been a while, but did this approach end up working for you?
I ran into exactly the same issue and got the RAID5 rebuilt successfully using the same method. However, the red light on the drive stayed on (despite the Web UI and mdadm indicating that everything is good).

After a reboot of the NAS, I was back to square one, with the drive showing as removed and the RAID5 degraded. I rebuilt again (it only took about a minute this time) but decided not to try rebooting again.

Currently rebuilding one more time after setting up the partitions on the new disk to be exactly the same as on the other disks (which also allowed me to get the 2GB RAID1 out of its degraded state)…
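For anyone else landing on this thread, the partition-matching step can be sketched like this (assumptions: /dev/sda is a healthy array member, /dev/sdb is the replacement, and the RAID5 member is partition 2 as on CAProjects' unit; note that sgdisk -R copies TO its first argument, so the order matters):

```shell
# Replicate the GPT layout from a healthy member (sda) onto the
# replacement (sdb), then randomise the new disk's GUIDs so the
# two disks don't share partition identifiers
sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb

# Then add the matching member PARTITION (not the whole disk)
# back into the data array
mdadm --manage /dev/md1 -a /dev/sdb2
```

Adding the partition rather than the bare disk is what lets the layout survive a reboot, since the firmware expects the same sdX1..sdX4 scheme on every drive.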