Hi guys, I need some serious help getting my PR4100 and my files back.
I have a five-year-old PR4100 with 4 x 2TB WD Reds. It has always been configured with the default RAID1 setting. A week ago, the PR4100 showed a red LED on the first drive. My files were still accessible and everything was working fine. When I ran a quick drive test, the dashboard showed that drive 3 was failing. This seemed a bit odd to me, so I restarted the NAS.
After the restart, all drives show a red LED on the system. My files and shares aren't accessible anymore, and it seems as if the drives aren't mounted. The dashboard says I need to configure a RAID volume again, which I obviously don't want to do, as that would wipe the drives. I normally had RAID auto-rebuild enabled, but that option is now missing from the dashboard (see screenshot).
Running a full test again, the system said drive 3 should be replaced. Today I received a new, identical 2TB WD Red HDD and replaced drive 3. I somehow hoped this would magically start rebuilding the RAID, but the NAS just does… nothing. Running another drive test, all drives appear healthy, although all four drives still show red LEDs on the system.
I can still log in to the dashboard and have full access over SSH, but I can't access my files in any way. I think it has something to do with the RAID configuration or profile being missing.
I haven't read or written any files to the HDDs since the errors, so I'm hoping none of my very personal files on the NAS are damaged or gone. But I have no idea how to go forward. Manually start rebuilding the RAID from SSH? Retrieve or recreate the missing RAID profile? Any help is very much appreciated!
root@MyCloudPR4100 ~ # cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md0 : active raid1 sdd1[2] sdb1[1] sda1[0]
2094080 blocks super 1.2 [4/3] [UUU_]
bitmap: 0/1 pages [0KB], 65536KB chunk
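If I'm reading this right, md0 is just the small system array (about 2 GB) and it's only missing one member, but there is no md device at all for the actual data volume. Would the following be a safe, read-only way to double-check what the on-disk superblocks say before I try anything else? I'm only guessing these are the right commands:

# Current state of the one array that did assemble
mdadm --detail /dev/md0
# List every array the on-disk superblocks describe, without assembling anything
mdadm --examine --scan --verbose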
Full kernel info and the output of mdadm --examine /dev/sd* are in this pastebin: (Pastebin.com link)
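In case it matters, this is roughly what I was considering trying over SSH to get at the data without writing anything, but I haven't run it because I don't want to make things worse. The device name md1 and the data partition number 2 are just my guesses, so please tell me if this is the wrong approach:

# Try to assemble the data array from the presumed data partitions, read-only
mdadm --assemble --readonly /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
# If it assembles, mount it read-only somewhere temporary and check the files
mkdir -p /mnt/recovery
mount -o ro /dev/md1 /mnt/recovery
ls /mnt/recovery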