Hello to all
I’ve just had a slightly fragile moment with my PR4100. It’s been running for several years without a hiccup. It has 4x identical WDC WD40EFRX 4TB Red drives in it, running as RAID 10. The logic is that I fully expect one drive to go at some point, so I keep at least one identical WD40EFRX on hand as a spare, ready to pop in and rebuild the RAID array as and when the time comes.
Yesterday, I had a red light on Drive 1. Looking at the control panel, it reported that Drive 1 had failed its SMART tests, and I thought the day had arrived when my spare drive would need to go in.
However, before doing that, I did a full power-down and reboot of the PR4100. Oddly, once the boot sequence completed, the light on Drive 1 went blue again, and the control panel is now reporting that all is well with all four drives and the RAID array.
I hadn’t touched the drive in any way before this happened, other than accessing the NAS over the network to get at data as usual. It’s used as a media server, running PLEX to our devices all around the house, and it has been doing that very well. I appreciate that the NAS probably gets more traffic, and therefore more drive activity, than if it were being used just for standard file storage.
Is there anything I can do or check to get to the bottom of why Drive 1 went bad temporarily? I’m not especially worried about it, and in general I’m an “if it ain’t broke, don’t fix it” sort of person. But I’d like to get a better handle on whether the drive in bay 1 is starting to fail, in which case I can reasonably expect more problems from it in the coming weeks or months, or whether this was just a strange glitch that may never be explained.
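If it helps, I’m comfortable enabling SSH, and this is roughly what I was thinking of running to read the SMART attributes for myself. It’s only a rough sketch: I’m assuming smartctl is available on the PR4100 firmware (or that I pull the drive and attach it to a Linux box), and I’m guessing that /dev/sda corresponds to Drive 1, so I’d need to confirm which device maps to which bay first.

```python
#!/usr/bin/env python3
"""Rough sketch: pull a few key SMART attributes for one drive via smartctl.

Assumptions (not confirmed): smartctl is present on the device, and /dev/sda
is the drive in bay 1. Adjust DEVICE once the bay-to-device mapping is known.
"""

import subprocess

DEVICE = "/dev/sda"  # assumption: the device node for Drive 1

# Attributes that most often hint at a genuinely failing disk.
# UDMA_CRC_Error_Count tends to point at cabling/backplane rather than the platters.
WATCH = {
    "Reallocated_Sector_Ct",
    "Current_Pending_Sector",
    "Offline_Uncorrectable",
    "UDMA_CRC_Error_Count",
}


def read_smart_attributes(device: str) -> str:
    """Return the raw output of `smartctl -A` for the given device."""
    result = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True,
        text=True,
        check=False,  # smartctl uses non-zero exit codes for warnings, not only failures
    )
    return result.stdout


def main() -> None:
    for line in read_smart_attributes(DEVICE).splitlines():
        if any(attr in line for attr in WATCH):
            print(line)


if __name__ == "__main__":
    main()
```

From what I’ve read, non-zero raw values on Reallocated_Sector_Ct or Current_Pending_Sector would suggest the drive really is on its way out, whereas a climbing UDMA_CRC_Error_Count points more at the bay or connection than the disk itself, but I’d welcome correction on that.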
Many thanks for any responses.