I recently moved my main NAS (PR4100 with 4x WDC WD100EFAX-68LHPN0 drives) from the last OS3 version to OS 5.14.105.
Since the update, two different drives have dropped out of the array and been reported as failed. However, if I reboot and test the drives, each one then tests as good. If I rebuild the volume, the “failed” drives rejoin the RAID array quickly and Volume_1’s status returns to “Good”.
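(For anyone following along: the array state can also be checked over ssh with commands along these lines before rebuilding. The md device and member partition names below are only examples from a generic Linux md setup and may well be different on an OS 5 unit.)

root@PR4100 ~ # cat /proc/mdstat
root@PR4100 ~ # mdadm --detail /dev/md1      # /dev/md1 is a guess at the data array; look at the State and Failed Devices fields
root@PR4100 ~ # mdadm /dev/md1 --re-add /dev/sdb2      # example only: re-add a dropped member partition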
After the rebuild, both the Quick and Full disk tests report everything as fine. Finally, ssh-ing into the system and inspecting the SMART records of all drives shows 0 reallocated sectors, and the SMART attributes are almost exactly the same for each drive, even for the ones that were recently reported as bad. For example:
root@PR4100 ~ # smartctl -A /dev/sda
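Something like the loop below compares the reallocation-related attributes across all four bays in one go; I’m assuming the disks show up as /dev/sda through /dev/sdd, which may not match your bay order:

root@PR4100 ~ # for d in /dev/sd[abcd]; do echo "== $d =="; smartctl -A "$d" | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'; done

Zeroes across those three attributes on every drive is usually a good sign that the media itself is healthy.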
Given this happened shortly after updating from OS3, this has me suspicious of the new firmware rather than the drives. Any other things I should test?
@richardbrockie, Please clear all notifications from your My Cloud device and perform a My Cloud System Test: “This test will review the health of the hardware on the device, such as the hard drives, fan, system clock, and device temperature. The test may take a few minutes as it verifies the unit’s memory, temperature, fan, and more.”
I have the same problem with my PR4100. Since I updated to the new firmware 5.14.105, I have had two different drives fail in my unit (both in Bay 3). But after running all the system tests, those drives have come back Good and are working great. I even put those two drives in a different NAS unit and they work great there. I just sent a tech support note to WD. More to follow, but this could be an issue with their new firmware.
It happened again this morning and the drive is still fine - I think it’s clearly the new firmware.
As it seems to take about a week for this to occur, I’ll try to avoid it by using the power schedule to add a nightly shutdown, keeping the uptime to at most a day. I’ll report back in a while…
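(I’m setting this up through the dashboard’s power schedule, but for anyone who prefers the shell, a nightly reboot from cron would also keep the uptime down. This is only a sketch: I haven’t checked whether crontab edits survive a reboot or firmware update on OS 5, and the path to reboot may differ on the PR4100.)

root@PR4100 ~ # (crontab -l 2>/dev/null; echo "0 3 * * * /sbin/reboot") | crontab -      # reboot every night at 03:00
root@PR4100 ~ # crontab -l      # confirm the entry was added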