PR4100 - Bad Disk after only 3 weeks

Hello folks

Two days ago, I powered my PR4100 down to connect a newly bought, secondary power brick. While it was powered off, I connected the new brick, unplugged the old one, and pressed [On] to start it up. All went well, so I knew the new power brick worked. I then reconnected the original power brick to the NAS and unplugged and re-attached the new one without anything strange happening. It didn’t go down, so I could conclude that both of them worked and that nothing happened to the NAS when losing either one of them. I went back to our living room, waited a few minutes, and then confirmed from my MacBook that the NAS was back in business.

Yesterday, when I got back from work, I noticed that the LED for Disk 1 (far left disk LED) was lit red and the display said something like “Drive Fault”. I logged in to the PR4100 main page using a browser and saw that the Device Diagnostics also said “Drive Status: Fault”. I immediately created a ticket with WD support about it. Then nothing happened, and I still haven’t heard anything back from them. I searched and read for a couple of hours and found that someone had a similar issue that was resolved by rebooting the NAS. I did just that and the “problem” went away.

The question is now - what actually happened? If the drive is actually OK, why did it say it was faulty or bad? If there is something wrong with it, how do I get hold of that information so I can ask for a new drive? I set this up on the 25th of January, so it’s not even 3 weeks old.

Any ideas or thoughts around this?

Regards
Robert

I would contact WD again.

HOWEVER - - - - you may find the bureaucracy responds better to an RMA request for the entire PR4100 than for a single drive.

Not sure I would worry too much if the Red Light cleared after rebooting. Could be a fluke. See if it comes back in a week, or if you go through another few power cycles. Hopefully, the data on the PR4100 is backed up? (hint: Always have a backup :wink: )

Update:

I actually got a phone call from WD Support yesterday on my way home from work. I asked the support technician to call me back in 5 minutes, so I would be home where my NAS is located, and she did call me back. She asked me to do a Full Disk Test, which took between 6 and 12 hours to complete (don’t know exactly, because it wasn’t finished when I went to bed but was done when I woke up). Everything was fine according to the test output. I’ve now created a ‘System Report’ and I’m about to send that one to WD Support. I’m not that worried; even with the red LED and the ‘Drive Status: Fault’ message in the display, nothing else indicated that there was a problem with the RAID. It was actually still marked as healthy - not degraded (or whatever it should say when a disk is lost). We’ll see how this turns out. I think this might have been some kind of hiccup when I was disconnecting and connecting the power bricks during the boot phase, but I’m not sure.
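If it acts up again, I might also pull the SMART health directly over SSH instead of relying on the dashboard. A rough sketch of what I have in mind (assuming SSH is enabled and smartctl is actually present on the PR4100 firmware, which I haven’t verified; the /dev/sda..sdd device names for the four bays are also an assumption):

```python
# Rough sketch: ask smartctl for the overall SMART health of each drive bay.
# Assumptions (not verified on the PR4100): smartctl is installed on the
# firmware, and the four bays show up as /dev/sda .. /dev/sdd.
import subprocess

def smart_health(device):
    """Return smartctl's overall health line for one device, or the raw output on failure."""
    try:
        result = subprocess.run(
            ["smartctl", "-H", device],
            capture_output=True, text=True, check=False,
        )
    except FileNotFoundError:
        return "smartctl not found on this firmware"
    for line in result.stdout.splitlines():
        if "overall-health" in line or "SMART Health Status" in line:
            return line.strip()
    return result.stdout.strip() or result.stderr.strip()

if __name__ == "__main__":
    for dev in ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]:
        print(dev, "->", smart_health(dev))
```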

A question about that, by the way. In /etc on the NAS there’s a file called power_status. In my case it holds the value 3. Is this the number representing the binary 11, meaning that I’ve got both power sockets on the NAS in use? Just a thought. Does anyone know what it represents?
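If it really is a bitmask, a tiny sketch like this would decode it (purely a guess on my part: I’m assuming the file holds a plain decimal number where bit 0 means power socket 1 is in use and bit 1 means power socket 2 is in use):

```python
# Hypothetical decoder for /etc/power_status on the PR4100.
# Assumption (not confirmed): the file contains a decimal number whose
# bit 0 = "power input 1 present" and bit 1 = "power input 2 present".

def decode_power_status(path="/etc/power_status"):
    with open(path) as f:
        value = int(f.read().strip())
    psu1 = bool(value & 0b01)  # bit 0
    psu2 = bool(value & 0b10)  # bit 1
    return value, psu1, psu2

if __name__ == "__main__":
    value, psu1, psu2 = decode_power_status()
    print(f"raw value: {value} (binary {value:b})")
    print(f"power input 1 present: {psu1}")
    print(f"power input 2 present: {psu2}")
```

With both bricks connected, the value 3 (binary 11) would then mean both inputs are seen as live - but again, that reading of the file is just my guess.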

// Robert