PR4100 Apparent Drive Failure - Upgrade Options

My PR4100 (four 4TB WD40EFRX drives) recently showed a red light on one of the drives, and the dashboard confirmed one bad drive. No data was lost thanks to RAID 5, but I shut the unit down immediately to figure out what to do. WD tech support pointed me at the drive compatibility matrix, which looks out of date to me: the WD40EFRX appears to be obsolete, although it is still possible to buy one. They did eventually confirm that the WD40EFPX is a suitable replacement, even though it isn't listed in the compatibility matrix.

In the meantime, I powered the PR4100 back on and all drives now indicate normal. Do I have a drive failure or not? Given that this unit is 7 years old, I imagine all the drives are nearing end of life, which made me think a better option might be to replace them all with higher-capacity drives; I can get recommended models from the compatibility matrix. I found a tech note that explains how to replace all drives and expand the capacity, but it indicates the whole process could take a week, which seems a bit extreme. Has anyone done this? Or should I just move all data off the NAS, insert 4 new drives, and start from square one, or does that create more problems than it solves? Suggestions?

I’ve had this happen several times with my EX4. NAS runs fine, then I suddenly spot a red light on drive #4. Shut down, pull #4, blow some air into the SATA connector, wipe the drive connector, reinsert, boot, and all is OK. It would run for months before another red light, and I’d do the same again. I keep a spare drive on hand in case one goes down for the 10 count. I shut down before pulling the drive so the EX4 doesn’t start rebuilding the RAID, which could take a day or two even though it’s still the same drive.

Which brings me to your upgrade. You can back up everything on it, including the configuration (from the dashboard), to another location; then shut down, replace all the drives, boot up, configure the array, and restore all the data. The NAS will have to reindex all of your media, and you may need to redo some settings (I’m not sure all of them are restored from the config file). Of course, this assumes you have another storage device big enough to hold the contents of your PR4100. So you would back up everything to USB; how long that takes depends on how much data is in each share. Then set up the PR4100 with the new drives and restore from USB. Again, this could take a long time.
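The back-up-then-restore flow above can be sketched in shell. Every path here is a hypothetical placeholder (temp directories stand in for the shares, the USB drive, and the rebuilt array), and on the real NAS you’d use the Dashboard’s backup jobs or rsync rather than plain cp:

```shell
#!/bin/sh
# Sketch of the backup-everything-first approach. Paths are placeholders --
# on a real PR4100 you'd point these at your shares and the USB mount.
set -e

SHARES=$(mktemp -d)     # stands in for the shares on the NAS
USB=$(mktemp -d)        # stands in for the external USB backup drive
RESTORED=$(mktemp -d)   # stands in for the rebuilt, larger array

# Some demo data in two example shares
mkdir -p "$SHARES/Public" "$SHARES/Media"
echo "family photos" > "$SHARES/Public/photos.txt"
echo "raw video"     > "$SHARES/Media/clip01.txt"

# 1) Back up every share to the USB drive
cp -R "$SHARES/." "$USB/"

# 2) (shut down, swap all drives, rebuild the array, restore the config...)

# 3) Restore from USB onto the new volume
cp -R "$USB/." "$RESTORED/"

# Quick sanity check: the restored tree matches the original
diff -r "$SHARES" "$RESTORED" && echo "restore matches original"
```

The same shape applies however the copies are actually made; the point is that steps 1 and 3 each move the full data set, which is why the tech note quotes such a long elapsed time.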

The other option is to swap drives one at a time. With the PR4100 running, simply pull the first drive (or the one that showed the red light) and swap in a new, larger drive. This will trigger a rebuild. Let it run; it could take a day or two depending on how much data you have. Once it completes, swap in drive #2 and wait. Rinse and repeat two more times until everything completes. Then go into the dashboard and expand the volume to use the new capacity.
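If SSH is enabled on the unit, you can also watch each rebuild’s progress in /proc/mdstat, since the My Cloud firmware is Linux-based and uses md RAID under the hood. The snippet below parses a captured, purely illustrative snapshot rather than live output (the block counts and device names are made up):

```shell
#!/bin/sh
# Illustrative /proc/mdstat snapshot during a RAID 5 rebuild. On the live
# NAS you'd run:  cat /proc/mdstat   over SSH instead of using this string.
SNAPSHOT='md1 : active raid5 sda2[4] sdb2[1] sdc2[2] sdd2[3]
      11720662272 blocks super 1.0 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
      [=====>...............]  recovery = 28.5% (1113727616/3906887424) finish=412.5min speed=112830K/sec'

# Pull out just the recovery percentage
echo "$SNAPSHOT" | grep -o 'recovery = [0-9.]*%'
```

The "finish=" field gives md’s own estimate of time remaining, which is a more honest answer to "a day or two?" than guessing from drive size alone.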

So which one? Can’t really say for sure. The one-at-a-time swap is easier since there’s no external drive or set of backup jobs involved; all you do is swap drives. All your data stays available during the rebuilds, just a bit slower to respond. You can also start as soon as you get your new drives.

The backup-and-swap-all approach will give you a backup (you should have one anyway, refreshed on a schedule), but it requires an external USB drive of sufficient capacity, setting up and testing the backup jobs, then running them. You won’t be able to swap the drives out until you complete and verify the backups. Then you swap drives, initialize the RAID, restore the config, and verify the settings are correct before you start restoring the data. So it’s a bit more effort on your part. It also creates some down time for file access, since the shares will be empty until the restore is complete. On the other hand, you’d end up with a set of scheduled backups and a backup location, adding another layer of protection.
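"Verify the backups" can be as simple as checksumming every file on both sides and comparing. A minimal sketch, with throwaway temp directories standing in for a share and its USB copy (on the NAS you’d substitute the real mount points):

```shell
#!/bin/sh
# Compare a source tree against its backup by checksumming every file.
# Temp dirs below are placeholders for a share and the USB backup of it.
set -e
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "important project file" > "$SRC/doc.txt"
cp "$SRC/doc.txt" "$DST/doc.txt"       # the "backup"

# Checksum every file under a directory, sorted for a stable comparison
checksums() { (cd "$1" && find . -type f -exec md5sum {} + | sort); }

if [ "$(checksums "$SRC")" = "$(checksums "$DST")" ]; then
  echo "backup verified"
else
  echo "MISMATCH - do not wipe the NAS yet" >&2
fi
```

Only after that comparison comes back clean would you pull the old drives.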

FWIW, I have two NASes. My EX4 is my primary, and I have another, older unit from a different vendor. Everything goes to the EX4; the other unit simply backs it up. I also have a large USB drive connected to my router (so really another NAS), and the older NAS backs up to that USB drive.

Bear in mind I’m only providing an overview of the two processes, not detailed steps.

Also make sure you get drives suitable for NAS usage. These should be CMR drives, not SMR: WD Red Plus/Pro or Seagate IronWolf/Pro. Plain WD Red and many Seagate Barracuda models are SMR, which performs poorly during RAID rebuilds. You can web search CMR vs SMR to better understand the differences.
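As a quick illustration of that CMR/SMR split, here’s a hand-made lookup covering only the families named above. It is illustrative, not authoritative; always confirm the exact model number against the vendor’s own CMR/SMR lists before buying:

```shell
#!/bin/sh
# Hand-made lookup of recording technology for the drive families mentioned
# above. Illustrative only -- verify the exact model number with the vendor.
recording_tech() {
  case "$1" in
    "WD Red Plus"|"WD Red Pro"|"Seagate IronWolf"|"Seagate IronWolf Pro")
      echo "CMR - suitable for NAS/RAID" ;;
    "WD Red"|"Seagate Barracuda")
      echo "SMR in many models - avoid for RAID rebuilds" ;;
    *)
      echo "unknown - check the vendor's spec sheet" ;;
  esac
}

recording_tech "WD Red Plus"
recording_tech "Seagate Barracuda"
```

The reason it matters: an SMR drive’s slow sustained-write behavior can stretch a RAID rebuild from a day or two into much longer, and some arrays will even drop the drive mid-rebuild.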

Thanks for the detailed response, especially on the drive recording type. I did nothing to the failed drive; I just powered the NAS down for a few days, powered it back up, and it was happy again. I like the idea of blowing some air through it and reseating; I’ll do that if it happens again. All critical NAS data is backed up on other media, but I have a lot of old raw video files whose loss would not be a big deal (likely stuff that should have been trashed at the end of each project). I don’t have enough online storage to back up everything, so I’m inclined to replace one drive at a time. Time to shop for some CMR drives.