Two WD Red NAS drives failing on same day

Hi,

I have had two identical WD Red 2TB drives in a Synology DS218j for 30 months. The drives are configured in SHR (Synology Hybrid RAID) so that their content is mirrored on both drives.

Everything had been working fine, and both drives were reported healthy by the Synology diagnostic tools. However, I ran out of space, so I treated myself to two 4TB WD Red drives.

When I tried to swap the 2TB drives for the 4TB ones (as per the instructions here), the first 2TB drive crashed within 2 minutes of starting to copy its content onto the new 4TB drive.

OK. Weird timing, but hey, this is why I have the other drive. One drive failing does not mean the content is lost.

I unmounted it and put the other 2TB drive in, restarting the process. And… you guessed it, the second drive crashed too.

So after 2 1/2 years of perfect functioning and a clean bill of health, both drives failed within 15 minutes of each other.

The odds of this happening seem pretty low. So, did I do something wrong?

Smaller NAS boxes are nowhere near as robust as 6- and 8-disk models.

Definitely check the disks to be sure they are actually dead, as opposed to the NAS unit flaking out.
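
One quick way to do that (a rough sketch, assuming you can attach each drive to a Linux machine with a SATA dock or USB adapter, and that it shows up as /dev/sdX, which is a placeholder here) is to query SMART with smartmontools:

    sudo smartctl -H /dev/sdX            # overall health self-assessment (PASSED/FAILED)
    sudo smartctl -a /dev/sdX            # full SMART attributes and error log
    sudo smartctl -t long /dev/sdX       # start an extended self-test (several hours on a 2TB drive)
    sudo smartctl -l selftest /dev/sdX   # read the self-test results once it finishes

If the drives won't even enumerate there, they really are dead; if SMART comes back clean on both, suspicion shifts back to the DS218j.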

It does seem very unlikely for both drives to fail at the same time like that, unless there was some external factor causing the failures. A few possibilities come to mind:

  • Environmental issue - Was the NAS kept in a very hot environment or subject to vibrations/impacts during the drive swap? Environmental stressors like heat, vibration, or power fluctuations could potentially cause multiple drives to fail.
  • Faulty SATA connections - If the SATA cables or connectors inside the NAS were faulty, it could cause issues when drives are swapped out. I’d inspect the SATA connections and try different cables if possible (the SMART counter check sketched after this list can help separate connection errors from genuine media errors).
  • Accidental damage during drive swap - Any impacts or static discharge to the drives during the hot swap could potentially damage them. Always handle drives very carefully.
  • Faulty NAS controller/backplane - In rare cases, a problem with the NAS itself could damage drives or corrupt data when doing a swap. If issues persist with new drives, this may be suspect.
  • Bad sectors spread across both drives - A rebuild reads every sector, so latent bad sectors that normal use never touched can surface all at once, and the stress of the rebuild could push an already-marginal drive over the edge. Unlikely for both drives at once, but possible in theory.
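
On the connection and bad-sector points above, a few SMART counters are worth singling out because they point at different culprits. A minimal sketch, again assuming the drive is attached to a Linux machine and /dev/sdX is a placeholder (the attribute names are the ones smartctl normally reports):

    sudo smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error_Count'

Roughly: non-zero Reallocated_Sector_Ct, Current_Pending_Sector, or Offline_Uncorrectable values point at the platters themselves (the bad-sector scenario), a climbing UDMA_CRC_Error_Count points at the link between drive and backplane (the connection scenario), and clean counters on a drive the NAS reports as crashed would again make the NAS itself the prime suspect.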

So in summary, I’d first check for environmental issues, inspect the connections, and rule out the NAS hardware. If those are all fine, it may have just been incredibly bad luck, with latent issues on both drives surfacing at the same moment. But for both to fail simultaneously like that, an external factor is the most likely culprit. Let me know if you have any other details!