6 Drives All Gone Bad At Once?

Thanks for taking the time to look at this!

I recently purchased 6 WD1002F9YZ drives and set them up in a 6-disk Storage Spaces pool.

This was a mirrored space, essentially RAID 10 using all 6 drives.

For some reason, I’m now having an issue where the drives show as 0 MB in POST and are no longer visible to Windows 10 64-bit.

These drives are brand new and worked like a champ for about 1 to 2 weeks, but after I restarted the system they no longer show up in the OS, not even as RAW or unformatted space.

Device Manager lists them as an "Unknown Device", and they do not show in the Disk Management tool.

I contacted Western Digital support and explained that I have tried other SATA power and data cables, as well as other drives.

I do not have another desktop/PC to test them in, but I do have other drives (WD and Seagate) that do work on the ICH10R ports on this motherboard.

I have tried enabling/disabling various BIOS functions/tools that may affect the drives, including Stagger Spin-up and Hot-Plug capability.

Western Digital support suggested to try another PC as there’s “a very unlikely possibility that all 6 drives are DOA at once.”

TL;DR: I have 6 brand new enterprise drives that no longer work, and all 6 died at once, so I’m up-creek…

These are fairly new drives, there is no firmware update I’m aware of, and WDTools does not detect the drives. What should I do? (Trying to avoid an RMA, as there may not be an issue with the drives… I hope.)

Hi and welcome to the WD community

Have you tried a different SATA port to see if each drive works properly? It is highly unlikely that you could have 6 new drives fail, but I would also advise testing on a different computer just to make sure; that is the best way to determine if the drives are defective.

Thanks for the reply ArMak!

Yes, I have tried all 6 of the available SATA ports on this controller, as well as the 2 ports on the other SATA controller.

I will try to find someone with a SATA motherboard I can test them on, but you’re right, all 6 failing at once does seem a bit excessive.

You might want to try another OS; using the Windows 10 Tech Preview might not be the best way to test/troubleshoot 6 new drives.

Use a GParted boot CD or something similar that can check the partitions and details of the drives with some basic diagnostics. You can also boot another diagnostics CD to verify that your SATA ports work.
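For example, from the GParted live CD (or any Linux live environment) you could run something along these lines as a first sanity check. The device name /dev/sda is just an example, and smartctl comes from the smartmontools package if it isn't already on the disc:

# List the block devices the kernel actually detected
lsblk -o NAME,SIZE,MODEL,SERIAL

# See what the SATA link negotiation reported for each port
dmesg | grep -i -E 'ata[0-9]+|sata'

# Query identity info from one drive (replace /dev/sda as appropriate)
sudo smartctl -i /dev/sda

If the drives don't even appear in lsblk or the kernel log, the problem is below the OS/partition level, which would point away from Windows itself.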

If you do try another board, it will need to have the same chipset/SATA controller to read your existing drive array.

In theory your setup is good, but Windows 10 is still in beta/RC, and neither ReFS nor the 6 TB setup is a mature technology, so overall this seems like risky business to me. I would wait for the official release of Windows 10 and go for the WD Red Pro 4 TB disks.

I do apologize for the late reply on this, and it is still an ongoing/slightly-fixed issue.

I did find out that one of the 6 drives is bad, and I discovered something else…

This is not an issue with the OS; it’s possibly the motherboard or chipset. What I did discover is that if I change the BIOS option from AHCI to RAID, the drives work!

I think it’s possible this is a drive-related issue, as they would not spin up or show any signs of activity in AHCI mode, as in my first post. However, once set to RAID, I could hear each drive starting up in sequence before the Intel "RAID" controller option (Ctrl-I) became available.

I have attached another photo I caught of the drives as POST was checking them. I think one of the 6 was timing out all the other drives, but even after removing the one drive that showed trouble, setting the BIOS to AHCI would still cause lockups during POST.

Take a lookie and let me know any other thoughts!

The drives appear to be configured to Power Up In Standby (PUIS). This setting can either be programmed by way of the drive’s PM2 jumper, or it may be a user configurable firmware setting.

PUIS can be disabled with a tool such as HDAT2, which must be launched with the /W option to wake up the drives. Another possible solution is "sudo hdparm -s0 /dev/sdX" (substituting the actual device name).
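For example, from a Linux live CD, something like the following should show whether PUIS is enabled on a given drive and then clear it. Replace /dev/sdX with the actual device; note that hdparm's man page marks the -s flag as dangerous, so double-check which drive you are pointing it at:

# Show whether the Power-Up In Standby feature set is supported/enabled
sudo hdparm -I /dev/sdX | grep -i "power-up in standby"

# Report the drive's current power state (standby vs. active/idle)
sudo hdparm -C /dev/sdX

# Disable PUIS so the drive spins up as soon as power is applied
sudo hdparm -s0 /dev/sdX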

http://answers.microsoft.com/en-us/windows/forum/windows_tp-windows_install/build-9879-hard-drive-disaster-read-this/7da0cd97-3ebc-4eb3-ae23-9873672e4e6d?page=1

It would be quite curious for these drives to have PUIS enabled when they are "performance" datacenter drives.

Is there any more information on this regarding this particular model (WD1002F9YZ) of drive?

On the contrary, PUIS would be desirable for a large array. This would allow the drives to be powered up in sequence, thereby avoiding a massive start-up load on the PSU. In fact some SCSI drives have jumpers that configure a 10-second (?) spin-up delay for each SCSI ID. For example, a drive with a SCSI ID of 2 will wait 20 seconds before spinning up while a drive with a SCSI ID of 3 will wait 30 seconds.

A tool such as CrystalDiskInfo will tell you if PUIS is supported and enabled.

You are right that this could be great for a massive array where there is no option for staggered spin-up on the chipset or RAID card.

I will have to do more research and see if I can disable PUIS, since I can set staggered spin-up within the chipset options and would rather have it handled through the board than through the drives.
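If I do end up clearing it on the drive side, I’m guessing a quick loop from a live Linux session would cover all six at once. This is just a rough sketch and assumes the drives enumerate as /dev/sda through /dev/sdf, which I’d verify with lsblk first:

for d in /dev/sd{a..f}; do
    echo "== $d =="
    # Show the PUIS line from the feature list, then clear the flag
    sudo hdparm -I "$d" | grep -i "power-up in standby"
    sudo hdparm -s0 "$d"
done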

Staggered spinup is sometimes implemented via pin #11 in the SATA power connector. This same pin is also used to drive an activity indicator.

http://pinouts.ru/Power/sata-power_pinout.shtml

Most desktop implementations will permanently ground pin #11 in the PSU cable, in which case you will have no spinup control via this route. An enterprise server, OTOH, may provide control over pin #11 via electronics in the backplane.

If your RAID card or motherboard chipset offers staggered spinup, then I suspect it will be implemented via PUIS, not via SATA pin #11.