Where has my RAID volume gone?

Hi folks!

I’m hoping someone can help me diagnose what’s actually happening on my EX4 NAS. Here’s the scenario:

I had a 6 TB RAID 5 array across 4 disks (3 x 2 TB and 1 x 3 TB). I’d just bought 4 x 6 TB WD Red NAS drives and was migrating to them. You know: swap a drive, hit rebuild, repeat until done. Anyway, the first two went fine, but not so much drive 3. The new drive reported faulty. I tried another drive; also reported faulty. It seemed the controller had maybe barfed, so I rebooted the NAS.

Now all the drives are recognized as valid, but the RAID volume is gone and I get the following error every time I reboot:

Volume Failure
The data volume 1 on the drive is not accessible. Contact Customer Service.
Tuesday, 2014 October 21, 18:15:56
Code:4

At the moment I have the 4 original drives back in it, and I still get the same error. Interestingly, if I reboot via ssh, when it tries to umount the disks I only get reports for 3 of the 4… where’s /mnt/HD_b4?

Stop Lighttpd Daemon…
killall: snmpd: no process killed
                     USER        PID ACCESS COMMAND
/mnt/HD_a4:          root     kernel mount /mnt/HD_a4
                     USER        PID ACCESS COMMAND
/mnt/HD_c4:          root     kernel mount /mnt/HD_c4
                     USER        PID ACCESS COMMAND
/mnt/HD_d4:          root     kernel mount /mnt/HD_d4

Does anyone have any suggestions on how to diagnose it further? I’m a bit unfamiliar with how RAID is implemented on these boxes - can you point me in the direction of some doco? (Or some other M that I can RTF?) :slight_smile:

There were also a couple of shares in the remaining TB on disk 4. They seem to have gone bye bye too.

Any help gratefully received.

Best

Ne

BTW I do have an open support case with WD. But so far I’m just getting dumb boilerplate responses (you know… “press the reset button” and “pull out the drives and put them back in again - that should cause it to rebuild the RAID array”). It’s been a couple of weeks now and I’m getting a bit anxious about it. :-S

Hello,

I am having the same error on the same EX4, also with RAID 5, but in my case the error appeared after a power failure. My NAS was working fine before this. Can anyone help?!

Thanks,

A “me too”? - If you want help, you should really start your own thread; that way, if our problems have different causes and fixes, it won’t be as confusing. :slight_smile: I’m actually going to start trying to figure this out myself. If you’re comfy in ssh, feel free to come along for the ride - I can’t guarantee where we’ll end up. At this point I’ve been without my storage for 2 or 3 weeks, and the inertia is kinda killing me.

I was really, really hoping I could avoid giving myself a crash course in the Linux RAID implementation. Support from WD doesn’t seem to be forthcoming, and the answers I’ve had so far are frustratingly dumb. I’m just gonna semi-blog here. If anyone has ideas, please do chime in!

First, let’s see what /proc/partitions says about the disks. Interesting… in my case I can see that all 4 drives have their 2 TB slice:

/ # more /proc/partitions | grep 2$
  31 2 5120 mtdblock2
   8 2 1949218816 sda2
   8 18 1949218816 sdb2
   8 34 1949218816 sdc2
   8 50 1949218816 sdd2

But it looks like something has gotten confused - where are the sd[a-d]2 partitions?

/ # blkid
/dev/sda4: UUID="1ab7ec4c-ed8d-4657-a268-b4ac9024b77e" TYPE="ext4" 
/dev/sdb4: UUID="5ca1e876-3169-4037-95f5-a56a643e9015" TYPE="ext4" 
/dev/sdc4: UUID="35162970-1ec9-4878-be1e-5d9c38b910fa" TYPE="ext4" 
/dev/sdd4: UUID="3d7053a9-f379-4ba2-8612-46bcb18d3482" TYPE="ext4" 
/dev/md0: TYPE="swap" UUID="5297874b-3b4e-47f7-9d59-c0d7271942a8" 
/dev/loop0: TYPE="squashfs" 
/dev/sda1: UUID="c33a160e-a34b-5197-0a30-d09669b3aa0d" TYPE="mdraid" 
/dev/sdb1: UUID="c33a160e-a34b-5197-0a30-d09669b3aa0d" TYPE="mdraid" 
/dev/sdc1: UUID="c33a160e-a34b-5197-0a30-d09669b3aa0d" TYPE="mdraid" 
/dev/sdd1: UUID="c33a160e-a34b-5197-0a30-d09669b3aa0d" TYPE="mdraid"

Did a bit more googling and found mdadm - I think I might be making some progress. Anyone with experience troubleshooting failed RAIDs, I would love to hear your thoughts :slight_smile:

/ # mdadm --examine /dev/sda2
/dev/sda2:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x0
     Array UUID : da66e6b8:467ae48b:9cc4ea6a:64c3e53e
           Name : 1
  Creation Time : Wed Oct 22 22:51:00 2014
     Raid Level : raid5
   Raid Devices : 4

    Device Size : 3898638192 (1859.02 GiB 1996.10 GB)
     Array Size : 11695913856 (5577.05 GiB 5988.31 GB)
      Used Size : 3898637952 (1859.02 GiB 1996.10 GB)
   Super Offset : 3898638320 sectors
          State : clean
    Device UUID : e3e20cb0:4988aec3:2bdfcce5:e42dc4ab

    Update Time : Wed Oct 22 22:51:10 2014
       Checksum : fe09e414 - correct
         Events : 2

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 0 (0, 1, 2, 3, failed, failed, … [long run of “failed” entries trimmed])
   Array State : uuuu 380 failed
/ # mdadm --examine /dev/sdb2
/dev/sdb2:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x0
     Array UUID : da66e6b8:467ae48b:9cc4ea6a:64c3e53e
           Name : 1
  Creation Time : Wed Oct 22 22:51:00 2014
     Raid Level : raid5
   Raid Devices : 4

    Device Size : 3898638192 (1859.02 GiB 1996.10 GB)
     Array Size : 11695913856 (5577.05 GiB 5988.31 GB)
      Used Size : 3898637952 (1859.02 GiB 1996.10 GB)
   Super Offset : 3898638320 sectors
          State : clean
    Device UUID : 7b789923:fd484263:f96ff1ce:c37ae040

    Update Time : Wed Oct 22 22:51:10 2014
       Checksum : 8f6b180d - correct
         Events : 2

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 1 (0, 1, 2, 3, failed, failed, … [long run of “failed” entries trimmed])
   Array State : uuuu 380 failed
/ # mdadm --examine /dev/sdc2
/dev/sdc2:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x0
     Array UUID : da66e6b8:467ae48b:9cc4ea6a:64c3e53e
           Name : 1
  Creation Time : Wed Oct 22 22:51:00 2014
     Raid Level : raid5
   Raid Devices : 4

    Device Size : 3898638192 (1859.02 GiB 1996.10 GB)
     Array Size : 11695913856 (5577.05 GiB 5988.31 GB)
      Used Size : 3898637952 (1859.02 GiB 1996.10 GB)
   Super Offset : 3898638320 sectors
          State : clean
    Device UUID : 0a00c6ed:0c6af0af:7bafbedd:24d10df8

    Update Time : Wed Oct 22 22:51:10 2014
       Checksum : 6c405691 - correct
         Events : 2

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 2 (0, 1, 2, 3, failed, failed, … [long run of “failed” entries trimmed])
   Array State : uuuu 380 failed
/ # mdadm --examine /dev/sdd2
/dev/sdd2:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x0
     Array UUID : da66e6b8:467ae48b:9cc4ea6a:64c3e53e
           Name : 1
  Creation Time : Wed Oct 22 22:51:00 2014
     Raid Level : raid5
   Raid Devices : 4

    Device Size : 3898638192 (1859.02 GiB 1996.10 GB)
     Array Size : 11695913856 (5577.05 GiB 5988.31 GB)
      Used Size : 3898637952 (1859.02 GiB 1996.10 GB)
   Super Offset : 3898638320 sectors
          State : clean
    Device UUID : 7a86f2e4:449a6c73:f462161c:703e0ff7

    Update Time : Wed Oct 22 22:51:10 2014
       Checksum : 64422dfe - correct
         Events : 2

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 3 (0, 1, 2, 3, failed, failed, … [long run of “failed” entries trimmed])
   Array State : uuuu 380 failed
/ #
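Encouragingly, all four members report the same Array UUID, a clean state, and matching Events counters, so the superblocks themselves look intact. A quick loop makes that easy to eyeball (just a sketch - the device names are simply what my EX4 uses):

```shell
# Print the Array UUID recorded on each member partition. If all four
# lines show the same UUID, the md superblocks agree and reassembly
# is at least plausible.
for part in /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2; do
    uuid=$(mdadm --examine "$part" | awk '/Array UUID/ {print $4}')
    echo "$part $uuid"
done
```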

Then I found this page:  https://raid.wiki.kernel.org/index.php/RAID_Recovery

and the warnings about using older versions of mdadm  - so I thought I might check…

/ # mdadm --version
mdadm - v2.5.6 - 9 November 2006
/ #

*sigh* Figures. I wonder if I can find a more up-to-date version?

Google says there’s an RPM for this architecture as part of the openSUSE distro. Many of the mirrors I tried didn’t have it, so if you need it, you might look here:

ftp://133.24.255.153/mirror9/opensuse/opensuse/ports/armv5tel/factory/repo/oss/suse/armv5tel/mdadm-3.2.6-2.1.armv5tel.rpm

But I’m getting ahead of myself. Let’s scp it up into /root and see if it wants to play ball… oops, make that /home/root… *mutter*

Bugger.

~ # ./mdadm --version
./mdadm: /lib/libc.so.6: version `GLIBC_2.15' not found (required by ./mdadm)
~ #

Found the required version of glibc. Before I go changing the link, is doing so going to break things?
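In the meantime, rather than re-pointing the symlink, I can at least check exactly what the binary demands, and consider running it against the newer glibc without replacing the system one (a sketch - the glibc directory path is hypothetical):

```shell
# List the GLIBC symbol versions the downloaded mdadm binary requires
# (works on any file, no root needed):
grep -ao 'GLIBC_[0-9.]*' ./mdadm | sort -u

# Re-pointing /lib/libc.so.6 is risky: every other binary in the firmware
# is linked against the old glibc, so a bad link can take out userland.
# A safer route is to unpack the newer glibc into its own directory and
# invoke its dynamic loader explicitly (paths below are hypothetical):
#   /home/root/glibc/ld-linux.so.3 --library-path /home/root/glibc ./mdadm --version
```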

All I’ve done so far is poke around a bit, and it’s probably time I got some sleep. I might wait to see if anyone has advice to offer before going further.

If you know this stuff well, I’d love some assistance :slight_smile:

Best

Ne

So before starting to work out a way forward towards recovery myself, I asked WD Support if they could just tell me how the RAID is implemented. A day or two after I’d figured it out for myself, I got this as a response:

[Deleted]

If you are thinking about buying one of these, or indeed any WD products, just think about the reply I got here. Their device fails to keep track of the RAID volume, the tools to fix it are old, out of date and buggy, and you’ll have to go to a professional data recovery outfit if you’re to have any hope of getting your data back - unless you happen to be particularly knowledgeable about driving Linux.

But further, and several days earlier, I had asked whether I might have better luck putting the drives into another computer and booting a live CD to see if it could see the partition. I eventually got this reply:

[Deleted]

Given that the implementation of RAID in the WD is “proprietary and confidential to WD” (actually it’s just Linux software RAID - OMG, the secret is out!! PFFT!), I figured what the **bleep**… stick the disks into my old server box, boot a modern live CD and see what happens.

Guess what?? It detected the RAID array, said “hey, it’s a bit broken, let me fix that”, and was then able to mount it. Frikkin YAY.

I have previously suggested that contacting WD Support might be a good idea, and perhaps you’ll have better luck than me. I was clearly given poor advice, discouraged from even trying to recover, and advised to go spend a potentially unknown number of hundreds of dollars at a professional data recovery house.

If they’d just update the tools on the **bleep** box, this whole process could probably happen automagically. And if not, I’d at least have been able to run mdadm successfully and recover it in place.

And why would they misinform and intentionally withhold information? Information that I can find out for myself by simple, if time-consuming, analysis - but I wouldn’t have to if they’d just share the ways in which they’re using FREE SOFTWARE. Proprietary (no) and confidential (seriously? Grow the F… UP!!)

I’m actually significantly annoyed with WD over how they have handled this case. This company has just demonstrated the worst business practices of the old guard of IT companies. They’re happy to use and abuse the generosity of people like you reading this forum, who give your time and expertise to help end users less knowledgeable than yourselves, and of the software engineers writing software and giving it freely for the benefit of the community at large. They’re more than happy to use those things to make their own offerings better, but then they act secretively and deceptively, obstructing people who (in my case) just wanted to understand how it’s implemented so I could attempt to recover from a problem that, in essence, was caused by their **bleep**ty old implementation. What is to be gained by their actions? All I can see is a well-deserved helping of contempt.

Shame on you WD. Shame.

And censored for quoting the messages. Tempted to seek out a bigger stage.

It happened again. Same EX4 NAS, only I’ve since bought 4 x new 6 TB WD Red drives and set up RAID 5 across all 4 drives.

This time I couldn’t easily access the data; the RAID didn’t automagically mount when the drives were put into a standard PC booted to a Linux distro. So this time I asked for help from the experts: the folks over on the linux-raid mailing list.

They basically walked me through getting the array back online using my trusty old PC hardware and a bootable SystemRescueCd. The process was:

Check the SMART counters - making sure this wasn’t caused by two simultaneous legitimate drive failures.

Run once for each drive, looking for “overall-health … PASSED”:

smartctl -a /dev/sda

smartctl -a /dev/sdb

smartctl -a /dev/sdc

smartctl -a /dev/sdd

Stop the array:

mdadm -S /dev/md1

Examine the partitions participating in the array to see what’s up:

mdadm --examine /dev/sda2

mdadm --examine /dev/sdb2

mdadm --examine /dev/sdc2

mdadm --examine /dev/sdd2

In my case drives 1 and 2 were dropped simultaneously, around 9 hours before 3 and 4 - which shows up in the Update Time and Events fields of mdadm --examine.

Force it to reassemble:

mdadm --assemble --force --verbose /dev/md1 /dev/sd[abcd]2

Check the file system without trying to fix anything (if this doesn’t report clean, you might be hesitant about continuing):

fsck -n /dev/md1

Mount the array read-only in Linux and back up anything newer than your last backup:

mount -o ro,noload /dev/md1 /mnt/[something]

Then cp the data, or whatever you do to back up yours.

The guys on linux-raid suggested the most likely cause was hardware-related, because two drives dropped simultaneously. They suggested I look back through the dmesg logs from around that time to see if the cause is apparent there.

Typically they’d be in /var/log, but not on the NAS. Anyone know where (or if) they’re kept?
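Lacking a known location on the box, about the only thing I can think to do is dump the kernel ring buffer while it’s still fresh and then go hunting for files (a sketch - nothing EX4-specific assumed):

```shell
# The kernel ring buffer only survives until the next reboot:
dmesg | tail -n 50

# If the firmware runs a syslog daemon at all, it may write somewhere
# non-standard (possibly tmpfs, i.e. also lost on reboot). Rather than
# guessing paths, look for anything log-like modified in the last day:
find / -name '*.log' -mmin -1440 2>/dev/null
```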

Both times the NAS has lost the RAID volume, it’s been when coming out of standby mode. I wonder if there’s a problem with my particular hardware?

I’ve seen this occur on an EX2 after a power failure. I’ve since put the replacement on a UPS - WD said the failure was hardware-related, but didn’t go into detail.

I didn’t get to put it under the microscope before it went out, but I’m assuming this controller board is not pleased with sudden voltage surges/drops.

You could be onto something there… They’ve done a lot of work on the grid here in the last year or two and things have been pretty good, but we do still get a fluctuation here and there. So I’ve been kind of toying with the idea of getting a UPS anyway.

I now know what to ask the folks to get me for xmas haha. Thanks!