Hi all,
Today I purchased a WD My Cloud 4TB.
After installing it, I updated the firmware to the latest version (v04.00.00-607).
As I like to know what I have to deal with, I enabled SSH to understand how it all works.
This is when I discovered what looks to me like a faulty software RAID config:
# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Thu Jul 12 20:25:55 2012
     Raid Level : raid1
     Array Size : 1999808 (1953.27 MiB 2047.80 MB)
  Used Dev Size : 1999808 (1953.27 MiB 2047.80 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Jul 29 21:23:21 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 49266c09:bc456f15:83decb8c:900f3e2c
         Events : 0.742

    Number   Major   Minor   RaidDevice   State
       0       0       0        0         removed
       1       8       2        1         active sync   /dev/sda2
# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Tue Jul 29 20:48:49 2014
     Raid Level : raid1
     Array Size : 1999808 (1953.27 MiB 2047.80 MB)
  Used Dev Size : 1999808 (1953.27 MiB 2047.80 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jul 29 21:31:57 2014
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : e073f245:c99fa050:997ceb89:519129b4 (local to host WDMyCloud)
         Events : 0.1071

    Number   Major   Minor   RaidDevice   State
       0       8       1        0         active sync   /dev/sda1
       1       0       0        1         removed
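For anyone who wants to spot this quickly without reading the full `--detail` output: /proc/mdstat shows a degraded RAID1 as `[2/1]` with a `_` in the member map. A small sketch (using a here-doc with output matching what my box shows, so it runs anywhere; on the device you would pipe `cat /proc/mdstat` into the grep instead):

```shell
# Count degraded RAID1 arrays by looking for a missing member ("_")
# in the [U_]/[_U] status map that the md driver prints in /proc/mdstat.
degraded=$(grep -c '\[U_\]\|\[_U\]' <<'EOF'
md1 : active raid1 sda1[0]
      1999808 blocks [2/1] [U_]
md0 : active raid1 sda2[1]
      1999808 blocks [2/1] [_U]
EOF
)
echo "degraded arrays: $degraded"
```

On a healthy box both lines would read `[2/2] [UU]` and the count would be 0; here both arrays are running on one member each.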
So md0 is missing sda1, and md1 is missing sda2.
I would say there should be only one array, md0, with both sda1 and sda2 as RAID-1 members.
Judging by the size (~2 GB) it looks like the rootfs, which makes sense to keep on a RAID-1 volume.
I can get md0 back into working order, but the question is: how can this happen, and is this perhaps a bug in the latest firmware?
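For the record, the repair I have in mind is roughly the following. This is a dry-run sketch, not something I have executed yet: it only echoes the mdadm commands, since the real thing needs root, is destructive to /dev/sda1, and assumes sda1 holds nothing but the stale mirror copy created by the stray md1.

```shell
# Dry-run repair plan: release sda1 from the stray md1 array, wipe its
# old superblock, and re-add it as md0's second mirror.
repair_md0() {
    run() { echo "would run: $*"; }        # swap the echo for "$@" to actually execute
    run mdadm --stop /dev/md1              # stop the stray array holding sda1
    run mdadm --zero-superblock /dev/sda1  # erase the md1 metadata on the partition
    run mdadm /dev/md0 --add /dev/sda1     # re-add sda1; md0 then resyncs onto it
}
repair_md0
```

After the real `--add`, /proc/mdstat should show a resync in progress and then `[2/2] [UU]` for md0.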
Thanks,
Christian