I’m not well versed in disk RAIDs. Does anybody understand why the My Cloud tries to remove sda1 and sda2 from the RAID arrays md0 or md1? I see the message in the dmesg output every day when the system attempts the removal. It is caused by the script /etc/cron.d/20-checkRAID, which runs every morning at 3:05 AM.
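I haven’t dug into the contents of 20-checkRAID itself, so this is only my guess at the kind of health check such a cron job would do. A degraded md array shows an underscore inside the status brackets in /proc/mdstat (e.g. [2/1] [U_] means one of two members is missing), so a minimal check could look like this (check_mdstat is a name I made up for illustration):

```shell
#!/bin/sh
# Hypothetical sketch of a daily RAID health check -- NOT the actual
# 20-checkRAID script. It inspects text in /proc/mdstat format and
# reports whether any array is missing a member.
check_mdstat() {
    # $1: text in the format of /proc/mdstat
    # A degraded array shows "_" inside its member-status brackets,
    # e.g. [U_] or [_U]; a healthy two-disk mirror shows [UU].
    if printf '%s\n' "$1" | grep -q '\[U*_U*\]'; then
        echo "degraded"
    else
        echo "healthy"
    fi
}
```

On a live system this would be called as `check_mdstat "$(cat /proc/mdstat)"`.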
The model is WDC WD60EFRX-68M, firmware 04.04.00-303. I’ve seen this message on several releases, and it happens on both of my My Clouds. I can understand checking the RAID every day, but why is it trying to remove an active unit?
RAC
dmesg -T | tail
[Wed Sep 21 03:04:51 2016] md: cannot remove active disk sda2 from md0 …
[Wed Sep 21 03:04:52 2016] EXT4-fs (sda4): re-mounted. Opts: user_xattr,barrier=0,data=writeback,noinit_itable,init_itable=10
[Wed Sep 21 19:36:24 2016] EXT4-fs (sda4): re-mounted. Opts: user_xattr,barrier=0,data=writeback,noinit_itable
[Wed Sep 21 19:36:24 2016] md: cannot remove active disk sda1 from md0 …
[Wed Sep 21 19:36:24 2016] md: cannot remove active disk sda2 from md0 …
[Wed Sep 21 19:36:25 2016] EXT4-fs (sda4): re-mounted. Opts: user_xattr,barrier=0,data=writeback,noinit_itable,init_itable=10
[Thu Sep 22 03:05:02 2016] EXT4-fs (sda4): re-mounted. Opts: user_xattr,barrier=0,data=writeback,noinit_itable
[Thu Sep 22 03:05:02 2016] md: cannot remove active disk sda1 from md0 …
[Thu Sep 22 03:05:02 2016] md: cannot remove active disk sda2 from md0 …
[Thu Sep 22 03:05:03 2016] EXT4-fs (sda4): re-mounted. Opts: user_xattr,barrier=0,data=writeback,noinit_itable,init_itable=10
The messages are generated by the restoreRAID function in /usr/local/sbin/data-volume-config_helper.sh. For some odd reason it removes and then re-adds the sda1 and sda2 partitions.
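For context (my own sketch, not the vendor script, whose exact commands I haven’t inspected): mdadm will not --remove a member that is still active, which is exactly what produces the "md: cannot remove active disk" line in dmesg; a member has to be failed (or already be a spare) before removal succeeds. A dry-run illustration, where DRY_RUN just echoes the commands instead of executing them:

```shell
#!/bin/sh
# Sketch only: shows mdadm's remove semantics, not the actual
# restoreRAID function. With DRY_RUN=1 (the default) each command
# is echoed rather than run, so no root or md device is needed.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

demo() {
    # A bare --remove on an active member is rejected by the kernel,
    # which logs "md: cannot remove active disk ... from md0":
    run mdadm /dev/md0 --remove /dev/sda1

    # Removal only succeeds after the member is marked failed;
    # --add then re-adds it and triggers a resync:
    run mdadm /dev/md0 --fail /dev/sda1
    run mdadm /dev/md0 --remove /dev/sda1
    run mdadm /dev/md0 --add /dev/sda1
}
demo
```

So if the script issues the remove without failing the member first, the remove is refused, the disk stays in the array, and you see the harmless-looking error every morning.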