EX4100 Drive Failed in Empty Bay

I keep getting notified of drive failure in bays 3 and 4. I currently only have drives in bays 1 and 2. The unit does not show a light on bays 3 or 4, firmware 2.11.133. Any ideas what would be causing the repeated errors? I have rebooted several times and the errors reoccur.


Have you tried moving the drives to the other bays to see if it does the same?

Also have you tried a physical reset?

Hi, I’m sorry for reviving this topic again, but I have the exact same issue. Since Jim_Brown hasn’t replied to your solution, I’m not sure if it helped.

In short:
I configured my WD My Cloud EX4100 with two 8 TB HDDs in RAID 1; bays 3 and 4 are not occupied. I have not tried resetting the NAS either.

  • Will moving the disks from bays 1 and 2 into bays 3 and 4, or resetting the NAS, affect the data?

I’ve had my EX4100 about a year now and I just received a similar email for the first time. I have no drives in bay 3 or 4. The NAS was in standby and was just spinning up at the time so…

I’m getting the same issue:
WD My Cloud EX4100
Firmware: 2.21.126
Drives in bays 1 and 2, and getting failures for 3 and 4 when there is no drive in there.

I am facing the same issue as well on my EX4100.

“Drive failed in Bay 3. Replace the failed drive. Contact WD support Code 201”
“Drive failed in Bay 4. Replace the failed drive. Contact WD support Code 201”

Currently no drives are installed in bays 3 and 4.

I have EXACTLY the same problem… I bought my EX4100 only two weeks ago and it throws error code 201 for bay 4. :/

So there is no solution? Can anybody help me? I have exactly the same problem… I use just one bay, but I always get the 201 error for bays 2, 3, and 4.

Hi! The same situation with my EX4100, with two 4 TB drives installed in bays 1 and 2. Support requested several configuration files but still cannot do anything. Has anybody resolved the issue?


Hello! Exactly the same situation with my EX4100, with two 4 TB drives installed in bays 1 and 2.

Following events are generated on your WDMyCloudEX4100.

Event title: Drive Failed

Event description: Drive failed in Bay 3. Replace the failed drive. Contact WD Support.

Event code: 0201

Event time: 08-15-2018 01:12:06 AM

Firmware version: 2.30.193

I started getting this error after the latest firmware update.
No physical drives in bays 3 and 4. Bays 1 & 2 each have a 4 TB drive, set to spanning.


Same thing happened to me today. The WDEX4100 was automatically updated to 2.30.196 and after the reboot process the drive in bay 2 failed… Error code 201

I assume you’re using a RAID configuration. What works for me is to hot-pull the drive, wait two minutes, and put it back in; it will be overwritten as the array rebuilds.
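For what it’s worth, the EX4100 firmware is Linux-based and uses md software RAID under the hood, so if you have SSH enabled you can watch the rebuild after reinserting the drive by reading `/proc/mdstat`. Below is a minimal sketch of parsing that output; the field names and the sample text are my own, and your array names may differ:

```python
import re

def parse_mdstat(text):
    """Parse /proc/mdstat text into {array_name: {"degraded": bool, "recovery_pct": float|None}}."""
    arrays = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
            arrays[current] = {"degraded": False, "recovery_pct": None}
            continue
        if current is None:
            continue
        # "[2/1] [U_]" means only 1 of 2 members is up -> degraded array
        m = re.search(r"\[(\d+)/(\d+)\]\s*\[([U_]+)\]", line)
        if m and "_" in m.group(3):
            arrays[current]["degraded"] = True
        # "recovery = 12.6% (...)" shows rebuild progress after reinsertion
        m = re.search(r"recovery\s*=\s*([\d.]+)%", line)
        if m:
            arrays[current]["recovery_pct"] = float(m.group(1))
    return arrays

# Hypothetical sample of what a rebuilding RAID 1 looks like
sample = """\
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
      3902196544 blocks [2/1] [U_]
      [==>..................]  recovery = 12.6% (493412416/3902196544) finish=412.5min
unused devices: <none>
"""

print(parse_mdstat(sample))
```

Once `recovery_pct` reaches 100 and the status shows `[UU]`, the rebuild is done; until then the array is running without redundancy.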

Sooooooo there’s no fix for this I guess because I just started having this issue

My drives are spanning in slots 1 and 2; I get "failed" reports for slot 3 and sometimes slot 4, and those slots are vacant.

I can confirm the issue with my own story and information. I have had this issue since shortly after I bought my unit, with a support request dating back to 2017-05. Now that the NAS is set up again after being offline for nearly 3 years, and after a firmware update to the latest (2.40.155), believe it or not, it still continues to pop up. The error code seems to appear 2-3 times per day for both bays (I’ll have to analyze it to be exact next time).

Recap of the issue and setup:
My EX4100 has 2 HDDs of 3 TB each, installed in bays 1 and 2. The drives are in "Good" condition and run at about 33 °C. Bays 1 and 2 are set together in a RAID 1 configuration. Bays 3 and 4 have always been empty. The unit keeps regenerating Critical Code 0201, one error code for each empty bay.

Some observations:
The two error-code notifications don’t seem to be in perfect sync. If my memory serves me right, one error may appear roughly 20-30 minutes before the other. I guess the firmware is set to test the drives a number of times a day (a check-up).

Old support notes + comment:
Quote from a support mail: “By default a 4-Bay NAS is expecting to be equipped with 4 drives. As it cannot find any drives in bay 3+4 but logically assumes that there should be 2 drives, so it concludes that those drives have a hardware defect not allowing a proper connection and usage, ergo it gives you the mentioned error message.” End.
Comment: I find this way of thinking very odd. Following that thought pattern, the unit is programmed to be "so called smart" and assumes there should always be 4 drives installed… WHY? It should be the other way around: you have 4 bays at your disposal, and if you use 2, then 2 should be active in the process, without the unit pestering the administrator for NOT having 4. I find this to be "NOT SMART", as in bad programming, IF this is true.

I would very much like to see why this is happening, and a fix or correction of the issue. For instance, I can’t rely on the unit to inform me by email or SMS if an actual drive were to collapse. Now I have to check manually from time to time. This is neither good nor safe.

Here are some new observations regarding its behavior.

The NAS had been waking up from its sleep mode almost once every hour. I started to wonder why, since I wasn’t accessing it by any means. So, long story short, I started switching things off. I found it: the "MAC Backup" function was causing the frequent awakenings. Quite unnecessary when you haven’t linked any devices to the NAS to back up.

After 4 days without awakenings, I logged in to the NAS’ system and noticed something: NO critical errors during those 4 days of sleep! However, upon logging in, the NAS woke up and immediately notified me of error code 201 for bays 3 and 4, BOTH at the same time, exactly during the wake-up process (drives spinning up). Furthermore, I noticed that the front lights of bays 3 and 4 turned red for a while, which I hadn’t seen before. Shortly after, all of the bay lights returned to their usual state: bays 1 and 2 blue, and bays 3 and 4 off (black).

Some thoughts:
In this case, the error code seems to be linked, directly or indirectly, with the powering up of the drives. When powering up, the system seems to get feedback through the electrical (or communication) system, and if it gets "no unit responded during power-up", it raises this error code. The same thing could be happening during the frequent awakenings, although in that case the spin-up itself is probably not the main issue. It seems the system may not differentiate properly between a failed drive and an empty bay; both are handled by the same error processing based on the received feedback. This means that if I use 3 or fewer drives (out of 4), I will get 1 to 3 error codes: one for each missing or empty drive bay. As I mentioned earlier, this is not "smart". My question becomes: is this a hardware or a software issue?