Performance problems? Read this first


There are numerous reports about MyBookLiveDuo performance problems: slow file reads and writes, extremely slow Time Machine backups…

I’ve found a way to improve the performance of this nice little NAS. This hack may have unexpected side effects; use it at your own risk and make sure you understand what you’re doing on the command line.

In the MyBookLiveDuo there’s a script that runs a “du” command every minute. This du command puts a heavy load on the system, and reading or writing files on the NAS while du is running kills performance.

Here’s a procedure to disable this script and recover full speed for file transfers:

Test some file transfer / backup and let me know if this action improves things for you.

This script’s main role is to put the system into standby mode when it hasn’t been used for a while. By killing it, you also disable this feature.

This procedure has to be executed every time the system is restarted. I recommend waiting 10-15 minutes of uptime before killing the script (it does a specific job the first time the NAS boots, and I believe it’s better to let it finish).
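For those unsure how to spot the culprit, here is a minimal sketch of the idea using simulated `ps` output — the real script’s name is not given in this thread, so `some_script.sh` below is a stand-in. On the NAS itself you would work from live `ps -eo pid,ppid,comm` output instead of the sample:

```shell
# Sketch only: "some_script.sh" stands in for the real MBLD script,
# whose name is not shown in this thread. On the NAS itself, replace
# the sample below with live output from: ps -eo pid,ppid,comm
sample='  101   100 du
  100     1 some_script.sh'
# The PPID column of the du line identifies the spawning script.
parent=$(echo "$sample" | awk '$3 == "du" {print $2}')
echo "du is spawned by PID $parent"   # then: kill $parent
```

Killing the parent (rather than du itself) stops the du runs until the next reboot; killing only the du process lets the script relaunch it a minute later.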


It would be good to mention that this will also void the warranty, but thanks for the information anyway.


Does enabling ssh void the warranty? I don’t think so.

As soon as you reboot the unit, the script starts again and everything goes back to its normal state.

This trick helps (du every minute definitely slows down data transfers) but unfortunately doesn’t bring the dramatic performance improvement I was expecting in the first place.

If you don’t feel confident messing around with ssh on your NAS, don’t do it! If you have important data on it, do a backup first!

I’d like to have some feedback (good or bad) about people who tried this. Anyone ?

I’m currently doing a time machine backup from a single Mac to the MBLD.

top shows the load average is above 3 while the CPU is at most about 20% used (by afpd, the Apple Filing Protocol daemon responsible for Mac network file sharing):

top - 00:03:25 up 4 days, 5:10, 1 user, load average: 3.27, 3.38, 3.25
Tasks: 96 total, 1 running, 95 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.6%us, 10.3%sy, 0.0%ni, 0.0%id, 86.4%wa, 0.0%hi, 0.7%si, 0.0%st
Mem: 253632k total, 243712k used, 9920k free, 54080k buffers
Swap: 500544k total, 203520k used, 297024k free, 110464k cached

  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
10448 nobody 5 -15 14336 5696 1408 D 10.3 2.2 144:49.97 afpd
8708 root 20 0 28672 12m 11m S 1.7 5.1 18:59.79 cnid_dbd
1933 root 20 0 5056 3456 2368 R 0.7 1.4 0:00.13 top
1 root 20 0 4352 1024 704 S 0.0 0.4 0:03.07 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:01.15 ksoftirqd/0
4 root RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
5 root 20 0 0 0 0 S 0.0 0.0 0:00.81 events/0
6 root 20 0 0 0 0 S 0.0 0.0 0:00.01 khelper
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 async/mgr
89 root 20 0 0 0 0 S 0.0 0.0 0:00.01 sync_supers
91 root 20 0 0 0 0 S 0.0 0.0 0:00.02 bdi-default
93 root 20 0 0 0 0 S 0.0 0.0 9:50.36 kblockd/0
98 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ata/0
99 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ata_aux
101 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kseriod
121 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rpciod/0
152 root 20 0 0 0 0 S 0.0 0.0 0:00.01 khungtaskd
153 root 20 0 0 0 0 D 0.0 0.0 12:05.53 kswapd0
154 root 20 0 0 0 0 S 0.0 0.0 0:00.00 aio/0
155 root 20 0 0 0 0 S 0.0 0.0 0:00.00 nfsiod

So this means the CPU isn’t the bottleneck.

The load average is the average number of processes in the ready state waiting for a resource (usually the CPU, but in this case it isn’t). So some processes are waiting on other resources. They could be waiting on network IO; it could also be software interrupts.

Any thought ?

Unfortunately, top’s man page says %wa is time spent waiting on disk IO. I’m afraid there’s not much more that can be done to increase the performance of this box…
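For anyone who wants to check this on their own unit, the same iowait figure top reports comes from the first line of /proc/stat, so a quick sketch like this (field order per the proc(5) man page) shows the cumulative share of time the CPU has spent waiting on disk since boot:

```shell
# Read the cumulative CPU counters from the first line of /proc/stat.
# Fields after "cpu": user nice system idle iowait irq softirq ...
# (all in clock ticks since boot).
read -r _cpu user nice system idle iowait _rest < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "iowait share since boot: $((100 * iowait / total))%"
```

A consistently high percentage here, with low user/system time, is the signature of an IO-bound box like the one in the top output above.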

I looked at IO optimization on Linux, and it seems the MBLD doesn’t use the best IO elevator (scheduler).

MyBookLiveDuo:~# cd /sys/block/sda/queue
MyBookLiveDuo:/sys/block/sda/queue# cat scheduler
noop [anticipatory] deadline cfq

The anticipatory scheduler might not be the best one according to this document:

So I changed it to CFQ like this:

MyBookLiveDuo:/sys/block/sda/queue# echo cfq > scheduler

MyBookLiveDuo:/sys/block/sda/queue# cat scheduler 
noop anticipatory deadline [cfq]

And indeed, it seems to me that performance increased and the time to make a 250 GB Time Machine backup has been reduced. Could someone try this and let everybody know the result?
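If you want to put numbers on it before and after the change, a crude sequential-write test with dd is enough — the target path below is just an example; point it at a directory on the NAS data volume:

```shell
# Crude sequential-write benchmark: write 100 MB, force it to disk with
# conv=fsync, and let dd report the throughput on stderr.
# The target path is an example only; use a directory on the data volume.
dd if=/dev/zero of=/tmp/speedtest.bin bs=1M count=100 conv=fsync
size=$(wc -c < /tmp/speedtest.bin)
rm -f /tmp/speedtest.bin
```

Run it once with each scheduler selected; without `conv=fsync` dd would only measure how fast the page cache absorbs the data, not the disk itself.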


Changing the scheduler did the trick for me; I’m seeing about 35 MB/s consistently.

I enabled ssh, took note of the password, sshed in…

ssh root@mybooklive.local

and simply did what the post said … echo cfq > scheduler

I did a few other things too, but in the end it was the scheduler change that really helped.

Originally I was seeing about 156 KB/s… I made sure all cables were Cat5e. One cable was Cat5 and I don’t know why, because it ran to another machine on the network, but maybe it was causing an issue with my router. I also set the network configuration to manual in OS X (with the same settings as were there under automatic).

Rebooted and got an improved but fluctuating 28 KB/s to 35 MB/s. It hit 35 MB/s, which was great, but then dropped low for 3-4 readings before popping back up.

Now, after changing the scheduler as above, I get a consistent 25-40 MB/s. This is about four times as fast as it was.

I also tried creating the /etc/sysctl.conf file mentioned earlier in the thread, but that had no noticeable effect, so I took it out and am not using it now.


Good to have some feedback.

I forgot to mention that the scheduler should also be changed for the sdb drive:

MyBookLiveDuo:~# cd /sys/block/sdb/queue

MyBookLiveDuo:/sys/block/sdb/queue# echo cfq > scheduler

MyBookLiveDuo:/sys/block/sdb/queue# cat scheduler 
noop anticipatory deadline [cfq]
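One caveat: like any sysfs setting, this is lost on reboot. Assuming the MBLD executes an init script such as /etc/rc.local at startup (worth verifying on your unit before relying on it), a fragment along these lines would re-apply CFQ to both drives automatically:

```shell
#!/bin/sh
# Hypothetical /etc/rc.local fragment (assumes the MBLD runs
# /etc/rc.local at boot): re-apply the CFQ elevator to both drives,
# since sysfs settings do not survive a reboot.
for q in /sys/block/sda/queue/scheduler /sys/block/sdb/queue/scheduler; do
    # Only write if the file exists and is writable (i.e. we are root
    # on the NAS itself); skip silently otherwise.
    [ -w "$q" ] && echo cfq > "$q"
done
exit 0
```

The `[ -w ]` guard means the fragment is harmless if a drive is absent or the script runs without root privileges.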


Now, I’m not 100% satisfied with this improvement. I’m sure there’s a problem in the MBLD that kills its performance: most of the time the CPU is simply waiting for IO (this can be seen in top’s %wa stat). On top of that, the load average always stays above 1, even when there’s no activity on the box. Not good.

It seems to me the NAS performs better when it has just been rebooted, and after a few days it starts to slow down.

I hope WD will soon release new firmware fixing these problems; otherwise I’ll replace my MBLD with another product.