Any interest in kernel 4.0 on the My Book Live?

The password should be “password”.
My image is just vanilla Debian, https://wiki.debian.org/SambaServerSimple

yes, thanks.
Meanwhile I used the image from Ewald, as Samba is already installed, but it does not run. I cannot ssh to or even ping the MBL.

Any hint ?

The image uses DHCP to obtain an IP address, so it assumes you have a router that provides it with an address. Maybe that is the problem, at least if it boots properly.
Alternatively, if you want a fixed address, use 7zip (or an equivalent tool) to edit /etc/network/interfaces inside the compressed tar image (or edit the file after untarring). Just watch out for CR/LF line-ending issues if you edit the file from Windows (save as UTF-8 with Unix line endings).
For example:

auto eth0
iface eth0 inet static
address 192.168.1.6
netmask 255.255.255.0
gateway 192.168.1.1
mtu 4080

Another, more likely cause could be a mismatch between the U-Boot boot file (/boot/boot.scr) and the image. The trouble is that the U-Boot firmware is quite old and does not support booting from ext4 file systems. Hence, by default I boot off sda1 (the /boot directory) and have / (the root filesystem) on sda2, formatted as ext4.
To achieve this, simply copy the /boot directory from the Debian image to /dev/sda1 (make sure sda1 is ext2 or ext3), or, if you have copied everything to sda1, format sda2 as ext4 and put the whole Debian image there. Alternatively, you can fix /boot/boot.scr to reflect your situation or simply boot over TFTP. In the latter case you only need a single disk partition, either sda1 or sda2.
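As a rough sketch of that split layout (purely illustrative; it assumes the Debian image is a tarball called debian-rootfs.tar.gz and that the two partitions already exist):

# format the partitions: U-Boot can read ext2/ext3, the root can be ext4
mkfs.ext3 /dev/sda1
mkfs.ext4 /dev/sda2
# unpack the Debian image onto the ext4 root
mount /dev/sda2 /mnt
tar -xzpf debian-rootfs.tar.gz -C /mnt
# copy the /boot directory onto sda1 so U-Boot can load the kernel and boot.scr
# (whether boot.scr expects the files under /boot or in the partition root depends on the boot.scr variant)
mount /dev/sda1 /media
cp -a /mnt/boot /media/
umount /media /mnt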

You can read more about creating U-Boot command files here
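For reference, a plain-text U-Boot command file (boot.cmd) is compiled into a boot.scr with mkimage from the u-boot-tools package; the file names here are just placeholders:

mkimage -A powerpc -O linux -T script -C none -n "MBL boot script" -d boot.cmd boot.scr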

EDIT: I updated the readme to explain this a little better: Debian readme

It now contains the proper boot files for most combinations. Just pick the proper one for your situation.
Apologies for the confusion and any time lost…


Hi Ewald,

Firstly thank you very much for your work, you saved the life of my old MBLs.

I’ve tried the separation between /boot on ext3 and root on ext4, as you recommended in your last posts, but I didn’t manage to make it work.

Everything works fine when I use only one ext3 partition (I’ve merged sda1 and sda2), but when I tried /boot on sda1 (ext3) and the root filesystem on sda2 (both ext3 and ext4), neither seemed to work. I’ve even tried setting the boot flag on the root partition, but nothing changed.

I’ve used the boot.scr files you included on Git; however, I think some of them are not in the right place: the sda1 and sda2 files for ext3 seem to be swapped, and the links in the readme for ext4 point to the ext3 options. Could you please check?

Thank you again. This problem with the separate /boot and the ext4 root is not a big issue for me, actually; in fact I’m more than happy with the disks working on Debian with NFS, as I use a Raspberry Pi for ownCloud, which was my actual goal :slight_smile: Just to let you know.

best regards
Alfredo

@alfredo.anton,
You are right, the files in the ext4 section link to the ext3 files, so I have fixed both links. Thanks for pointing that out and apologies for the time lost. I could not find a fault in the ext3 section though… That said, it is late here and so I would not dare to bet on it…
The only real difference in the ext4 versions is rootfstype=ext4 and of course root=/dev/…, which points to the ext4-based root volume.
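For illustration only (this is a sketch, not the literal content of the published files), the kernel arguments for the split layout described earlier would look something like:

setenv bootargs root=/dev/sda2 rootfstype=ext4 rw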

Thanks for keeping this updated. I’m still running Gentoo on all three of my My Book Lives (1TB, 2TB, and 3TB). So far, they’re all running great.

My partitions are typically:
sda1 32MB /boot ext2
sda2 1GB swap
sda3 rest root ext4

I am thinking of trying OpenWrt on one of them, or looking for another MBL to test it on.

@Simba7,
OpenWrt is an excellent choice. As mentioned before, I use their work as the foundation for my kernel. The OpenWrt team has been able to get things fixed upstream, because they are an authority in open source, whereas I had to rely on patches only. For years I have complained about Linux performance decreasing for embedded systems as we moved from 2.x to 3.x and definitely to 4.x. Anyhow, enough complaining!

On the “not so good” side, you will notice a 10 to 40% performance decrease, partly because the SATA and network drivers are not optimized for the MBL hardware, but also because the OpenWrt kernel/OS is tuned and (properly) stripped down for routers with little storage and memory.
On the positive side, you get the nice LuCI GUI, which even extends into Samba.

In fact, I have a version of my kernel tuned for OpenWrt and it runs extremely well after some small changes. Maybe I will post a section on how to use my kernel with OpenWrt, including all the extra good stuff… I compiled my own OpenWrt image from sources, so maybe I’ll make that available as well. Just need some free time :sweat_smile:

@Ewald

Thank you for your work! With your kernel and the instructions on GitHub I was able to install Ubuntu minimal 16.04 LTS for powerpc on my MBL. I’d rather install this distro since it is still officially supported until at least April 2021. I was able to migrate my users (Red Hat offers a nice tutorial) and Samba share settings from the original firmware. Some of the scripts to manage shares (createShare.sh, addUser.sh, etc.) still work just fine with some slight modifications. These scripts from the original firmware make life much easier!

Now I am thinking about enabling zswap or zram with a custom kernel to improve performance a little, if possible.
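In case it helps anyone, a minimal zram swap sketch (it assumes the kernel was built with zram support and that lz4 is available as a compressor; the 64M size is just an illustration for the MBL’s limited RAM):

# load the zram module with a single device
modprobe zram num_devices=1
# pick a compressor and size the device (lz4 if available, otherwise lzo)
echo lz4 > /sys/block/zram0/comp_algorithm
echo 64M > /sys/block/zram0/disksize
# turn it into high-priority swap
mkswap /dev/zram0
swapon -p 100 /dev/zram0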

@nissan,
Sounds great. It would be wonderful if you could share your work on Ubuntu. If you have a compressed tar archive to get started with Ubuntu, that would be great. I have Debian 9.x and 10.0 running, but they are contributed versions and performance-wise not up to par with Debian 8.11. I did some tests with zram and zswap, but there is not a lot of memory to work with, and to get the maximum benefit would require writing a better, hardware-enabled compression implementation for LZ4 and/or LZO.
Ewald

Yes, it turns out compressed RAM won’t make a big difference, particularly with a single core. I was in the process of creating a new Ubuntu tar archive when my hard drive died after six years of service :(. I need a new hard drive to continue. But it is rather easy to create a tar archive using schroot with the ppc arch from any Ubuntu or Debian installation.
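As a rough sketch of one way to build such a rootfs tarball (this uses debootstrap plus qemu-user-static rather than schroot; the suite, mirror URL and paths are only examples):

apt-get install debootstrap qemu-user-static
# first stage: fetch packages for the foreign powerpc architecture
debootstrap --arch=powerpc --foreign jessie /mnt/ppc-root http://archive.debian.org/debian
# second stage runs inside the chroot via the qemu powerpc emulator
cp /usr/bin/qemu-ppc-static /mnt/ppc-root/usr/bin/
chroot /mnt/ppc-root /debootstrap/debootstrap --second-stage
# pack the result into a tar archive, preserving permissions
tar -C /mnt/ppc-root -czpf debian-powerpc-rootfs.tar.gz .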

Hi all. Been following this thread for a long time, but just signed up today so I can post a reply. Great work everyone.

Hoping you can help! I’ve got a 3TB MBL, still with the original drive, and never been opened up. I was using a FSTAB CIFS mount in Ubuntu 19.04 to access the drive, getting around 30MB/s write speed when testing with dd.

Something changed in 19.10, causing the drive to spin up every 15 minutes, even when not being accessed. The Samba log files suggest the connection terminates, then tries to re-establish, which wakes up the drive. Messed around for a while - experimented with a few different ways of mounting - with no luck.

So now I’m using an NFS mount in FSTAB, which doesn’t wake the drive, but it seems to be giving me even slower results. Between 15-18 MB/s on a good day, sometimes as low as 2.5 MB/s.

@Ewald I know you did a lot of work with the 2.6.32.70 kernel, and have used this as a performance yardstick for the other kernels you’ve compiled. As I only use the drive purely as a NAS, to keep things simple I thought I would go with this initially and stick with the WD firmware. Followed the instructions on your Git, but I’m seeing no performance increase over stock, and the dd tests seem to fluctuate more on the newer kernel.

I see you’ve referenced 2.6.32.71 in a couple of posts, with revised SATA drivers and a couple of other additions.

I was wondering if I’m missing something to get the 90+ MB/s speeds you’ve seen? Tried aligning the MTU on the client to 4088, which made no major difference. Other than that, is there anything else I should try, or should I just skip this completely and move to a 4.x kernel and newer Debian?

Thanks in advance!

@various-artists,
Sorry for late reply, I was in China for work and behind the great firewall.

When you are using kernel 2.6.32.7x on top of WD OEM firmware, you are connecting a late 2019 Linux stack, likely with kernel 5.x, with an early 2012 Debian Lenny stack. Since then there have been many changes/releases to/of CIFS, Samba, NFS and even core networking stack (e.g. congestion control, TCP/IP stack rewrite in kernel 5.x etc). The combination of WD OEM software and my custom 2.6.32.7x kernel has made sense for a long time because performance was close to the maximum the hardware could deliver (e.g. CPU and 100Mb/s networking limits) and WD provided patches for critical Linux bugs.

But since 2018 WD has stopped patching critical Linux defects, and with LTS kernel release 4.9.33 I started to achieve performance levels that were getting very close to the OEM SW/2.6.32.7x combination. Hence I focused all development and testing work on kernel 4.x, which in turn requires running the My Book Live on more recent Linux versions such as Debian Jessie or even Ubuntu 16.x+.

Now, to your question. I don’t have any experience with 19.10, but the fact that you are having issues with different subsystems (CIFS, NFS, network drive access causing drive spin-up, etc.) might suggest a more foundational issue, e.g. with core networking.
Some things to verify:

  • Are you running IPv4?

  • What is your netcat read/write throughput?
    e.g. dd if=/dev/zero bs=1024K count=256 | nc -n -u -q 0 -T throughput 2222 (see the two-sided sketch after this list)
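A minimal sketch of such a raw-throughput test over TCP (port 2222 and <mbl-ip> are placeholders; the -l -p listener syntax is the traditional/Debian netcat flavour, so adjust for your nc variant; the example above uses UDP via -u instead):

# on the MBL: listen on port 2222 and discard whatever arrives
nc -l -p 2222 > /dev/null
# on the client: push 256MB of zeros across the wire; dd prints the throughput when it finishes
dd if=/dev/zero bs=1024K count=256 | nc -n -q 0 <mbl-ip> 2222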

I can do some testing with Ubuntu Bionic Beaver 18.04 (Aug 2019), but only with kernel 4.9 (I no longer have a running OEM-based install), to see what it gives over NFS and CIFS, and will post an update to this message…
UPDATE:
copy of a 1GB (1024MB) file from the MBL (Debian Jessie, kernel 4.9.119) to Ubuntu 18.04 after flushing the MBL cache to make sure all data is read from disk:

  • over NFS v3: 9.75s (~105 MB/s)
  • over CIFS: 10.25s (~100 MB/s)
    In comparison, copying the same file to Windows 10 v1910: 9.24s (~110 MB/s) over Samba/CIFS

Hi @Ewald, appreciate you getting back to me!

Yeah, I guess trying to use an 8-year-old kernel with more modern OSes might not be as smooth sailing as I had hoped, haha. Although since my last post I’m actually back on your 2.6.32.70 kernel, and was doing a little experimentation.

I’ll detail how I mount the drives, as there may be some easy performance gains from modifying the FSTAB entry. I did experiment with adding “noatime”, “nodiratime”, and “async”, but I can’t remember them making much difference, or whether they even do anything on newer OSes.

Using NFS mount in FSTAB:
“192.168.0.45:/nfs/Public/ /home/home/Temp nfs rw,nfsvers=3 0 0”
(Adding nfsvers=4 did work on your kernel, but the performance was pretty much the same. I think only version 3 worked on the original WD supplied kernel…?)

user@ubuntu:~$ dd if=/home/home/Temp/test1.img of=/dev/zero bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 12.4515 s, 86.2 MB/s

user@ubuntu:~$ dd if=/dev/zero of=/home/home/Temp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 54.4311 s, 19.7 MB/s

Using CIFS mount in FSTAB:
“//192.168.0.45/public /home/home/Temp cifs username=user,password=user,_netdev,uid=user,vers=2.0 0 0”

user@ubuntu:~$ dd if=/home/home/Temp/test1.img of=/dev/zero bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 20.3607 s, 52.7 MB/s

user@ubuntu:~$ dd if=/dev/zero of=/home/home/Temp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 33.2887 s, 32.3 MB/s

Tried both MBL and Client with MTU 4088, but it seemed to hang when doing dd, so I guess maybe my router doesn’t support higher than 1500. (Is anything above 1500 classed as jumbo packets?)

So yeah, faster writes with CIFS, but much better reads with NFS. Both of these were using Ubuntu 19.10, but I did back everything up from the drive using a Windows 10 machine a few days ago - scared for my data now I’ve started messing around! - and left it overnight. It was doing between 70-75 MB/s to Windows.

To answer your questions, I thought I was only running IPv4, although when I do “ifconfig” on the MBL, it does show inet6 addresses… Could that slow things down?

Could you explain a bit more what you mean regarding netcat read/write? Am I using the correct dd commands to get an accurate performance reading? I’m not sure how to empty the MBL cache, so maybe they aren’t too accurate.

I wonder if using the original drive could be negatively impacting performance? I’d have thought improved writes should be achievable though…

I can try the newer kernel and Debian version, although, since you said your 2.6.x was the fastest firmware, I’m now wondering if I have issues somewhere else… :smile:

Thanks again!

On a side note, I’ve left my CIFS mount active on Ubuntu 19.10 whilst I wrote this post, and it doesn’t seem to have woken the drive for some time. Either your kernel, or some recent updates to Ubuntu, but looking at the Samba logs, the connection doesn’t seem to be dropping and re-establishing every 15 minutes like before. Strange!

@various-artists,
Answered with pleasure, but problem not yet fully solved :wink:
86+MB/s NFS read is solid performance. For reference, I am getting 105MB/s with kernel 4.9.119, but I have a 4TB drive from 2018 that is reading 150MB/s+ as reported by “hdparm -t /dev/sda”. In addition, I am only going over a gigabit switch that supports Jumbo packets and not over a router.
Try “hdparm -t --direct /dev/sda” on your drive to get a reference of how fast it can read (–direct bypasses drive physical cache).

If your switch or router does not support Jumbo packets, setting MTU to 4080 might actually slow things down a little bit. In general, you can expect 20% lower read performance with MTU 1500 compared to 4080. And yes, anything above 1500 bytes MTU is Jumbo packets.

With respect to write speed, I am getting 47MB/s for the equivalent write command.

With respect to IPv4: the MBL will show IPv6 addresses because IPv6 is enabled. But if you are connecting over IPv4, that’s what is being used. So you are fine there.

Netcat (the nc command) is a lower-level network transfer tool than NFS or CIFS, so it’s closer to raw network performance. You can also use it over UDP instead of TCP.
Your dd command is equally low level, but since either the source or the destination is CIFS- or NFS-mounted, you will be exercising those protocols. The dd commands you are using are totally fine.

CIFS reads of 70 to 75 MB/s are solid. Again, MTU=1500 and the actual disk read speed might be your bottlenecks. If you go over a router (and not a switch), that would definitely be another 25+% hit.

The “hangs” with MTU 4080 might be related to your network, but also to the 2.6.32.70 kernel. This issue existed on the OEM kernel too, and only when rewriting the network driver from scratch for kernel 4.19 did I find the root cause. I backported my solution to 4.9.x but never back to 2.6.32.70, as it’s a complex piece of code (it’s hardware related). That said, I only saw the hangs maybe once every 30 minutes under stress load, not when just transferring one file. You do need to unmount and remount your drives after changing the MTU, though.

Finally, yes, in raw numbers, 2.6.32.70 is the fastest kernel when benchmarked under ideal circumstances:

  • MTU 4080
  • client & server on a gigabit switch with Jumbo packet support
  • tuned SAMBA/NFS v3
  • 64k pagesize and 64k disk block size
  • simple, streamlined Debian Lenny Linux
  • latest hybrid WD HDD/SSD
    But it will hang once every 30 minutes with MTU=4080 under stress test (a simple network reset (ifconfig down/up), or cable out/in, fixes things, no need to reboot).
    But most folks don’t want to use a 64k page size, and even less a 64k disk block size, because you cannot easily mount the drive under Linux or Windows (see the example below). And everyone wants a modern Linux with a modern kernel, but these run many more services and daemons, consume more memory and are optimized for 64-bit architectures. Hence, I had to reset the performance circumstances and expectations completely.
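(As a concrete illustration of that last point, and purely as a sketch: a 64k-block ext4 filesystem is created with something like the command below, but it can only be mounted by a kernel running with a 64k page size, which is exactly why such a drive is hard to read on a regular PC. The device name is just an example.)

mkfs.ext4 -b 65536 /dev/sda2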

Nowadays, with kernel 4.19, I am achieving results that are within 5% overall of 2.6.32.7x with just a 16k page size and a regular 4k disk block size, a modern Linux with all its overhead, and zero hangs. I even achieved 122 MB/s read and write over CIFS with a few modifications to Samba and a new DMA driver that I wrote from scratch. I just don’t have the time to get it all stable, published and ready to support, as cross-compiling Samba is very complex…

In summary, a few thoughts:

  • Ubuntu 19.10 might not be the issue
  • check your raw disk read speed with hdparm -t --direct
  • check your network connection with respect to support for Jumbo packets
  • don’t go over a router if possible (a router with a built-in switch is OK of course)
  • take a look at my SAMBA config on GitHub (massive tuning possible)
  • is your NFS rsize/wsize tuned? (on v3 I am using 32k both ways; see the example mount line after this list)
  • are you using NFS over TCP or UDP?
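As a sketch of a tuned NFS v3 mount (reusing the IP and paths from your earlier fstab line; the rsize/wsize and other options are suggestions, not the exact values from my setup):

192.168.0.45:/nfs/Public /home/home/Temp nfs rw,nfsvers=3,proto=tcp,rsize=32768,wsize=32768,noatime 0 0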

Ewald

Any idea on a 5.4 kernel since it went LTS?

I have 5.4.25 running, but performance is very variable (sometimes good, sometimes bad, for unclear reasons), it does not survive 24 hours of stress testing, and there are still a lot of bugs. I am currently fixing defects in the MTD block layer for ROM access (kernel NULL pointer dereference). I noticed the OpenWrt team also has a 5.4 build, based on 5.4.24, but it also did not survive 24 hours of testing. For Debian I need more drivers to work. I also found that there are differences depending on the version of gcc being used. Not sure why…
There is quite some potential for a NAS with Samba 4.12 and io_uring driver support.
Will publish something in the next week…
Ewald

Good progress being made (stuck at home due to Covid-19 related government restrictions). Several bugs fixed and pushed upstream, e.g. the MTD-related kernel panic.
The VM defects (swap space not being used, OOM killer wrongly invoked) are tough ones. They occur on other platforms with little memory, not just the PowerPC-based MBL.

Bad news, my power supply blew up and damaged the board. No more development until I can get a replacement or find some solution :frowning:

Sorry to hear about your damaged board @Ewald. Are you planning to get a replacement so you can continue breathing new life into this old hardware? :smile:

Just a quick note, the links on your Github for the Debian Jessie 8.11 boot.scr files no longer work. 404 error.

I’ve been having some issues with my drive, where the NFS write speeds sometimes drop to 1.5 MB/s using dd, then other times are closer to the 19 MB/s in my previous post. (Not sure why, it’s quite random.) Thought I’d give your newer build a try, since previously I’d stuck with the 2.6.32.70.

Trying to get it running on a spare drive, following your guide for a smaller ext3 partition (sda1) with /boot, and an ext4 partition (sda2) with the remainder of the root file system, but I’m stuck with a solid yellow light on boot.

I made a boot.scr file by tweaking the one from the existing MBL image (booting from sda1 rather than the software RAID), and compared it to the one in your image, which is configured for TFTP. If you could re-upload so I can make sure I’ve definitely got it right, that would be appreciated!

Also, when I’m formatting the partitions on a new blank drive, I don’t need to specify block sizes for any of the partitions on the latest kernel, do I? Currently I’ve just been doing “mkfs.ext3 -m 1 /dev/sda1”, with -m 0 on the larger sda4, and mkswap for sda3, but maybe I need to configure this once I’m up and running?

Thanks in advance!

@various-artists,

  1. the boot.scr files are now all locally stored on GitHub, both in text form and precompiled in a tar file, e.g. here. Let me know if there is a specific file you cannot find.
  2. for NFS, it depends on many things like the version (v3 is best for 2.6.32.70), TCP or UDP, mount options, the block size of the disk (NFS does not handle >4K block sizes so well) and the type/size of files being transferred. Try with a 1GB file, it’s easier to measure. For a whole directory with many smaller files, 19 MB/s sounds reasonable without Jumbo packets (I get ~35 MB/s copying /usr from Ubuntu 19.04 to the MBL).
  3. Regarding your boot problem, try to get uboot netconsole going, so you can see what is going on. TFTP boot.scr example here.
  4. No need to specify block sizes, except for swap space, where the block size must match the kernel page size. So if you boot kernels with different page sizes, put a mkswap command in /etc/rc.local to reinitialize the swap space to match the running kernel each time (see the sketch after this list).
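A minimal /etc/rc.local sketch of that idea (assuming swap lives on /dev/sda3 as in your post above; adjust the device to your layout):

# reinitialize swap to the running kernel's page size, then enable it
swapoff /dev/sda3 2>/dev/null
mkswap /dev/sda3
swapon /dev/sda3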

Plenty of bargain second-hand MBLs in the US, but unfortunately I no longer live there…
Haven’t found anything reasonable in Europe…