[PACKAGE] Docker v18.03.1 CE for WD My Cloud

Grml… we’ve fixed the platform rename again (tnx @tonyrayo ) and changed to armel instead of armhf.
The remaining possible obstacle is the 64k page size.
I’ve got the arm toolchain setup, but I’m still working on a VM setup to test it on.

@Tfl I installed the updated package (v18.03.1-ce) over the old version, but docker version still reports 17.12.1-ce even after a reboot. Did I miss something?

It’s more likely that I missed something :slight_smile:
We’ll be releasing a new package very soon, as @tonyrayo created a port for the ARM based platforms and we have some memory usage improvements.

EDIT: ARM port is made by @JediNite, sorry for the mixup.


Do you mean the stuff I have been working on for the EX4100 ? Can’t see why it would not port to the other arm platforms. Would be good to test.


Hey @Tfl, I’m on my way back home from Canada (surprisingly fun place to celebrate U.S. Independence). I sent a message a while back asking if you had any tips for both the x86-64 CI image as well as ARM (possibly using QEMU). If that message was lost in transit let me know… I’ve been busy but not that busy; I was just giving you time to reply :smile:.

@JediNite It wasn’t an independent port, just a configuration mismatch that was causing an older version to be downloaded/compiled for certain architectures. The above-mentioned CI will be worked on this weekend and will hopefully help find these issues.

(For those interested, CI stands for continuous integration. Whenever a change is made to the GitHub code for a selected application, it will be compiled on a virtual NAS, as close to the real thing as possible, as a way of testing code changes.)


On the EX4100 at least, it does not have a kernel that is compiled with seccomp and keyring support, so in order for docker to work, these features need to be disabled in the docker-runc binary file. Check out the patch files I have put on my repo at https://github.com/JediNite/docker-ce-WDEX4100-binaries as it may be worth seeing if you can integrate these into your build process for these platforms.
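For anyone unsure whether their kernel has these features, a quick probe is possible before installing (a sketch; these /proc interfaces exist on most Linux kernels, but it’s worth verifying on the NAS itself):

```shell
#!/bin/sh
# Quick probe for the two kernel features docker-runc expects
# (a sketch; these /proc interfaces exist on most Linux kernels).

has_keyring() {
  # The keys sysctls only exist when the kernel was built with CONFIG_KEYS.
  [ -d /proc/sys/kernel/keys ]
}

has_seccomp() {
  # /proc/self/status gains a "Seccomp:" field when CONFIG_SECCOMP is set.
  grep -q '^Seccomp:' /proc/self/status
}

if has_keyring; then echo "keyring: present"; else echo "keyring: missing"; fi
if has_seccomp; then echo "seccomp: present"; else echo "seccomp: missing"; fi
```

If either comes back missing, that feature needs to be disabled in docker-runc, as described above.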



I released a new version that will use @JediNite’s build on ARM. By using symlinks to the binaries the memory footprint went down a lot as well.
My apologies to everyone who lost a lot of time trying to get it to run…

@tonyrayo while it certainly is an interesting challenge to create a CI setup, I’m not sure yet if it’s worth the trouble for the very few people using these packages. I’d start with getting the x64 firmware image to run in virtualbox or qemu (e.g. with vagrant). The installation of packages can be tested with

/usr/local/modules/usrsbin/upload_apkg -rSomeBinary.bin -d -f1 -g1

To init / start / stop the apps, just use the corresponding scripts, but it’s best to use full paths and a proper PATH in the environment.
We could add a test directory in each app with a validation script. But let’s discuss this on github.
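Such a validation script might look like this (a sketch only; the install path and script names are assumptions, to be settled on GitHub):

```shell
#!/bin/sh
# Hypothetical test/validate.sh for an app: run each check and print
# PASS/FAIL instead of aborting, so the CI log shows every result.
check() {
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $*"
  else
    echo "FAIL: $*"
  fi
}

# Assumed install path and script names; adjust per app.
APP_DIR="${1:-/shares/Volume_1/Nas_Prog/docker}"
check test -x "$APP_DIR/init.sh"
check test -x "$APP_DIR/stop.sh"
check docker version
```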

Anyway, my focus is more on custom firmware now and an easier installation procedure, so I’m not that often on the My Cloud OS anymore.

Could somebody be so helpful as to write a short step-by-step guide?
It’s hard for me to find a starting point to get this stuff running.

I connected via SSH to my PR2100, downloaded https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz with wget, unpacked it, copied the files from the docker folder to /usr/bin (as recommended on the linked page) and started “dockerd &”, with the result that it says /var/lib/docker already exists.

So should I delete this folder to start over?

Also, the linked installation description doesn’t say what to do with the *.bin file from the first linked download page…

Easiest is to use the binary installer from https://wdcommunity.com for your platform and install it in the applications tab of the web ui.
If you want more details, read the install and init scripts in the repo.

I have a DL4100 (FW 2.30.193) and have tried every binary installer available from wdcommunity.com. Additionally, I have tried all of the steps listed above, to no avail. I even diffed docker_daemon.sh.bak against daemon.sh and tinkered with some of the settings to see if I could make it work. Nope, nothing.

I have gotten the “not able to connect to docker.sock” error:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

I have gotten “docker not running”.
I have gotten some weird message when starting a container.

I have removed all changes and am back to the lousy 1.7.0.

Any ideas?

Try to install the package, then reboot and check if it’s running.
Without the weird message I can’t say more.
Check /var/lib/docker/docker.log for more info.

I recently experienced corruption in the portainer volume. I figured it would make sense to just start over by uninstalling/reinstalling docker. The problem is that the directory at /shares/Volume_1/Nas_Prog/_docker still exists after uninstall, and on reinstall, the corrupted volume is still present. Due to this, I am unable to start portainer. Any attempts to delete /shares/Volume_1/Nas_Prog/_docker immediately render the filesystem read-only until reboot. Is there any way to remove the persistent _docker directory so that I can start fresh?

/shares/Volume_1/Nas_Prog/_docker is the docker working directory, bind-mounted to /var/lib/docker.
I think a running process may be preventing the unmount of this _docker working directory, resulting in a dirty shutdown/uninstall.

Please stop docker and check the remaining mounts

cd /
export DDIR=/shares/Volume_1/Nas_Prog/docker
$DDIR/stop.sh $DDIR
mount | grep docker

There should not be any mount left.
If there is, try killing those processes and run the stop script again.
Get the PIDs of those processes with this command (and kill them):

fuser -cv $DDIR

When none are left, you can move the folder.
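Put together, the recovery sequence can be sketched as one small function (same paths and commands as above; the retry cap is a defensive addition, not part of the original steps):

```shell
#!/bin/sh
# Sketch of the recovery sequence above: stop docker, then kill whatever
# still holds the working directory until no docker mounts remain.
DDIR=${DDIR:-/shares/Volume_1/Nas_Prog/docker}

cleanup_docker() {
  cd /
  "$DDIR/stop.sh" "$DDIR"
  tries=0
  while mount | grep -q docker && [ "$tries" -lt 5 ]; do
    fuser -ck "$DDIR"            # kill processes using the mounted directory
    "$DDIR/stop.sh" "$DDIR"
    tries=$((tries + 1))
  done
  mount | grep docker || echo "no docker mounts left; safe to move _docker"
}

# usage (run as root, with docker apps stopped):  cleanup_docker
```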

I may look into creating a test suite for docker to reproduce this.

Running on a WD My Cloud EX2: after installing it, I restarted, and now I can’t start docker anymore. This is what I got (I’m on the latest firmware):

WARN[0000] containerd: low RLIMIT_NOFILE changing to max current=1024 max=4096
WARN[2018-10-11T19:55:05.258793661-07:00] unable to modify root key limit, number of containers could be limited by this quota: open /proc/sys/kernel/keys/root_maxkeys: no such file or directory
FATA[2018-10-11T19:55:05.259379687-07:00] Your Linux kernel version 3.2.40 is not supported for running docker. Please upgrade your kernel to 3.10.0 or newer.

@khoaofgod thanks for the report.
I’ll remove the EX2 and EX4 from the download page.
If the kernel is too old, you may need to explore alternative firmware such as Debian.
Do this at your own risk.

Hi, thanks for your efforts on this package. Unfortunately I failed to start docker on my My Cloud Gen2, ending up with ‘Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?’. I tried your instructions but they didn’t work for me, so I uninstalled docker in the web UI.

But when I run the command ‘find / -name ‘docker’’ via SSH, I find the path ‘/usr/local/modules/script/docker_daemon.sh’, and docker_daemon.sh cannot be removed: ‘rm: can’t remove ‘/usr/local/modules/script/docker_daemon.sh’: Read-only file system’

How can I remove this file?

No need to remove it… it gets symlinked into the path during boot and the init script of this app overwrites that symlink.
Check /var/lib/docker for logs when docker daemon fails to start.
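For the curious, the mechanism is just symlink overwriting; here is a harmless demonstration of it using /tmp stand-ins rather than the real firmware paths:

```shell
#!/bin/sh
# Demonstrates the symlink-overwrite mechanism: at boot the firmware links
# its stock docker_daemon.sh into the PATH, and this app's init script then
# re-points that symlink at its own copy. The read-only original under
# /usr/local/modules is never touched. /tmp stand-ins keep the demo harmless.
WORK=$(mktemp -d)
echo "stock"  > "$WORK/firmware_docker_daemon.sh"
echo "custom" > "$WORK/app_docker_daemon.sh"

# Boot: the firmware creates the link to its read-only copy.
ln -sf "$WORK/firmware_docker_daemon.sh" "$WORK/docker_daemon.sh"
# App init: overwrite the link, leaving the original file alone.
ln -sf "$WORK/app_docker_daemon.sh" "$WORK/docker_daemon.sh"

cat "$WORK/docker_daemon.sh"   # prints "custom"
```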

Hi all,
Amazing work guys, standalone docker is working like a charm.
I’m now trying to integrate it into a swarm cluster. I’m facing an issue with the overlay network, and I think it’s because the vxlan module is not loaded.
Does any of you know how to find or compile a vxlan module for this kernel and then load it?

I have been able to fix it. Here is the how-to:

  1. Create a container to compile the necessary modules
  2. In it, download the kernel source and compile the modules
  3. Transfer the modules to your NAS and load them into the kernel

Part of this is sourced from [GUIDE] How to Build Custom Firmware.

Activate SSH on your NAS, have Docker installed, and run a container:
docker run -it ubuntu:latest /bin/bash
apt-get update
apt-get install -y build-essential binutils gcc-multilib g++-multilib lib32gcc1 lib32ncurses5 lib32z1 git
apt-get install -y autoconf libtool pkg-config libncurses-dev
apt install -y bc nano wget
wget http://downloads.wdc.com/gpl/WDMyCloud_PR2100_GPL_v2.31.149_20181015.tar.gz
tar xvzf WDMyCloud_PR2100_GPL_v2.31.149_20181015.tar.gz
rm WDMyCloud_PR2100_GPL_v2.31.149_20181015.tar.gz
cd WDMyCloud_PR2100_GPL_v2.31.149_20181015/kernel
tar xvzf linux-4.1.13.tar.gz
cd linux-4.1.13
nano .config

Search for VXLAN (F6 in nano), uncomment the line and set CONFIG_VXLAN=m


./xbuild.sh clean
./xbuild.sh build
scp net/ipv4/udp_tunnel.ko drivers/net/vxlan.ko net/ipv6/ip6_udp_tunnel.ko <IP_OF_YOUR_NAS>:/root

Back on your NAS:
cd /root
insmod ip6_udp_tunnel.ko
insmod udp_tunnel.ko
insmod vxlan.ko

All the Swarm networks will now work!
Note: the modules are not loaded at boot. Let’s put this on the TODO list; if anyone can detail the procedure for loading the modules at boot, thanks!
Note 2: if a new firmware is released, the same procedure has to be done again with the latest source code… if it is released.


Thanks for sharing!

I suggest putting the modules in /shares/Volume_1/Nas_Prog/docker/modules and loading/unloading them with init.sh and clean.sh.
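A sketch of what init.sh/clean.sh could do with them (the modules directory is an assumption; the insmod order matters, since vxlan.ko needs the tunnel helpers loaded first):

```shell
#!/bin/sh
# Sketch for init.sh/clean.sh: load the vxlan modules at app start and
# unload them at app stop. MODDIR and module names follow this thread.
MODDIR=${MODDIR:-/shares/Volume_1/Nas_Prog/docker/modules}

load_modules() {
  # insmod does no dependency resolution, so the tunnel helpers
  # must be loaded before vxlan.ko.
  for m in ip6_udp_tunnel.ko udp_tunnel.ko vxlan.ko; do
    [ -f "$MODDIR/$m" ] && insmod "$MODDIR/$m"
  done
  return 0
}

unload_modules() {
  # Reverse order for removal; only remove what is actually loaded.
  for m in vxlan udp_tunnel ip6_udp_tunnel; do
    grep -q "^$m " /proc/modules && rmmod "$m"
  done
  return 0
}

# init.sh would call load_modules; clean.sh would call unload_modules.
```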

About note 2: I think it’s unlikely that they will bump the kernel for OS3, but they may backport ‘critical’ fixes.
More info on adding kernel modules for USB DVB tuner support in my tvheadend guide.