[PACKAGE] Docker v18.03.1 CE for WD My Cloud

The Docker that ships with the firmware is an old version (1.7). It took me a while to figure out, but here is an installer for the most recent Community Edition, bundled with the Portainer docker management portal running on port 9000.

Official Docker website

Official Portainer website

Package binary and source are available on my new WD package repo.


Binaries for ARM provided by @JediNite. Thanks!


  • Install the .bin file via the WD web UI: Applications → Manual Install.
  • Select Configure in the WD web interface to get redirected to Portainer running at port 9000.
  • For a manual install, see install.sh and daemon.sh in the bundled source code.
    The installer is based on the official install instructions for binaries.
  • Your docker 1.7 storage mapper is backed up under Nas_Prog/_docker.bak.
  • If you have trouble getting it running, stop the old docker (see below for how) and reinstall.
    Use /usr/sbin/docker_daemon.sh stop and make sure nothing docker-related remains mounted.
  • Uses the vfs storage backend, which may be slower than the original devicemapper.
    There’s a guide here demonstrating a BTRFS backend. Thanks for the instructions!
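On the storage backend point: whether anything beyond vfs is even possible depends on kernel support, which a quick look at /proc/filesystems reveals. A sketch (vfs itself needs no kernel support, so it always works):

```shell
# Storage backends other than vfs need kernel support; this lists any
# overlay/btrfs support the running kernel advertises.
grep -E 'overlay|btrfs' /proc/filesystems || echo "no overlay/btrfs support; vfs it is"
```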

Docker compose
Setup based on linux instructions at https://docs.docker.com/compose/install/#install-compose
Works only for x64 NAS!

dc=/usr/local/bin/docker-compose   # install target; pick any location on your PATH
curl -L https://github.com/docker/compose/releases/download/1.25.5/run.sh -o $dc
chmod +x $dc

For ARM based My Cloud devices there is a docker package by @aleksjej here

The alternative is going native with Python and pip (e.g. from Entware):

opkg install python-pip
pip install setuptools docker-compose

What to do with docker compose? Here’s a pretty awesome example.

I’m looking forward to your feedback!


Thanks for making the package. I have been using docker for a while to get things running that are not in the app store.
I installed the package; however, I ran into some issues on my PR4100.

After installing and rebooting, I checked docker ps:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

and after running dockerd I get:
WARN[2018-03-20T19:34:16.647440057+01:00] could not change group /var/run/docker.sock to docker: group docker not found
INFO[2018-03-20T19:34:16.648406101+01:00] libcontainerd: started new docker-containerd process pid=19422
INFO[0000] starting containerd module=containerd revision=9b55aab90508bd389d7654c4baf173a981477d55 version=v1.0.1
INFO[0000] loading plugin "io.containerd.content.v1.content"... module=containerd type=io.containerd.content.v1
INFO[0000] loading plugin "io.containerd.snapshotter.v1.btrfs"... module=containerd type=io.containerd.snapshotter.v1
WARN[0000] failed to load plugin io.containerd.snapshotter.v1.btrfs error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module=containerd
INFO[0000] loading plugin "io.containerd.snapshotter.v1.overlayfs"... module=containerd type=io.containerd.snapshotter.v1
INFO[0000] loading plugin "io.containerd.metadata.v1.bolt"... module=containerd type=io.containerd.metadata.v1
WARN[0000] could not use snapshotter btrfs in metadata plugin error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" module="containerd/io.containerd.metadata.v1.bolt"
INFO[0000] loading plugin "io.containerd.differ.v1.walking"... module=containerd type=io.containerd.differ.v1
INFO[0000] loading plugin "io.containerd.gc.v1.scheduler"... module=containerd type=io.containerd.gc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.containers"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.content"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.diff"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.events"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.healthcheck"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.images"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.leases"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.namespaces"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.snapshots"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.monitor.v1.cgroups"... module=containerd type=io.containerd.monitor.v1
INFO[0000] loading plugin "io.containerd.runtime.v1.linux"... module=containerd type=io.containerd.runtime.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.tasks"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.version"... module=containerd type=io.containerd.grpc.v1
INFO[0000] loading plugin "io.containerd.grpc.v1.introspection"... module=containerd type=io.containerd.grpc.v1
INFO[0000] serving... address="/var/run/docker/containerd/docker-containerd-debug.sock" module="containerd/debug"
INFO[0000] serving... address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
INFO[0000] containerd successfully booted in 0.006067s module=containerd
ERRO[2018-03-20T19:34:16.677432975+01:00] 'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.
ERRO[2018-03-20T19:34:16.680019896+01:00] 'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded.
ERRO[2018-03-20T19:34:16.680126613+01:00] Failed to built-in GetDriver graph devicemapper /var/lib/docker
INFO[2018-03-20T19:34:16.681570652+01:00] Graph migration to content-addressability took 0.00 seconds
WARN[2018-03-20T19:34:16.681820683+01:00] Your kernel does not support cgroup memory limit
WARN[2018-03-20T19:34:16.681858677+01:00] Unable to find cpu cgroup in mounts
WARN[2018-03-20T19:34:16.681887172+01:00] Unable to find blkio cgroup in mounts
WARN[2018-03-20T19:34:16.681916785+01:00] Unable to find cpuset cgroup in mounts
WARN[2018-03-20T19:34:16.682002271+01:00] mountpoint for pids not found
Error starting daemon: Devices cgroup isn't mounted

What did I do wrong?

Thanks for trying the package and providing the logs.
Did you upgrade docker manually before?

Make sure the old docker is not running.

/usr/sbin/docker_daemon.sh shutdown

It’s recommended to clear your docker root (storage mapper)… my app tries to reuse the docker root if the devicemapper directory is not found, as it then assumes you’re already running my version.

 rm -rf /shares/Volume_1/Nas_Prog/_docker

Then you can set up docker as follows (probably not necessary). It cleans up some of the old docker state that is loaded on boot.

cd /shares/Volume_1/Nas_Prog/docker
sh init.sh .    # note the dot

Then to start

sh start.sh .   # note the dot

Check docker status with either of these

./daemon.sh status
docker ps

And to stop

sh stop.sh .   # note the dot

The scripts take care of mounting/umounting the cgroupfs layer. This is required to get clean reboot behavior.
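If you want to double-check that layer yourself, listing the cgroup mounts is enough; the earlier "Devices cgroup isn't mounted" error means this list was missing entries:

```shell
# List mounted cgroup hierarchies; docker needs devices, cpu, memory, etc.
grep cgroup /proc/self/mounts || echo "no cgroup mounts found"
```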
If you have any more issues, please provide output of these commands

cd /shares/Volume_1/Nas_Prog/docker
./daemon.sh status && echo OK
./daemon.sh issetup && echo OK
docker ps            # this should show some containers when running
docker --version     # this should be 17.12.1
which docker         # this should be /sbin/docker
ls -l /var/lib       # /var/lib/docker should be a valid symlink
ls -l ../_docker     # show the contents of docker root
sh stop.sh .         # stopping docker may show some leftover mounts
sh start.sh .        # starting docker shows the status as well

Good luck

I’m also quite confident that the installer should work fine if the old docker is not running.

ps | grep docker

Kill any remaining pids.
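A sketch of that cleanup; the [d] in the grep pattern keeps grep's own process line out of the match (review the PID list before killing anything):

```shell
# Pick docker-related PIDs out of `ps` output; the [d] keeps the grep
# process itself from matching its own command line.
docker_pids() {
    grep '[d]ocker' | awk '{print $1}'
}

# Review the list first, then kill what is left over:
ps | docker_pids
# ps | docker_pids | xargs -r kill
```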

mount | grep docker
cat /proc/self/mounts | grep docker

Umount any volume showing up here.
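That check can be wrapped up as a small sketch (the awk field and pattern are assumptions based on the standard /proc/self/mounts format):

```shell
# List docker-related mount points (second field of /proc/self/mounts);
# pass a different file to test the pattern.
docker_mounts() {
    awk '/docker/ {print $2}' "${1:-/proc/self/mounts}"
}

# Review the list, then unmount each entry:
docker_mounts
# for m in $(docker_mounts); do umount "$m"; done
```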
Then modify install.sh in the Nas_Prog/docker directory by commenting out the copy code on line 17:

# install all package scripts to the proper location
# cp -rf $path_src $NAS_PROG

Then you can reinstall the package with

cd /shares/Volume_1/Nas_Prog/docker
sh install.sh .. ..

I tested that with several docker root configurations (old, new, missing) and it works for all - if the old docker is not running.

Thanks, I stopped the current docker and followed your steps; it works now. I had to reinstall all my containers, but I’m glad I kept their configs 🙂

MASSIVE EDIT: I forgot to restart my NAS; everything works great 😛

Kept from original post > A side question to @Tfl, I would like to try and compile some native 3rd party apps. Does the build env resemble what you used for firmware building?

What will happen if I install a firmware update after this docker installation?

It should be unaffected, like other apps/configs.

Edit: Oh, but unlike other apps/configs, docker is included in the original firmware… that’s a really good question.

The OS is mainly a read-only squashfs image getting loaded in RAM, providing fast boot times and a guaranteed fresh, clean environment on boot.
One of the steps in the bootscript is to create symlinks to the read-only scripts in the root OS.
At the end of the bootscript, the apps located in the Nas_Prog directory (and registered in the xmldb) are started.
Finally it starts the internal docker with the /usr/sbin/docker_daemon.sh script, which is a symlink.

However, when this app starts, it renames the docker_daemon script symlink, so the bootscript won’t be able to start the internal docker in its final step (which is good). Instead, this app copies the new docker binaries to the OS in RAM and starts those with a slightly modified version of docker_daemon.sh.
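The rename step can be sketched roughly like this (the function name and paths are illustrative, not the app’s actual code):

```shell
# disable_firmware_docker: rename the firmware's docker_daemon.sh symlink
# under the given root so the bootscript can no longer start the old docker.
# (Illustrative sketch; the real app does more, e.g. copying new binaries.)
disable_firmware_docker() {
    daemon="$1/usr/sbin/docker_daemon.sh"
    if [ -L "$daemon" ]; then
        mv "$daemon" "$daemon.orig"
    fi
}
```

On the NAS the root would be `/`; uninstalling simply restores the original name so the firmware docker starts again on boot.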

A firmware upgrade will install a new squashfs image but it doesn’t touch the apps (unless they would write an upgrade hook which will surely not happen anymore). New firmware will still try to start its old docker version, but as long as this app is installed, the docker_daemon.sh script will already be renamed.

If, for some bizarre reason (but please tell me if it’d make sense), you’d want to go back to the internal docker, just uninstall and the firmware docker will start on boot. If you experience problems, you might need to remove _docker storage root in the Nas_Prog directory and restore the _docker.bak backup.

TL;DR: it works.

@Tfl Amazing work! thanks

Thanks for a detailed explanation. An updated Docker environment is a huge saving grace for this NAS, so it’s nice to know that current containers shouldn’t get nuked if/when a firmware update is applied.

I installed docker 17 and the install went fine, but afterwards all my containers/images were gone.
I’m trying to reinstall Jackett and Sonarr with these commands:

docker run --name=jackett -p 9117:9117 -e PUID=0 -e PGID=0 -e TZ=Europe/Rome -v /mnt/HD/HD_a2/docker-config/jackett:/config -v /mnt/HD/HD_a2/Jackett/blackhole:/downloads --restart=always linuxserver/jackett

docker run --name sonarr -p 8989:8989 -e PUID=0 -e PGID=0 -v /etc/localtime:/etc/localtime:ro -v /mnt/HD/HD_a2/docker-config/sonarr:/config -v /mnt/HD/HD_a2/Public/TV\ Shows:/tv -v /mnt/HD/HD_a2/Transmission:/downloads --restart=always --privileged linuxserver/sonarr

But i get an error:

Unable to find image 'linuxserver/jackett:latest' locally
latest: Pulling from linuxserver/jackett
ad965b2cd940: Pull complete 
36c62cc3be64: Pull complete 
4c30609896db: Pull complete 
3aee3cd10d7a: Pull complete 
812f952f77b7: Pull complete 
f36dfa7eba12: Pull complete 
433d8d396cdf: Pull complete 
1ed2672d3116: Pull complete 
88e57060ac88: Pull complete 
7059404af9da: Pull complete 
4ada00417837: Pull complete 
Digest: sha256:9c3a9b30efcdddad830bbb8513322c8ae0e2ddfbd46d4454d4bf5bc86eb1dbe7
Status: Downloaded newer image for linuxserver/jackett:latest
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:402: container init caused \"process_linux.go:385: running prestart hook 0 caused \\\"fork/exec /sbin/dockerd (deleted): no such file or directory\\\"\"": unknown.
ERRO[0207] error waiting for container: context canceled

It says

no such file or directory

Do all the directories you’re referring to actually exist?
Anyway, drop the PUID, PGID, map the shares as /shares/Public, /shares/Transmission and provide proof that all these directories exist.
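A quick way to check the bind-mount sources before running the containers might look like this (the paths are examples; substitute the ones from your own docker run command):

```shell
# Each -v host path should exist beforehand; docker typically creates
# missing host paths as root-owned directories, which is rarely what you want.
for d in /shares/Public /shares/Transmission; do
    if [ -d "$d" ]; then echo "$d: ok"; else echo "$d: MISSING"; fi
done
```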
Also the output of all these commands

ls -l /shares/Volume_1/Nas_Prog/_docker
ls -l /shares/Volume_1/docker-config
ls -l /sbin/docker*

I simply did

docker run --name=jackett -p 9117:9117 -e TZ=Europe/Rome -v /shares/somecoolshare/jackett:/config -v /shares/mylegaldownloads:/downloads --restart=always linuxserver/jackett
docker start jackett

Portainer also comes pre-installed. You will still need to make valid shares and set the env settings, port forwardings, etc., however, if you are more comfortable in a WebUI environment, this might be helpful. It’s recommended that you know how to successfully create a container from the shell first; Portainer allows for a nice UI overview after that point.

Added docker compose instructions to opening post.

Hi Tfl and thank you for providing this installer. Has anyone managed to get Portainer to work properly with the EX2 yet? It uses port 9000, which clashes with the built-in Twonky Server. Is it possible to change it to another port, by editing the config files etc? Thanks.

See https://github.com/WDCommunity/wdpksrc/blob/master/wdpk/docker/install.sh#L77 and https://github.com/portainer/portainer/pull/299

Clean up the old portainer (note that I’m actually removing all containers)

docker stop $(docker ps -q)
docker rm $(docker ps -aq)

And run it at port 9001 (or wherever you want)

docker volume create portainer_data
docker run -d -p 9001:9000 \
           -v /var/run/docker.sock:/var/run/docker.sock \
           -v portainer_data:/data portainer/portainer:arm

As I’d like to know if it works without the :arm tag, please let me know if this one works as well:

docker run -d -p 9002:9000 \
           -v /var/run/docker.sock:/var/run/docker.sock \
           -v portainer_data:/data portainer/portainer

If it doesn’t I’ll update the installer.

Hi again, and thanks for the response. My apologies for not double-checking the other day, or I would have caught and explained another error sooner. Please correct me, but I believe the EX2 does not come with a docker binary? I wiped my unit, flashed the latest firmware, and a grep in sbin came up empty. I installed the package (it finished too quickly to do anything), then SSH’d into the unit and dug around in the docker folder to view the scripts. In install.sh, I noted this line:

wget "https://download.docker.com/linux/static/stable/${ARCH}/docker-17.12.1-ce.tgz" --no-check-certificate

Running uname and then executing the script gives me this:

Linux home 3.2.40 #4 Fri Jul 31 16:04:18 CST 2015 armv7l GNU/Linux


The “armv7l” portion of the URL does not exist on the Docker download site, which lists only “armel” and a few others. The rest of the script then fails, and I have to clean up everything manually.

Does this mean Docker will not work on the EX2? Please advise, and thanks for reading.

Sorry for the inconvenience… that bug is fixed but I didn’t create a new release yet.
The correct URL uses armhf instead of armv7l … see here
Please note that I don’t have access to an EX2 (or any ARM) device, which is why I flagged support as experimental.

EDIT: update available, see OP.

Hello again. I’ve tested the updated package but got the same negative result in the dashboard. I checked the install script, and uname still yields “armv7l” instead of “armhf”, producing an invalid download URL. The “extract” and “remove” sections of the script reference the older v17 package too, so it ends up with errors during install. I did test the “armhf” package previously, but that didn’t work. I could try to figure out how to install manually, but the EX2 never had Docker support from the start, and it has a measly 512MB of RAM, so it’s probably not worth the trouble. It is difficult to work without the hardware to test on, but thanks again to you and the team for taking the time and effort to make these packages.