[GUIDE] How to install Ubuntu 18.04 Server on the My Cloud PR4100 NAS


Disclaimer: do this at your own risk. No fancy web gui here, just raw unix power.


You’ll need:

  • a PC running Ubuntu 18.04 or similar… and some experience with unix
  • a USB flash drive (8GB+)
  • a WD My Cloud PR2100/PR4100


Install the emulator and UEFI firmware on your PC:

sudo apt install qemu-kvm ovmf

Prepare a working directory

mkdir ubuntu && cd ubuntu

Copy the UEFI bootloader

cp /usr/share/ovmf/OVMF.fd bios.bin

Download the Ubuntu 18.04.1 server iso.

Find out the name of your USB flash drive with lsblk. I’ll use /dev/sdX here.
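If several disks are attached, lsblk's transport column makes the USB stick easy to spot. This is just a convenience on top of the plain lsblk call; the chosen output columns are a suggestion, not something the guide requires:

```shell
# Show whole disks only (-d) with size, transport and model;
# the flash drive is the entry with TRAN=usb
lsblk -d -o NAME,SIZE,TRAN,MODEL
```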
Boot the iso installer.

sudo kvm -bios ./bios.bin -cdrom <path_to_iso> -hda /dev/sdX -boot once=d -m 1G

It should boot into a black grub screen and start the Ubuntu installer.
Complete the installation with the defaults plus any extra packages you may be interested in (e.g. Nextcloud).
Note down the username and password; you’ll need them to log into the machine later.
At the end, it will reboot and ask you to remove the cdrom.
Just close the window to shut down the virtual machine.
Then boot without the cdrom, straight from the USB flash drive.

sudo kvm -bios ./bios.bin -hda /dev/sdX -m 1G

Log into the virtual machine and update packages if you like.

Ubuntu was installed while running against the VM’s virtual network interface, and udev’s persistent network naming has recorded that name in the configuration.
You’ll see the current network interface is called ens3 or similar.

ip addr show

That interface name won’t exist on the actual My Cloud hardware, whose onboard interfaces are named eno1 and eno2, so networking would fail after moving the drive.
Create the netplan configuration as follows

sudo editor /etc/netplan/01-netcfg.yaml

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: true
    eno2:
      dhcp4: true

Apply it with sudo netplan apply. The NAS then gets a dynamic IPv4 address on both of its onboard (eno) interfaces.

Example with static IP and bonding

Here’s how to combine the throughput of the 2 network interfaces on a single static IP address.

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
    eno2:
      dhcp4: no
  bonds:
    bond0:              # the bond name is arbitrary
      interfaces: [eno1, eno2]
      addresses: []     # list your static address(es) here
      parameters:
        mode: 802.3ad
        mii-monitor-interval: 1
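One caveat worth knowing: mode 802.3ad only works if the switch supports LACP. On an unmanaged switch, a mode that needs no switch cooperation is the safer assumption; a sketch of the changed parameters block (interface names as above, the rest of the file unchanged):

```yaml
      parameters:
        # active-backup needs no switch support: one link carries traffic,
        # the other takes over on failure (no throughput aggregation)
        mode: active-backup
        mii-monitor-interval: 1
```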

More info (static IP, bonding, …) on https://netplan.io

Hardware Control
Thanks to the research of Michael Roland and @dswv42 we now have full control over the fan, lcd, buttons and sensors. Ubuntu ships with the 8250_lpss module, so you don’t need to build a custom kernel.
The PMC is accessible at serial port /dev/ttyS5.

git clone https://github.com/WDCommunity/wdnas-hwtools
cd wdnas-hwtools
sudo ./install.sh

The Ubuntu boot disk is now ready. Shutdown with

sudo halt -p

and plug the USB drive in the PR4100 NAS.
Boot up and enjoy!


Download example boot disk image

Download my image here, unzip it (use 7-Zip on Windows) and write it to a 16GB+ flash drive.
Direct unzip and write with

gunzip -c foo.img.gz | sudo dd of=/dev/sdX bs=4M
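To see the pipeline work before pointing dd at a real device, here's a harmless dry run against scratch files (all /tmp paths and the file contents are purely illustrative):

```shell
# Build a tiny fake image, compress it, then restore it the same way
printf 'bootable image contents' > /tmp/foo.img
gzip -f /tmp/foo.img                           # creates /tmp/foo.img.gz
gunzip -c /tmp/foo.img.gz | dd of=/tmp/restored.img status=none
cat /tmp/restored.img
```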


Login: wdnas
Password: mycloud

Warning: risk of data loss if you pick the wrong device.
To grow the file system to use the complete USB boot drive, delete the second partition.

sudo sgdisk /dev/sdX --delete=2

Create it again; it will automatically use all the available space.

sudo sgdisk /dev/sdX --new=2

Refresh the partitions

sudo partprobe

Now that the partition has been resized, you can grow the file system.

sudo e2fsck -f /dev/sdX2
sudo resize2fs /dev/sdX2

Create a new ZFS array

Insert your disks (hotplug is allowed). List them.

wdnas@wdnas:~$ lsblk -d
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0          7:0    0 86.9M  1 loop /snap/core/4917
loop1          7:1    0 89.5M  1 loop /snap/core/6130
sda            8:0    0  1.8T  0 disk 
sdb            8:16   0  1.8T  0 disk 
sdc            8:32   0  1.8T  0 disk 
sdd            8:48   0  1.8T  0 disk 
sde            8:64   1 14.3G  0 disk 
mmcblk0      179:0    0  3.7G  0 disk 
mmcblk0boot0 179:8    0    2M  1 disk 
mmcblk0boot1 179:16   0    2M  1 disk 

Here we see sde is the USB boot disk.
Create a mirror pool over sda and sdb, based on the Ubuntu tutorial.
It’s recommended to name your pool media.

sudo zpool create media mirror sda sdb

Alternatively, create a raidz pool over 4 disks. This is similar to a RAID5 pool, using 1 disk for parity.

sudo zpool create media raidz sda sdb sdc sdd

In order to use it, you need to create a file system (also called dataset) on the zpool. This is similar to a ‘share’ in the My Cloud OS.
Here’s an example

sudo zfs create media/pictures

The file system gets mounted automatically at /media/pictures.

Or import existing data

SSH into the My Cloud NAS running Ubuntu. If the disks hold an existing Linux software RAID (md) array from the original firmware, check and assemble it.

cat /proc/mdstat
sudo mdadm --assemble --scan

or if you used my FreeNAS image to create a ZFS array

sudo apt install zfsutils-linux
sudo zpool import

Follow the instructions.
It’s recommended to name your zpool ‘media’ so that the Ubuntu snap system can easily use it.

Setup Nextcloud

I couldn’t explain it better than this Digital Ocean guide.
By default all data is stored on the USB boot disk, which is not what you want.
Here’s how to move the nextcloud snap data directory to your zfs pool.

First create a zfs dataset for nextcloud.

sudo zfs create media/nextcloud

Allow the nextcloud snap to use the zpool

sudo snap connect nextcloud:removable-media

Change the data path in /var/snap/nextcloud/current/nextcloud/config/autoconfig.php

sudo sed -i "s#'directory' => .*#'directory' => '/media/nextcloud/data',#" /var/snap/nextcloud/current/nextcloud/config/autoconfig.php
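If you'd like to convince yourself the substitution does what you expect before touching the live file, try it on a scratch copy first. The PHP below is an assumed stand-in for autoconfig.php (only the 'directory' line matters for the sed pattern):

```shell
# Scratch copy with an assumed shape of the nextcloud autoconfig file
cat > /tmp/autoconfig.php <<'EOF'
<?php
$AUTOCONFIG = array(
  'directory' => '/var/snap/nextcloud/common/nextcloud/data',
);
EOF
# Same substitution as the guide: replace whatever value is there
sed -i "s#'directory' => .*#'directory' => '/media/nextcloud/data',#" /tmp/autoconfig.php
grep "'directory'" /tmp/autoconfig.php
```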

Restart nextcloud

sudo snap restart nextcloud.php-fpm

Now visit your nextcloud website and create the admin user.

Disable internal flash memory

If the internal flash memory is completely broken, you may be unable to restore the original WD OS.
Installing Ubuntu is a solution, but the kernel keeps polling the dead flash device, causing system freezes (you’ll see the errors in the dmesg output).
The fix is to blacklist the mmc_block driver.

sudo editor /etc/modprobe.d/blacklist.conf

Add a line with

blacklist mmc_block


Then rebuild the initramfs so the blacklist takes effect at boot:

sudo update-initramfs -u


Great thanks to the user “Tfl” for helping me so much.
I had a completely bricked PR2100 (“probably the internal flash is fried”).
I don’t know exactly how he did it, but I now have the PR2100 running Ubuntu and Nextcloud.
It seems that the PR2100 now works better than with the original software.
Thank you once again.