Save WD MyCloud SSH Key permanently


I want to hibernate my WD MyCloud after a backup via SSH.
The whole thing works in principle already.
However, the SSH key that I store is always overwritten.
I already found out that I have to make permanent adjustments in /dev/ubi0_0.
However, I do not know where the /home/root/ directory is.
Can anyone help me with this?

I am interested in the same thing. I couldn’t find the original post but someone had written this:
“To keep your authorized keys after reboot, use a persistent home directory, e.g. by installing Entware”. I found the following links (for my PR4100) to Entware:

Link to OS 3:
Link to OS 5 packages:

However, I have not investigated its use, nor attempted to install it yet.

Thank you very much!
That was exactly what I was looking for.

I installed Entware via the GUI, and without any configuration the home directory was unchanged even after a reboot.
Hopefully it stays that way.


I chose to do things a bit differently for my EX2 Ultra. Although Entware now gives you a persistent /home folder, which preserves its authorized_keys file between reboots, the built-in SSH server configuration is in /etc/ssh, which is lost after a reboot. So I’d be stuck with password authentication still being accepted after a reboot, which is a security hole. My goal was to restrict the server to accepting public-key-only connections and make that survive a reboot.

After using the link above to install entware for OS 5, I used opkg to install dropbear, a lightweight ssh server.

# opkg install dropbear

I copied the keys I wanted authorized from another ssh server computer to:


and I made sure I could ssh in after using the web interface to switch off the built in ssh server. Then I edited:


to include the line:


This disallows password-based logins. I still ssh in as the built-in user account sshd, even after switching off the built-in ssh server. To restart dropbear after changing the conf file, use this command:

# /opt/etc/init.d/S51dropbear restart

This will survive a reboot and even a firmware update. It uses ecdsa, rsa, or ed25519 host keys.
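For reference, these are the dropbear options relevant here (from the dropbear man page). The exact syntax expected by the Entware dropbear.conf is an assumption on my part, but such files typically just carry the daemon’s command-line arguments:

```
# Relevant dropbear command-line options:
#   -s      disable password logins entirely (public-key only)
#   -g      disable password logins for root only
#   -p 28   listen on port 28 instead of the default 22
# A dropbear.conf passing these might simply read (format assumed):
-p 28 -s
```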

I found one more trick. There can be a lot of typing to navigate the storage folders, so an alias can help.

# nano /shares/Volume_1/Nas_Prog/entware/profile

Put in whatever alias command you want, for example:

alias cdv1='cd /mnt/HD/HD_a2/Vol1'

Save the file, then reboot the NAS to use the alias.


Thank you for these tips @nfsmith . I followed the steps but I am on a PR4100 and it might be different. Once I turn off SSH from the PR4100 web (dashboard) > Settings > Network, I am unable to ssh any longer.

Hi pleguellec,

When you’re setting it up, you might want to have the built-in ssh server running on port 22 and set up dropbear to run on another port, say port 28. Then if they’re both working, you can ssh into one or the other by targeting port 22 one time and another time target port 28 to test dropbear. Once you have dropbear working, you can disable the built-in ssh server. On the client side, you can set up a config for the NAS host that automatically connects via port 28, so you don’t have to remember to tell the client to use port 28 each time you connect. Other ports may work, but 28 is unassigned by IANA, and less likely to be problematic.
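The client-side config mentioned above could look like this in ~/.ssh/config on the client machine (a sketch; host name, IP address, and key path are placeholders):

```
# ~/.ssh/config -- lets you connect with "ssh nas" instead of
# remembering to pass the port each time
Host nas
    HostName 192.168.1.50      # IP or host name of your MyCloud (placeholder)
    Port 28                    # the dropbear port chosen above
    User sshd                  # the built-in account mentioned earlier
    IdentityFile ~/.ssh/id_ed25519
```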

You can tell the dropbear server which port to use by editing dropbear.conf at this path on the NAS:
For example, just make it read:

To make it take effect, restart dropbear on the NAS:
/opt/etc/init.d/S51dropbear restart

Later, once you’re sure it’s working, you can add the following line to disable password authorization:

Hope this helps.


Thank you again @nfsmith . I went ahead and specified to use port 28 (while native PR4100 ssh server uses port 22). I was able to restart dropbear using the following command:

# /opt/etc/init.d/S51dropbear restart

I disabled native ssh via the dashboard.
It appears that the [dropbear] ssh server accepted my key as I was prompted for its password. However, I then get the following error message:

Cannot initialize SFTP protocol. Is the host running an SFTP server?


Dropbear isn’t an SFTP server, so change your client configuration to use what dropbear does provide: SCP. WinSCP can do that, but you haven’t said what operating system you’re using. Linux has other clients that can use it, such as the openssh client package. On a Mac, it can be run from the terminal, and Transmit can do it graphically.

Note that the built-in SSH server doesn’t normally provide SFTP either, but you could use Entware to install openssh as your server instead of dropbear, if you really want to set up an SFTP server on your NAS.
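As a possible middle ground (an assumption to verify on your install): dropbear can serve SFTP if an external sftp-server binary is present at the path it was compiled with, and Entware packages the OpenSSH SFTP subsystem on its own:

```
# Package name as found in current Entware repositories (verify with opkg list):
opkg install openssh-sftp-server
```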

Thank you very much @nfsmith. I use WinSCP (Windows 10 client) and copied the profile I use to ssh into my PR4100 and simply updated the port number. I didn’t think to change the file protocol from SFTP to SCP. That did it.

I developed the following Bash script as a workaround for the issue that the file ~/.ssh/authorized_keys gets lost each time the WD MyCloud is restarted. The script might be a solution for users who don’t want to install Entware.

The script is to be executed on the client machine (e.g. a laptop) to start an SSH connection to the WD MyCloud. When the file ~/.ssh/authorized_keys is not found on the WD MyCloud, it is created and the public key of the client machine is inserted. If the file already exists, but the client machine’s public key is not contained in it, the key is appended. So as long as the WD MyCloud is not restarted, on each client machine the password only has to be entered for the first SSH session.



#PUBKEY="ssh-rsa AAAAB3NzaC1yc2EAAAA...3C7w== bob@pc-1234"
PUBKEY=$(cat ~/.ssh/id_rsa.pub)   # adjust to your public key file

# Unquoted EOF: ${PUBKEY} is expanded on the client machine.
read -r -d '' INSTALL_FILES_IF_NEEDED <<EOF
  if [ ! -f ~/.ssh/authorized_keys ]; then
    mkdir -p ~/.ssh
    echo '${PUBKEY}' > ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
    echo 'Keyfile was created and key added.'
  elif grep -q '${PUBKEY}' ~/.ssh/authorized_keys; then
    echo 'Public key already contained in keyfile.'
  else
    echo '${PUBKEY}' >> ~/.ssh/authorized_keys
    echo 'Public key was appended to keyfile.'
  fi
EOF
# mkdir  -p: No error message when folder already exists.
# chmod 600: Permissions read+write for owning user, no permissions for other users.
# grep   -q: Quiet, do not write to STDOUT.

ssh -t root@wd-mycloud "${INSTALL_FILES_IF_NEEDED}; bash -i"   # host name is an example
# ssh   -t: Force pseudo-terminal allocation
# bash  -i: Interactive Shell

The code contained in the string INSTALL_FILES_IF_NEEDED is executed on the WD MyCloud, not on the client machine. I tested the script with clients running Ubuntu (on the Windows Subsystem for Linux) and macOS.

The file is executed each time the MyCloud is started? So, to copy the authorized_keys into the root folder, the following would be needed:



mkdir -p ~/.ssh
cp $path/authorized_keys ~/.ssh
chmod 600 ~/.ssh/authorized_keys

For this the file authorized_keys needs to be placed in the app’s root folder.

Here is another script to be used on the client, which copies the file .ssh/authorized_keys and other configuration files into the root folder when needed. This means that the password has to be entered only for the first SSH connection after a reboot of the MyCloud NAS:



read -r -d '' SCRIPT_ON_NAS <<'EOF'
  if [ ! -d ~/.ssh ]; then
    cp -r /mnt/HD/HD_a2/share_for_root/.ssh ~
    chmod 400 ~/.ssh/*
    echo 'Folder ~/.ssh was copied.'
  fi
  if [ ! -f ~/.bashrc ]; then
    cp /mnt/HD/HD_a2/share_for_root/bashrc ~/.bashrc
  fi
  bash -i
EOF

ssh -t root@wd-mycloud "${SCRIPT_ON_NAS}"   # host name is an example


The script assumes that there is a special share named share_for_root on the NAS, which contains the files to be copied to the root folder during the first SSH login after a reboot. The construct used to fill the variable SCRIPT_ON_NAS is a so-called here document.
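As a minimal, self-contained illustration of that construct (runnable in any bash shell; the variable name DEMO is just an example):

```shell
#!/bin/bash
# read -r -d '' reads everything up to end-of-input into the variable;
# quoting the delimiter ('EOF') prevents variable expansion inside the body.
# read returns non-zero at end-of-input, hence the || true.
read -r -d '' DEMO <<'EOF' || true
line one
$HOME stays literal here
EOF
echo "$DEMO"
```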

The share_for_root share might also contain a folder bin with binaries of additional programs, e.g. the text editor nano. Some binaries might need additional shared libraries, which can be copied into a folder lib.

In file .bashrc (which is also copied by the script) the following lines have to be contained, so that the binaries from folder share_for_root/bin can be used:

export PATH=$PATH:/mnt/HD/HD_a2/template_directory/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/mnt/HD/HD_a2/template_directory/lib

This way I was able to “transplant” some programs from Raspbian (which also runs on an ARM CPU) to a WD MyCloud EX2 Ultra, e.g. iperf3, htop, bc, git, 7za and shred.

BTW, one has to copy sshd_config before starting sshd if you want to use the built-in sshd. Or restart it with:

if /usr/sbin/sshd -t -f /etc/ssh/sshd_config; then kill -HUP `ps aux | grep "/usr/sbin/sshd" | grep -v grep | awk '{ print $2 }'`; fi

How do you get a script to be executed at boot?

The only one I found was in /mnt/HD/HD_a2/Nas_Prog/twonky/
So I made a folder /mnt/HD/HD_a2/Nas_Prog/restore where I put my files and script. But it is not executed during reboot.
Do I need to declare my restore app folder somewhere ?


The script has to be in the root folder of your MyCloud app (i.e. you have to develop your own MyCloud app).

Could you please provide me some links on how to do that ?
All I can find on WD site refers to off-device apps or Android apps which is unlikely to apply to my old device.

The “SDK for OS 5” can be found here:

This ZIP file contains another ZIP file named `` which contains the packager (which runs only on Linux). Another file contained in this ZIP file is `WD_Add-On_SDK_v2.0.18_04282020.pdf`, which describes how to build an app for WD MyCloud with OS5; see especially section "8. A sample app package without a Web UI".

A MyCloud app is just a set of script files (.sh files, e.g. or ) which are called in a particular order. Once these files are finished, you have to execute the packager in the folder containing these files (e.g. MyCloudOS5_mksapkg -E -s -m MyCloudEX2Ultra) to create the app file, which then needs to be uploaded to your MyCloud device via the Web UI.
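Sketching what such an app’s boot-time script could do about the authorized_keys problem (the function name and the paths in the usage comment are assumptions based on the posts above, not the SDK’s actual conventions):

```shell
#!/bin/sh
# restore_ssh SHARE_DIR HOME_DIR
#   Copy authorized_keys from a persistent share back into the (volatile)
#   home directory; meant to be called from the app's start script.
restore_ssh() {
    share_dir="$1"
    home_dir="$2"
    mkdir -p "$home_dir/.ssh"                 # -p: no error if it already exists
    cp "$share_dir/authorized_keys" "$home_dir/.ssh/authorized_keys"
    chmod 700 "$home_dir/.ssh"                # sshd insists on strict permissions
    chmod 600 "$home_dir/.ssh/authorized_keys"
}

# On the NAS this would be called as (paths are examples from this thread):
# restore_ssh /mnt/HD/HD_a2/share_for_root /home/root
```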

Many thanks for your help

I’ve created a simple app that saves and restores the .ssh directory: persistent-ssh.

It’s 1.3K, so I’m attaching it encoded in Base64.

persistent-ssh.txt (1.7 KB)