USB Backup Failure - settings question

If you just want to copy files from a Windows server to the NAS (or the other way around; I haven't tried that direction), just use a simple CMD script with robocopy and create a Task Scheduler job on the server to start it. You can also copy only new files and folders :slight_smile:
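For example, a minimal batch script along these lines could be scheduled with Task Scheduler. This is only a sketch: the source path, NAS IP, share name, and log path are all placeholders you'd replace with your own (and as noted later in this thread, referring to the NAS by IP tends to be more reliable than by name):

```bat
@echo off
REM Mirror new and changed files from the server to the NAS share.
REM D:\Data and \\192.168.1.50\Public are placeholders for your paths.
REM /E  = copy subdirectories, including empty ones
REM /XO = skip files older than the destination copy (only new/changed files move)
REM /R:2 /W:5 = retry twice, waiting 5 seconds, instead of the huge defaults
robocopy "D:\Data" "\\192.168.1.50\Public\Backup" /E /XO /R:2 /W:5 /LOG:C:\robocopy.log
```

Point Task Scheduler at the .bat file with whatever schedule suits you; robocopy's /XO handles the "only new files" part.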

You don't really need third-party software.

Got a call and a remote connection from a tech at WD. He didn't get far. He couldn't get the EX2 to see the USB drives, so he dumped the error log and sent it to engineering.

Now the thing won't come ready!!! I can log into it via the web browser, but it does not show up as a device on my network. Tried powering it off and back on, but no help.

What I really want is my money back.

Well, I am not sure this is the best solution, but I have converted one of my old PCs into a FreeNAS server, and I am using Robocopy at the moment to get the files copied from the EX2 to the FreeNAS box.

There are some plugins that should make it much easier to do backups of the EX2, but for now I urgently need a copy of my 4 TB of music, movies and photos someplace other than the WD EX2.

After I am finished I will experiment with some of the plugins and see which ones are best suited for file syncing between the two boxes.

BTW, Robocopy would not recognize either device by referring to it by name, but did find it by using its IP address.

And you were right about Windows Home Server. It requires the WHS client to be installed on the source device.

I have had a support case open on this issue for TWO WEEKS. So far, I've only talked to second-level people who don't seem to know much. I was told how to turn on extended logging via the web interface, but for this particular problem, extended logging didn't add any value.

I did discover the SSH interface, so I was able to look at the processes running during the backup job. It would seem that, for a synchronizing backup, two commands are launched: an 'rsync' command to actually synchronize the files between source and destination, and a 'du -sb' to find out how much space the backup will take.
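If you want to reproduce that space check by hand over SSH, it's a one-liner. The sketch below uses a throwaway directory so it runs anywhere; on the NAS the path would be something like /mnt/HD/HD_a2/myshare:

```shell
# Recreate the GUI's space check by hand: total size, in bytes, of the
# share to be backed up. /tmp/myshare_demo stands in for the real path.
SHARE=/tmp/myshare_demo
mkdir -p "$SHARE"
printf 'demo' > "$SHARE/file.txt"
du -sb "$SHARE"     # prints total bytes, then the path
```

On a large share this can run for many minutes, which is relevant to the timeout discussion further down.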

For some appallingly silly reason, they are launched in parallel, and if the number of files to be backed up is large, the du command never completes.  There are ‘broken pipe’ errors recorded in /var/log/apache2/web_error.log. 

The rsync command is executed with -q (quiet), so any chance of seeing what went wrong is nil. I haven't yet tried to execute the same command using -v to get more information, because I'm afraid that some fascist customer support agent will come along and declare my warranty void.

But there clearly are some very easy-to-fix, elementary programming errors in the Linux system underlying the web interface. What I find completely annoying is that someone went to the extra trouble to block access to the web interface from any subnet other than the one to which the NAS is attached. This makes it hard to access the dashboard from any but the simplest of corporate networks. If such poorly thought-through features must be included, at least provide a way to turn them off.

I had an encounter with the second-level support tech too. All that happened was a log dump, which then got sent "up" to engineering. He said it would probably end up being a firmware update some day.

I still contend this was not a well-thought-out product. They threw too many features on the whiteboard and sent it to production before it was finished and thoroughly tested.

My opinion aside… until they have this working dependably, I need a safe place for my data, and one WD EX2 is a single point of failure. So I continue to search for other solutions. I have a FreeNAS box running and now have a second copy of everything, but it is a pain to add any new features to it. Any other functionality must run as a plugin, be installed as a virtual machine (jail), be configured, and yada yada yada. And I could not get it to automatically do a backup of the EX2, so I have a third-party Windows machine doing the backup using robocopy, meaning all data makes two trips across the network. A pain in the butt!

Now (as I am writing this) I am working on getting a Windows 8 Storage Space configured with 10 TB of disks to see if I can cut that down to only one trip across the network.

And with some luck WD will fix this and in the end the backup  will all happen inside the WD EX2 and not generate any external network traffic at all. I’ll post here when I get any news from WD. 


Well, as a result of my posting to this list, and the moderator escalating this to tech support, I have some progress and a workaround.

I got escalated to another second-level named Michael, who told me that Level 3 (== the developers) are working on a fix.  That was over a week ago.  He also gave me permission to execute commands via the command-line over the SSH interface without voiding the warranty.   Once permitted, I was able to easily develop a workaround.

When one does a ‘Synchronize’ backup, the Linux command that gets executed on the NAS embedded Linux looks like the following:

rsync --timeout=30 --job-name=BackupJob!_usb -rlptDHq --delete /mnt/HD/HD_a2/myshare/ /mnt/USB/USB2_c1/BackupJob/myshare

You will have to replace BackupJob and myshare with the name of the backup job you selected and the share you selected for backup.

I found that executing this command as written above on the command line also caused the backup to fail. The same complaints about timeouts and broken pipes that appeared in /var/log/apache2/web_error.log when executing from the GUI appeared on stdout, the SSH console.

Through trial and error I discovered that simply eliminating the --timeout=30 option allowed the backup to complete without problems on the same large dataset on which it had failed before.  I also changed the ‘q’ in the options clump -rlptDHq to a ‘v’ so that I could see which file was being backed up, where the job failed (if it did, which it didn’t) and when the job finished.  I did not try backgrounding this command-line command with a ‘&’.  Most important to me was to get the backup done…  I may try that when I next update the backup set, but having paid for a device I shouldn’t have to be doing beta testing on it. 

I reported the success of this workaround back to the second-level escalation support last Friday – still haven't heard anything back. I have no idea what the developers are up to, but if this problem has been around for over a year, and the workaround is this easy, then a proper fix should also be easy. Having been in the business of software development for a while, I suspect problems with transparency and issue reporting. Most line programmers want to do a good job.

In any case, those of you left high and dry by this long-playing  nonsense at least have a workaround that will get you to your goal.


Appreciate the post, CEB04. I would offer a bit of advice on usage: make sure you don't get the source and destination backwards if you're going to use the --delete option, as I did :frowning: Total newb mistake and I'm ashamed, but I did see what was happening and was able to kill the rsync process.

I would suggest removing the --delete and managing the deletion manually if you're running rsync manually anyway.

Copies files from USB to HDD without deleting, with verbose output:

Job name must already exist obviously. The order is destination then source. I added the ampersand at the end just so I can break out if needed or if I lose connection my process keeps going. Perhaps a nohup at the beginning would be a good idea too?

rsync --job-name=2TBHDD-BU!_usb -rlptDHv /mnt/USB/USB1_c1/ /mnt/HD/HD_a2/BackedUp/2TBHDD-BU/FA_GoFlex_Desk-1/ &

Copies files from USB to HDD and DOES delete from the HDD, with verbose output:

rsync --job-name=2TBHDD-BU!_usb -rlptDHv --delete /mnt/USB/USB1_c1/ /mnt/HD/HD_a2/BackedUp/2TBHDD-BU/FA_GoFlex_Desk-1/ &

Apologies for the delay posting an update about this - not only is the EX2 a hunk of ****, I repeatedly get errors when trying to log into the forums.

As mentioned in a previous post, I used rsync to make a backup of my main folders to a USB drive using this command:


It was deathly slow, taking nearly 40 hours to copy around 3 TB, but it works. Thankfully, future rsync runs will only copy across file changes, so it shouldn't take as long to update my backups in future.

I have this problem too, not all USB backup jobs will complete. This seems more likely to occur with a large number of files, not necessarily with large file sizes. I have several large folders successfully backing up, but one with lots of files and subdirectories fails out of the gate. One of the backups is my TimeMachineBackups folder, which contains time machine backups from 2 separate Macbooks; it’s backing up just fine.

This is terribly inconvenient. Not just the backup failures, but the fact that I can’t just back up the entire EX2 to a USB drive. Is there any good reason for this? I have to create a backup job for every folder, and I can’t even schedule them. This seems, at best, like an oversight for a product that’s intended to reliably secure my data. Along with my EX2 I bought a WD MyBook so I could provide another layer of redundancy for the EX2. As it turns out, I can’t just back up my EX2 to the USB drive, a real shame.

From this thread it looks like the backup failure problem has been an issue for some time, so it seems the chances of getting it resolved are slim at best. That, along with the fact that I can’t just back up my EX2 to an external drive is a bit of a deal breaker, and I may just return the whole $600 mess to Amazon and look for another solution.

Mr. Bursik,

You are in error in your posting – as others have posted here, the share order is source then destination.

The job-name field can be deleted if you issue the command from the command line. It's part of the GUI interface to track the progress of the backup, and is not part of any documented rsync command I know of.

The safest way to do this operation is to start a backup from the GUI. If you suspect it will fail, log in via ssh and execute the command 'ps -ef | grep rsync'. You will see one or more instances of the rsync command the GUI generated on your behalf. Copy the text of that command to someplace safe, terminate the backup job from the GUI, then return to ssh and paste the command you copied onto the command line. Edit out the --timeout, and execute the command. You may want to try sticking a '&' at the end to background the job – this would allow you to log out of ssh and theoretically leave the job running. I haven't tried this yet.
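The capture step can be sketched as below. Since a live backup may not be running while you try this, the sketch uses a `sleep` as a stand-in process; on the NAS you would grep for `[r]sync` instead. The bracket trick keeps grep from matching its own command line:

```shell
# Find the full command line of a running job so it can be copied.
sleep 30 &                  # stand-in for the GUI-launched backup job
PID=$!
ps -ef | grep '[s]leep'     # on the NAS: ps -ef | grep '[r]sync'
kill "$PID"                 # demo cleanup; on the NAS, stop the job via the GUI
```

The last column of the ps output is the exact command to copy, edit, and re-run.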

That way you are emulating as closely as possible the actions of the device as delivered, and not just going at things from scratch in some pseudo-empirical way that will most likely lead you to shoot yourself in the foot.

You're not alone in wanting to do this – the WD second-level assigned to me on an escalation basis after I posted here did it too. When he permitted me to run rsync from the command line, he told me how to do it from scratch, in a way that would not have been compatible with what the device does itself. This is not the first time I've seen customer service drive the customer down a wrong path, which is why I insist on transparency, especially when it comes to my data.

As a result of my posting here, I did finally get another reply back from the second-level who responded when I posted the first time.  He took credit for telling me the solution (when he didn’t – see above), and graciously ‘allowed’ me to either continue using the command-line solution I had worked out, or offered to give me my money back (oh yeah, right, after spending *weeks* copying files over).  He told me that engineering was working on a fix, but that there was no ETA in sight.   What are they doing, hoping to meditate the problem away?

Not a lot of value-add going on with support – but at least they didn’t insist on maintaining a boundary that would have blocked a workaround.    I had to call several times when I first got the device, and key information needed to operate it is not in the user manual, and not evident from the GUI.  It’s time for some rethinking, in my opinion.  But exposing the underlying Linux was a good move towards preserving transparency. 

Any updates on this issue being resolved?

For me the built-in backup to USB works for small backups: not too many files, not too many GB. It fails miserably on a ~1.5 TB backup … which should be the bread and butter of this type of device. Very disappointing!

I will try the SSH way, which I think also offers some scheduling if one can and wants to play with cron jobs, but those should all be out-of-the-box features in the HTTP interface. So, WD failed big time here!
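For anyone who goes the cron route, a hypothetical crontab entry might look like the one below. The job name, paths, and schedule are placeholders taken from earlier posts in this thread, and I can't vouch for whether WD's firmware keeps custom crontab entries across reboots:

```shell
# m h dom mon dow  command  --  run the sync nightly at 02:00,
# using the GUI's flag clump minus the broken --timeout=30.
0 2 * * * rsync -rlptDHq --delete /mnt/HD/HD_a2/myshare/ /mnt/USB/USB2_c1/BackupJob/myshare/
```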

Any suggestions for a replacement from other brands?

How did you find the “bad” file?

When will the '--timeout=30' be eliminated? Or, if it cannot be eliminated, then increase it to 600 (10 minutes) or 1200 (20 minutes).

Reading the rsync man page shows:

This option allows you to set a maximum IO timeout in seconds. If no data is transferred for the specified time then rsync will exit. The default is 0, which means no timeout.

So, if you have a large number of files, it is certainly likely that the command will fail. That's because rsync builds a list of files before it transfers, and if that building takes longer than the timeout, no data transfers and rsync gives up.

Seems like an easy fix…

Yes, I read all the above comments and complaints. I have the exact same problems. Even after handing my My Cloud Mirror device back to WD under warranty and getting a new device, the same problem arises: backup failed. I want to sync the external Passport to my already-COPIED backup, as I keep putting new music on my Passport. So it's when I SYNC that the problem arises: BACKUP FAILED.

Has anyone in here ever tried the Synergy Server?

I have not tried Synergy Server. I do not use the EX2 for sync or backup. It just sits there as a storage device; I put the smarts elsewhere on my network. I gave up on the EX2 a year ago. What I did was take an old Windows PC that was about to be retired and created a Windows Storage Space, which is fault tolerant. I gathered up every old hard drive I had (big and small) and added them to the storage space. I ended up with a storage space far larger than the EX2 box. I installed backup software on the Windows box, and now it synchronizes to the EX2 box all the time. So now I have the EX2 that everyone on the network uses for storage, and an old Windows PC sitting quietly in the corner that has a copy of everything on the EX2. This has been working flawlessly for the past year. There are lots of freeware programs out there that do the backup and sync for you.

hi all,
getting back to the original question: has WD ever fixed the problem or clarified the right settings to back up to a USB drive?
Here we are in 2018 and I am still having the same issues.

Hi, this problem still exists. I was battling with this yesterday. Got a MyCloudMirror and a new USB3 drive.

I need the option of reading and writing from/to both Windows and Mac, so I formatted the USB drive as FAT32 (the NAS doesn't like exFAT).

Wondering now if it’s simply down to the FAT32 file size restrictions or folder depth.
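For what it's worth, FAT32 caps individual files at 4 GB (minus one byte), so one quick way to test that theory is to look for oversized files on the share before copying. A runnable sketch (a throwaway directory and a sparse file stand in for the share and a big video):

```shell
# List files too big for FAT32 (over 4 GiB). On the NAS you would point
# this at the share, e.g. /mnt/HD/HD_a2/myshare.
DIR=/tmp/share_demo
mkdir -p "$DIR"
truncate -s 5G "$DIR/big_video.mkv"   # sparse file: no real disk usage
find "$DIR" -type f -size +4G          # anything listed won't fit on FAT32
```

If that find turns up matches, the FAT32 theory holds and those files would need splitting or a different filesystem on the USB drive.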

May have to bite the bullet and do a host backup over my LAN via my laptop to USB drive. Will know in a few days…

For everyone coming to this necro thread from Google:

What worked for me was plugging the hard-drive (Seagate) into a PC and formatting it (NTFS). Then I deleted the old backup job on the WD Dashboard and created a new one, making sure to select synchronize NOT copy.

For whatever reason, the old backup from the WD on the Seagate had switched to "read only" and couldn't be deleted, overwritten, or made editable, even from File Explorer, but I could format over the top of it in Windows. However, trying to format over it as exFAT on a Mac failed.

Our office has multiple HDs, so using File Explorer to copy/paste the backup onto a spare HD, so I'd have something while I tinkered with the Seagate, wasn't an issue.

I have no idea if it was the “fault” of the Seagate or the WD that that folder became immutable, but, there you have it.