USB Backup Failure - settings question

I have had this issue (trying to back up from NAS to USB) since I bought this NAS over a month ago and have still not found a solution for it. Incredibly small folders back up fine, but as soon as they are larger than around 500MB I am bombarded with backup failed messages.

What I was looking to set up was a simple system where I could switch on the external hard drive once a week and back up my most important folders for an extra layer of security.

However, as with the majority of things on the NAS, I have completely given up on any of the “solutions” provided by WD. Instead I am looking to see what I can do via ssh. USB drives mount to /mnt/USB/, so it should be relatively simple to script a backup of the folders (I use JuiceSSH on Android, which can save ‘snippets’ on my phone to run, so I don’t have to worry about scripts being lost every time I reboot the NAS). I am guessing my best option is rsync -a SOURCE DESTINATION? I have tried it manually for the main folders and it seems to have copied them all OK.

However, I was wondering if there is any way to make rsync print out its current status while it is copying? Alternatively, is it possible to log its status, and whether the sync was successful, to a text file? Additionally, is there any easy way to rsync multiple folders rather than having to run the same command for each folder I want to back up?

Found the solution to my own question. It is the incredibly simple step of adding --progress to the command, so the command should be “rsync -a --progress SOURCE_FOLDER DESTINATION_FOLDER”. So I created a snippet in JuiceSSH with: rsync -a --progress SHARE_FOLDER1 SHARE_FOLDER2 SHARE_FOLDER3 … USB_MNT_FOLDER

Then I’m guessing I can add “umount USB_MNT_FOLDER” to the end. Currently at the office, so will likely be able to try it at the weekend, and will post results afterwards.
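Something like the following is what I have in mind for the full snippet. This is an untested sketch: the SHARE_FOLDER names are placeholders, the mount points are guesses based on my own setup (/mnt/HD/HD_a2 for shares, /mnt/USB/USB1_c1 for the drive), and --log-file is a standard rsync option that should give the success/failure record I asked about:

rsync -a --progress --log-file=/mnt/HD/HD_a2/backup.log /mnt/HD/HD_a2/SHARE_FOLDER1 /mnt/HD/HD_a2/SHARE_FOLDER2 /mnt/USB/USB1_c1/
if [ $? -eq 0 ]; then echo "backup OK $(date)" >> /mnt/HD/HD_a2/backup.log; fi
umount /mnt/USB/USB1_c1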

If I hadn’t set it on the shelf for a month before I hooked it up, I would send it back for a refund. I have had nothing but problems with this thing. First it would not share USB drives connected to a hub, and once I found a workaround for that, I am now experiencing the same problem as everyone here: I can’t get a backup from NAS to USB to finish.

But I have some clues. Here is what I have discovered:

  1. You can’t copy the root directory of the NAS to the USB drive. It will create the subdirectories, but it will fail when trying to copy the files, so you end up with a bunch of empty directories.

  2. Actually, you can’t copy any directory from the NAS that contains subdirectories. It will fail like number 1 above.

  3. If you pick a directory that has no subdirectories it will work fine and complete. 

It is sort of worthless if you can’t do a wholesale backup of your NAS without creating a new job for each subdirectory. Can you imagine trying to back up my 50-some folders of photos in the public shared pictures directory, with each folder containing subdirectories? I would have to create a hundred different backup jobs and completely lose my directory structure. So this HALF-BAKED design needs immediate attention from the WD development team. Seriously, this is not a complete product. It is a prototype hack they are still developing. I see this thread has been here for quite a while (since last year) and it has not been resolved yet.

I am considering redeploying my Windows Home Server that this EX2 replaced. At least it worked, and you could get a response from Microsoft for problem resolution.

Daryl (rosede)

As an alternative to using the built-in USB backup jobs (and I agree you need dozens to back up a history of photo albums or music archives), you can use your Windows machine and just click, drag, and drop. Pretty simple, huh? But using the Windows file system means that each and every file makes two trips across your Ethernet network, not to mention it bogs down the Windows machine that is doing the work. It will work… but for Heaven’s sake, that is why I bought the My Cloud. I paid good money to keep all the traffic on the EX2 USB bus and off of my network.

Even using Windows for the backup, I sometimes get an error message that the file no longer exists at the specified location, and then you need to manually skip the file or do a retry. The file is there! I can see it! I don’t know why the copy can’t see it, but it is very frustrating to start an 8-hour copy only to find an error message waiting for you 15 minutes into the job.

Well, I have had it with this USB backup. I GIVE UP.

Evidently WD does not read this forum, or ignores it. And according to other comments here, it does no good to open a service ticket. So I am disconnecting the USB drives and building a Windows Home Server to back up the WDmycloud.

So what I will have in the end is the EX2 with 10TB of storage as my main NAS, and a Windows Home Server with 12TB to back up the EX2.

What a terrible waste of a decent laptop, to use it as a crutch to secure my data stored on a supposedly redundant device from WD. But I cannot afford to leave 10 years of photos, music and videos unprotected.

gpineau wrote:

Well, I have had it with this USB backup. I GIVE UP.

Evidently WD does not read this forum, or ignores it. And according to other comments here, it does no good to open a service ticket. So I am disconnecting the USB drives and building a Windows Home Server to back up the WDmycloud.

So what I will have in the end is the EX2 with 10TB of storage as my main NAS, and a Windows Home Server with 12TB to back up the EX2.

What a terrible waste of a decent laptop, to use it as a crutch to secure my data stored on a supposedly redundant device from WD. But I cannot afford to leave 10 years of photos, music and videos unprotected.

Sorry to burst your bubble, but I just want to tell you this BEFORE you build your Windows server as a backup for the WD EX2: you can’t back up files from the EX2 over the network except to another WD NAS.

So unless you do it the other way around (Windows Server → EX2), you will be disappointed again.

You can support my suggestion by voting for this option: http://community.wd.com/t5/Network-Product-Ideas/EX2-Remote-backup-on-other-devices/idi-p/844034


Not giving up just yet on my Windows Home Server. You see, I have two of them: a 7TB and a 12TB. The 12TB has been decommissioned, but the 7TB is still running in the basement. Right now I am looking for some third-party software to do the backup in parallel with the WHS software. Failing that, I will install XP or Win7 on it and get some freeware to do the job.

BTW, I got an email from a WD tech who seemed enthusiastic to help me, but he went silent after I pointed him to this thread. I have not heard from him in days.

If you just want to copy the files from a Windows server to the NAS (or the other way around; I don’t really know), just use a simple CMD script with robocopy ( https://technet.microsoft.com/en-us/library/cc733145.aspx ) and create a Task Scheduler job on the server to control when it starts. You can also copy only new files and folders. :)
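For example, a one-line script along these lines should do it. This is just a sketch; the IP address, share name, destination folder and retry settings are placeholders for illustration:

robocopy \\192.168.1.50\Public D:\NASBackup /E /R:2 /W:5 /LOG:D:\robocopy.log

/E copies subdirectories (including empty ones), /R and /W limit the retries on a locked file, and /LOG writes a report you can check afterwards. Robocopy skips unchanged files by default, so repeat runs only copy what is new. Save it as backup.cmd and point the Task Scheduler job at it.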

You don’t really need third-party software.

Got a call and a remote connect from a tech at WD. He didn’t get far. He couldn’t get the EX2 to see the USB drives, so he dumped the error log and sent it to engineering.

Now the thing won’t come ready!!! I can log into it via the web browser, but it does not show up as a device on my network. Tried powering it off and back on, but no help.

What I really want is my money back.

Well, I am not sure this is the best solution, but I have converted one of my old PCs into a FreeNAS server and I am using Robocopy at the moment to get the files copied from the EX2 to the FreeNAS box.

There are some plugins that should make it much easier to do backups of the EX2, but for now I urgently need a copy of my 4TB of music, movies and photos someplace other than the WD EX2.

After I am finished I will experiment with some of the plugins and see which ones are best suited for file syncing between the two boxes.

BTW, Robocopy would not recognize either device by referring to it by name, but it did find it by using its IP address.

And you were right about Windows Home Server. It requires the WHS client to be installed on the source device.

I have had a support case open on this issue for TWO WEEKS. So far, I’ve only talked to second-level people who don’t seem to know much. I was told how to turn on extended logging via the Web interface, but for this particular problem, extended logging didn’t add any value.

I did discover the SSH interface, so I was able to look at the processes running during the backup job. It would seem that, for a synchronizing backup, two commands are launched: an ‘rsync’ command to actually synchronize the files between source and destination, and a ‘du -sb’ to find out how much space the backup is to take.

For some appallingly silly reason, they are launched in parallel, and if the number of files to be backed up is large, the du command never completes.  There are ‘broken pipe’ errors recorded in /var/log/apache2/web_error.log. 
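If you want to see this for yourself, ssh in while a backup job is running and list the processes. This is simply how I inspected it; there is nothing official about the grep pattern:

ps -ef | grep -E 'rsync|du'

Both the GUI’s rsync invocation and the du -sb of the source share show up running side by side.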

The rsync command is executed with -q (quiet), so any chance of seeing what went wrong is nil. I haven’t yet tried to execute the same command using -v to get more information, because I’m afraid that some fascist customer support agent will come along and declare my warranty void.

But there clearly are some very easy-to-fix, elementary programming errors in the Linux system underlying the web interface. What I find completely annoying is that someone went to the extra trouble to block access to the web interface from any subnet other than the one to which the NAS is attached. This makes it hard to access the dashboard from any but the simplest of corporate networks. If such poorly thought-through features must be included, at least provide a way to turn them off.

I had an encounter with a second-level support tech too. All that happened was a log dump, which then got sent “up” to engineering. He said it would probably end up being a firmware update some day.

I still contend this was not a well-thought-out product. They threw too many features on the whiteboard and sent it to production before it was finished and thoroughly tested.

My opinion aside, until they have this working dependably I need a safe place for my data, and one WD EX2 is a single point of failure. So I continue to search for other solutions. I have a FreeNAS box running and now have a second copy of everything, but it is a pain to add any new features to it. Any other functionality must run as a plugin, be installed in a virtual machine (jail), be configured, and yada yada yada. And I could not get it to automatically do a backup of the EX2, so I have a separate Windows machine doing the backup using robocopy, meaning all data makes two trips across the network. A pain in the butt!

Now (as I am writing this) I am working on getting a Windows 8 Storage Spaces setup configured with 10TB of disks to see if I can cut that down to only one trip across the network.

And with some luck WD will fix this, and in the end the backup will all happen inside the WD EX2 and not generate any external network traffic at all. I’ll post here when I get any news from WD.


Well, as a result of my posting to this list, and the moderator escalating this to tech support, I have some progress and a workaround.

I got escalated to another second-level named Michael, who told me that Level 3 (== the developers) are working on a fix. That was over a week ago. He also gave me permission to execute commands via the command line over the SSH interface without voiding the warranty. Once permitted, I was able to easily develop a workaround.

When one does a ‘Synchronize’ backup, the command that gets executed on the NAS’s embedded Linux looks like the following:

rsync --timeout=30 --job-name=BackupJob!_usb -rlptDHq --delete /mnt/HD/HD_a2/myshare/ /mnt/USB/USB2_c1/BackupJob/myshare

You will have to replace BackupJob and myshare with the name of the backup job you selected and the share you selected for backup.

I found that executing this command as written above on the command line also caused the backup to fail. The same complaints about timeouts and broken pipes that appeared in /var/log/apache2/web_error.log when executing from the GUI appeared on stdout, the SSH console.

Through trial and error I discovered that simply eliminating the --timeout=30 option allowed the backup to complete without problems on the same large dataset on which it had failed before. I also changed the ‘q’ in the options clump -rlptDHq to a ‘v’ so that I could see which file was being backed up, where the job failed (if it did, which it didn’t), and when the job finished. I did not try backgrounding this command with a ‘&’; most important to me was to get the backup done. I may try that when I next update the backup set, but having paid for a device, I shouldn’t have to be doing beta testing on it.
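Putting those two edits together, the working command looks like this (same BackupJob and myshare placeholders as above):

rsync --job-name=BackupJob!_usb -rlptDHv --delete /mnt/HD/HD_a2/myshare/ /mnt/USB/USB2_c1/BackupJob/myshare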

I reported the success of this workaround back to the second-level escalation support last Friday, and still haven’t heard anything back. I have no idea what the developers are up to, but if this problem has been around for over a year, and the workaround is this easy, then a proper fix should also be easy. Having been in the business of software development for a while, I suspect problems with transparency and issue reporting; most line programmers want to do a good job.

In any case, those of you left high and dry by this long-playing nonsense at least have a workaround that will get you to your goal.


Appreciate the post, CEB04. I would offer a bit of advice on usage: make sure you don’t get the source and destination backwards if you’re going to use the --delete option, as I did. :( A total newb mistake, and I’m ashamed, but I did see what was happening and was able to kill the rsync process.

I would suggest removing --delete and managing the deletion manually if you’re running rsync manually anyway.

Copies files from USB to HDD, does not delete, and outputs verbose progress:

The job name must already exist, obviously. The order is destination then source. I added the ampersand at the end just so I can break out if needed, or if I lose connection my process keeps going. Perhaps a nohup at the beginning would be a good idea too?

rsync --job-name=2TBHDD-BU!_usb -rlptDHv /mnt/USB/USB1_c1/ /mnt/HD/HD_a2/BackedUp/2TBHDD-BU/FA_GoFlex_Desk-1/ &

Copies files from USB to HDD, DOES delete from the HDD, and outputs verbose progress:

rsync --job-name=2TBHDD-BU!_usb -rlptDHv --delete /mnt/USB/USB1_c1/ /mnt/HD/HD_a2/BackedUp/2TBHDD-BU/FA_GoFlex_Desk-1/ &
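On the nohup idea: a variant like the following should keep the transfer running even if the SSH session drops. This is an untested sketch using the same paths as above, with the verbose output redirected to a log file so it isn’t lost when the terminal goes away:

nohup rsync --job-name=2TBHDD-BU!_usb -rlptDHv /mnt/USB/USB1_c1/ /mnt/HD/HD_a2/BackedUp/2TBHDD-BU/FA_GoFlex_Desk-1/ > /mnt/HD/HD_a2/rsync.log 2>&1 &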

Apologies for the delay posting an update about this; not only is the EX2 a hunk of ****, I also repeatedly get errors when trying to log into the forums.

As mentioned in my previous post, I used rsync to make a backup of my main folders to a USB drive using the command:

rsync -a --progress SHARE_FOLDER1 SHARE_FOLDER2 SHARE_FOLDER3 … USB_MNT_FOLDER

It was deathly slow, taking nearly 40 hours to copy around 3TB, but it works. Thankfully, future runs of rsync will only copy across any file changes, so it shouldn’t take as long to update my backups from now on.

I have this problem too, not all USB backup jobs will complete. This seems more likely to occur with a large number of files, not necessarily with large file sizes. I have several large folders successfully backing up, but one with lots of files and subdirectories fails out of the gate. One of the backups is my TimeMachineBackups folder, which contains time machine backups from 2 separate Macbooks; it’s backing up just fine.

This is terribly inconvenient. Not just the backup failures, but the fact that I can’t just back up the entire EX2 to a USB drive. Is there any good reason for this? I have to create a backup job for every folder, and I can’t even schedule them. This seems, at best, like an oversight for a product that’s intended to reliably secure my data. Along with my EX2 I bought a WD MyBook so I could provide another layer of redundancy for the EX2. As it turns out, I can’t just back up my EX2 to the USB drive, a real shame.

From this thread it looks like the backup failure problem has been an issue for some time, so it seems the chances of getting it resolved are slim at best. That, along with the fact that I can’t just back up my EX2 to an external drive is a bit of a deal breaker, and I may just return the whole $600 mess to Amazon and look for another solution.

Mr. Bursik,

You are in error in your posting – as others have posted here, the share order is source then destination.

The job-name field can be deleted if you issue the command from the command line. It’s part of the GUI interface for tracking the progress of the backup, and is not part of any documented rsync command I know.

The safest way to do this operation is to start a backup from the GUI. If you suspect it will fail, log in via ssh and execute the command ‘ps -ef | grep rsync’. You will see one or more instances of the rsync command the GUI generated on your behalf. Copy the text of that command to someplace safe, terminate the backup job from the GUI, then return to ssh and paste the command you copied onto the command line. Edit out the --timeout, and execute the command. You may want to try sticking a ‘&’ at the end to background the job; this would allow you to log out of ssh and theoretically leave the job running. I haven’t tried this yet.
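Concretely, the sequence looks something like this (MyJob and myshare are placeholders for whatever your GUI job and share are actually called):

ps -ef | grep rsync
# copy the rsync line it prints, cancel the job in the GUI, then re-run it without --timeout and with -v instead of -q:
rsync --job-name=MyJob!_usb -rlptDHv --delete /mnt/HD/HD_a2/myshare/ /mnt/USB/USB2_c1/MyJob/myshare &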

That way you are emulating as closely as possible the actions of the device as delivered, and not just going at things from scratch in some pseudo-empirical way that will most likely lead you to shoot yourself in the foot.

You’re not alone in wanting to do this; the WD second-level assigned to me on an escalation basis after I posted here did it too. When he permitted me to run rsync from the command line, he told me how to do it from scratch, in a way that would not have been compatible with what the device does itself. This is not the first time I’ve seen customer service drive the customer down a wrong path, which is why I insist on transparency, especially when it comes to my data.

As a result of my posting here, I did finally get another reply back from the second-level who responded when I posted the first time. He took credit for telling me the solution (when he didn’t – see above), and graciously ‘allowed’ me to continue using the command-line solution I had worked out, or offered to give me my money back (oh yeah, right, after spending *weeks* copying files over). He told me that engineering was working on a fix, but that there was no ETA in sight. What are they doing, hoping to meditate the problem away?

Not a lot of value-add going on with support – but at least they didn’t insist on maintaining a boundary that would have blocked a workaround. I had to call several times when I first got the device, and key information needed to operate it is not in the user manual, and not evident from the GUI. It’s time for some rethinking, in my opinion. But exposing the underlying Linux was a good move towards preserving transparency.

Any updates on whether this issue has been resolved?

For me, the built-in backup to USB works for small backups with not too many files and not too many GB. It fails miserably on a ~1.5TB backup, which should be the bread and butter of this type of device. Very disappointing!

I will try the SSH way, which I think also offers some scheduling if one can, and wants to, play with cron jobs, but all of that should be an out-of-the-box feature in the HTTP interface. So WD failed big time here!
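For the cron part, if the EX2’s Linux has a working crontab (I haven’t verified this, and an earlier poster noted that scripts can be lost on reboot, so the same may apply to cron entries), an entry along these lines would run a weekly sync at 3am on Sundays; the paths are placeholders:

0 3 * * 0 rsync -rlptDHv /mnt/HD/HD_a2/myshare/ /mnt/USB/USB1_c1/backup/myshare/ >> /mnt/HD/HD_a2/rsync-cron.log 2>&1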

Any suggestions for a replacement from other brands?

How did you find the “bad” file?