To get S3 access working, here are a few tips that the documentation doesn’t cover well.
The WD app asks for an “Access Key” and a “Private Key”, but Amazon creates an Access Key and a Secret Access Key. Simply use the Secret Access Key as the Private Key. Since S3 also offers a private/public key pair option, this can be confusing.
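If you want to sanity-check the key pair before typing it into the WD UI, here’s a minimal sketch using Python’s boto3 library (the key values are placeholders, obviously):

```python
# Sanity-check the Access Key / Secret Access Key pair outside the WD UI.
# Requires: pip install boto3. Key values below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",          # the "Access Key"
    aws_secret_access_key="your-secret",  # this goes in WD's "Private Key" field
)

# If the credentials are valid and allowed to list buckets, this succeeds.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```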
Remote Path: it was not clear to me what this is, but it is in fact simply the name of the TOP-LEVEL bucket you create (indeed, there is only such a thing as a top-level bucket; within a bucket you can have folders). I was also getting a failure because I was trying to specify a folder like “bucket_name/folder-name”. Only put “bucket_name” in this field.
In the S3 dashboard, on the “All Buckets” screen, you need to create a top-level bucket, and that is the name you use. You can create a bucket specifically for your NAS backup, and then create other buckets for other purposes.
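For anyone who prefers scripting it, creating the dedicated backup bucket can also be done with boto3; the bucket name and region below are just examples:

```python
# Create a dedicated top-level bucket for the NAS backup; its name is what
# goes in the WD "Remote Path" field. Bucket name and region are examples.
import boto3

s3 = boto3.client("s3", region_name="us-west-2")
s3.create_bucket(
    Bucket="my-nas-backup",  # hypothetical name; must be globally unique
    # CreateBucketConfiguration is required for any region except us-east-1
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)
```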
Using the Infrequent-Access storage option (saves many $$). Unfortunately, I don’t see any way in S3 to make “infrequent access” the default for newly uploaded files, though I believe the API supports setting the storage class at upload time. However, that would require WD to alter the firmware to support this option. So for now, after your backup completes, select each folder created underneath your bucket (I run several distinct backup jobs, so I have distinct backup folders under the main bucket) and select PROPERTIES. You can change the folder’s storage class to “infrequent access”, and that should change all the files/folders within it to that storage class.
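If clicking through PROPERTIES on lots of folders gets tedious, the same storage-class change can be scripted. As far as I know the console does this as a copy-in-place, so here’s a rough boto3 equivalent; the bucket and prefix names are just examples:

```python
# Re-tag existing backup objects as STANDARD_IA by copying each object onto
# itself with a new storage class (a copy-in-place, which is what the console
# does). Bucket and prefix names are examples; use your backup job's folder.
import boto3

s3 = boto3.client("s3")
bucket = "my-nas-backup"
prefix = "backup-job-1/"  # one of the folders the WD backup job created

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        # Note: copy_object handles objects up to 5 GB; larger files
        # would need a multipart copy instead.
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            StorageClass="STANDARD_IA",
            MetadataDirective="COPY",
        )
```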
You can also specify a Lifecycle management rule for the bucket that will automagically move Standard objects into either Standard_IA or even Glacier (!) after XX days, as specified by your rule.
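Here’s roughly what such a rule looks like when set via boto3; the rule ID, day counts, and bucket name are examples, so adjust to taste:

```python
# Lifecycle rule: transition objects to STANDARD_IA after 30 days and to
# GLACIER after 90 days. Bucket name, rule ID, and day counts are examples.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-nas-backup",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "nas-backup-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```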
timvracer and mkrosse: I was wondering if either or both of you are actually having success with the out-of-the-box configuration of recurring incremental Cloud backups to S3 of large data volumes (100s of GB and 10,000s or 100,000s of files)?
As described in another forum post (under the “My Cloud Mirror” section, but I understand the devices may share the same firmware), some others and I are finding that each backup run fails after uploading a reasonably large number of files. And despite choosing the “Incremental” backup strategy, it attempts to start over from the beginning each time, and hence never completes. That also constantly resets my “Standard_IA” objects back to “Standard” storage, and hence resets the clock before my S3 Lifecycle rule can kick in to do the conversion.
So, I’m about to give up on the out-of-the-box WD My Cloud backup to S3 option, unless you are seeing it working on your end, which would give me some hope.
WD’s S3 backup implementation is horrifically unreliable. Backup jobs quit randomly (or never do a scheduled start) with no indication other than “backup failed”…no diagnostic information, no alert, no reliable way to do a restart.
Completely unreliable for a critical function.
Opened a ticket several days ago and provided a system report file to tech support. No word back yet.
Hi, I’ve been trying to configure Amazon S3 on my EX2 as timvracer suggested but I keep getting the “Backup Failed” message every single time. Is there any other way to get it to work?
Small backups (less than 1 GB) to S3 are successful. Incremental backup to S3 is not working: turning on versioning on the S3 bucket reveals that every file is re-sent by my DL4100 each time the same backup job runs. I’m not sure if/how the files could be marked as changed. Wasted bandwidth.
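For anyone wanting to confirm this on their own bucket, here’s the kind of quick boto3 check I mean; the bucket name is an example. Any key with more than one version after repeated “incremental” runs was re-uploaded unchanged:

```python
# Diagnostic: with versioning enabled on the bucket, count how many versions
# each key has accumulated. Bucket name is an example.
import boto3
from collections import Counter

s3 = boto3.client("s3")
counts = Counter()

paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket="my-nas-backup"):
    for version in page.get("Versions", []):
        counts[version["Key"]] += 1

for key, n in counts.items():
    if n > 1:
        print(f"{key}: {n} versions (re-uploaded)")
```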
Attempting to back up 100 GB tonight. We’ll see if the job completes.
Hi all,
Reviving this topic. I’ve been trying to set up incremental backups from MyCloud Mirror to S3 (around 5 GB of photos for now) and am having exactly the same problems as @Pete: all files get overwritten each time, even if they haven’t changed, and are set back to Standard storage.
This feature is a selling point of the MyCloud system and doesn’t work. I think we can safely assume the problems don’t come from S3.
I absolutely second that! The cloud backup feature to AWS S3 is the main selling point and it just does not work. It is exactly as you described it.
I’m really disappointed by WD
After some research in other topics on the forum, it seems the way to get an incremental backup is to select the “overwrite” option instead of the incremental option. That will actually avoid copying files that haven’t changed, and when files have changed it will overwrite the old version with the My Cloud version. It’s probably the most stupid thing I’ve ever seen in software design, but it does seem to work.