Alternative to Safepoint - reposted

Hi

I’ve been a My Cloud user for 3-4 years, with 4 different My Cloud/My Passport products at my home and office. I have tried many, many times to make Safepoint work, following the different advice given throughout the years, to no avail. I actually gave up on it and have just been using Ubuntu’s Déjà Dup on a dedicated device, which basically defeats the purpose of the My Cloud. Until I saw your post.

I just want to thank you for sharing this with the community, even though I have yet to try it myself (just about to; I’m just clearing up one My Passport drive to use as my backup). I registered on this forum just to be able to thank you.

Keep it up.


@nicktee55

I’m happy to report back that I was able to make this one work except for the email.
For the scheduling, I wanted the My Cloud to run rsync twice a day, since a number of PCs synced to this device have files (mostly MS Office related) that are constantly updated throughout the day. So I used this cron entry instead, to rsync at 12 am and 12 pm:

00 00,12 * * * /shares/system/Tasks/backup.sh
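For anyone tweaking the schedule, the five cron fields read, in order: minute, hour, day of month, month, and day of week. A commented sketch (the script path is the one used in this thread; the hours are just an example):

```shell
# ┌──────── minute (0-59)
# │ ┌────── hour (0-23); a comma list like 0,12 means both hours
# │ │ ┌──── day of month (1-31)
# │ │ │ ┌── month (1-12)
# │ │ │ │ ┌ day of week (0-6, Sunday = 0); * means "every"
# m h dom mon dow  command-to-run
#
# e.g. minute 0 of hours 0 and 12, every day:
# 0 0,12 * * * /shares/system/Tasks/backup.sh
```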

However, on the email part, I’m not yet able to make it work and have yet to figure out the problem.
Will post back with any developments.

Thanks, man.

Might be your email server. With Gmail, I had to change a Gmail setting to allow 'less secure apps'.

gmail security settings

Email is just a nice touch.

Alternatively, get it to simply write a message to a log file. Put the log file somewhere in admin user space, so you can see it from the network without having to SSH in.
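A minimal sketch of that approach (the log path defaults to /tmp here just so the snippet runs anywhere; on the NAS you would point it at a network-visible share, e.g. somewhere under /shares that the admin user can browse — that path is an assumption):

```shell
# Append timestamped status lines to a log file the admin can see
# over the network. On the My Cloud, set BACKUP_LOG to something like
# /shares/Public/backup-status.log (example path, adjust to your shares).
BACKUP_LOG="${BACKUP_LOG:-/tmp/backup-status.log}"

log_status() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') $*" >> "$BACKUP_LOG"
}

log_status "backup started"
# ... rsync would run here ...
log_status "backup finished"
```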

When I run the python script I get syntax errors. I even typed the code by hand instead of copying/pasting.

Looks like the forum formatter has mangled the code, interpreting some characters as format commands. There’s certainly a missing # from the first line, which I’m sure should read

#!/usr/bin/env python

Oh; I’ve already tried to correct that in the original thread:

More than just that:

File "./kkk.py", line 2
– coding: utf-8 –

I can confirm that the forum formatter has mangled the first two lines for reasons best known to itself. I’ve managed to now post the lines correctly by adding a space at the start of lines 1 and 2.

The first two lines should be:

#!/usr/bin/env python

# -*- coding: utf-8 -*-

There should also be a leading # on the "send email" line.

I think it should be OK now, but if you have any difficulties, have a look at arty2’s post at Sending email with My Cloud.

Use the ‘code formatting’ to stop it interfering with code; the </> icon in the format bar.

I guessed right about the utf-8 line, then…

See my full corrections at the link I posted. Was the link that hard to follow?

Great post. Set everything up, checked everything was working, and then kicked off the cron job, only to discover … half an hour later … that it was part way through the third share, had stopped copying the first two, and those were incomplete.

Stopped everything and dry-ran the first rsync, only to be greeted with the error:

Stack Smashing Detected … Terminated.

So I’m currently using cp -R to make a full copy of all the shares before re-enabling the crontab entry in the hopes that it will cope.

I’m presuming that it’s the volume of files it is trying to sync. If I only make a few changes, then I can imagine it will work, but if something touches a large number of files, this is going to hit again.

Any idea how to avoid it?

Hi DarkRayven

Sometimes I have found that rsync can have problems with corrupted files, especially JPGs and MP3s for some reason. Rsync tries, and tries again several times, and sometimes quits or gives up.

Have you tried rsync with log-file output, or double or triple verbose (-vv or -vvv)? Maybe you can find a clue there.

Also, you can try to look at what exit values rsync is returning to investigate further.
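Building on that, rsync’s exit status distinguishes partial transfers from hard failures, so a script can react differently to each. A sketch (the code-to-verdict mapping follows the rsync man page; the helper name is made up):

```shell
# Map an rsync exit status to a short verdict.
# Per the rsync man page: 0 = success, 23 = partial transfer due to error,
# 24 = partial transfer because source files vanished mid-run.
classify_rsync_rc() {
    case "$1" in
        0)     echo "success" ;;
        23|24) echo "partial transfer - check the log" ;;
        *)     echo "failure" ;;
    esac
}

# Typical use:
#   rsync -a --log-file=/shares/System/backup.log src/ dst/
#   classify_rsync_rc $?
classify_rsync_rc 0    # -> success
classify_rsync_rc 24   # -> partial transfer - check the log
```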

Tracked it down to an apparent lack-of-memory issue. The initial copy was going to be over 100,000 files (and that was just in one directory structure). If I ran the same rsync command again, it skipped the files that were already copied and picked up from where it left off, only to fail again later. I think it had too much to cope with (2.4 TB of files in the first run).

I ended up aborting this for the initial copy and using

cp -Rp

instead to make an initial copy.

Once that had been done (took about 12 hours), I scheduled a backup script using

rsync --archive --delete

to keep the mirror copy in sync.

Touch wood, it has worked so far, but I have it send me an initial email before the backup starts, warning me that if I don’t get a completion email afterwards, I should check for failures.

I run this script once per week to back up between the My Cloud and an older My Book Live:

mount //192.168.10.3/NAS_BACKUP /media/RSBACKUP -o password="welc0me"

rc=$?
if [ $rc != 0 ]
then
/usr/bin/python /shares/System/backup-email.py "Message from WD MyCloud" "Failed to mount remote share to RSBACKUP"
exit
fi

sleep 6
cp /dev/null /shares/System/backup.log
sleep 3

rsync -rtDvH --delete --log-file /shares/System/backup.log --exclude-from /shares/System/exclude.txt /shares/ /media/RSBACKUP >/dev/null
rc=$?
sleep 6
umount /media/RSBACKUP
sleep 3

if [ $rc != 0 ]
then
/usr/bin/python /shares/System/backup-email.py "Message from WD MyCloud" "System backup has failed" 2>/dev/null
else
/usr/bin/python /shares/System/backup-email.py "Message from WD MyCloud" "System backup has been completed" 2>/dev/null
fi

Hi whsbuss

Nice mirror script…

I see that you are using the -H option (preserve hard links), so this must be a mirror to another Linux file system.

I mirror my files to a My Passport Ultra connected to My Cloud. My Passport uses a Windows file system (maybe exFAT).

I’m considering formatting the My Passport to ext4 so I can have a more complete backup like yours. Is rsync working OK every week, or is the exit value sometimes different from 0?

Also, I wonder why you pause execution of the script for 3 and 6 seconds. Is it necessary?

So far, rsync has worked flawlessly for a year now. Be advised that when you update the firmware, the mount point on /media/RSBACKUP will be removed and will have to be manually reconnected.

I’m an old Unix guy, and I like giving the operating system a few seconds so consecutive shell commands don’t conflict with each other.

Bumping this old thread. I used a variation of the code whsbuss posted above to back up select folders to an attached USB hard drive. While the following code isn’t elegant and there are probably better ways to write it, it does appear to work for me.

The following mail.py code will use Yahoo.com email and will attach the backup.log file to the email.

#!/usr/bin/env python

# -*- coding: utf-8 -*-
import sys
from email.header import Header
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.MIMEBase import MIMEBase  # Python 2 import path (the firmware ships Python 2)
from email import Encoders
from smtplib import SMTP_SSL

# Edit the values between the quotes in the line below with your own settings.
# Change the backup.log file location to match the file to be attached to the email.

login, password, server, recipients, logfile = "full.Yahoo.Email.Address.Goes.Here", "Yahoo.Password", "smtp.mail.yahoo.com", "Recipient.Email.Address", "/DataVolume/shares/system/backup.log"  

# send email

subject = sys.argv[1]
body = sys.argv[2]
msg = MIMEMultipart()
msg.attach(MIMEText(body, 'plain', 'utf-8'))
msg['Subject'] = Header(subject, 'utf-8')
msg['From'] = login
msg['To'] = recipients
part = MIMEBase('application', "octet-stream")
part.set_payload(open(logfile, "rb").read())
Encoders.encode_base64(part)
# Change file name in following line to match filename of attached file
part.add_header('Content-Disposition', 'attachment; filename="backup.log"')
msg.attach(part)
s = SMTP_SSL(server, 465, timeout=10)
s.set_debuglevel(1)
try:
    s.login(login, password)
    s.sendmail(msg['From'], recipients, msg.as_string()) 
except Exception, error:
    print "Unable to send e-mail: '%s'." % str(error)
finally:
    s.quit()

The reason for using this code was that Safepoint is an all-or-nothing affair, and I wanted a backup process that would back up a select number of Shares and then email me (with the log file attached). The log file will show what files (if any) have been mirrored to the USB hard drive.
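If it helps anyone wiring this into a backup script, here is a small shell wrapper sketch; the mail.py path is the one used in this thread, and the fallback branch is just so the snippet also runs on machines that don’t have the script:

```shell
# Wrapper around the mail.py notifier used in this thread.
MAILER="${MAILER:-/shares/System/Tasks/mail.py}"

notify() {
    # $1 = subject, $2 = body
    if [ -f "$MAILER" ]; then
        python "$MAILER" "$1" "$2"
    else
        # Fallback for machines without the script (e.g. testing on a PC)
        echo "would send: $1 / $2"
    fi
}

notify "Message from WD MyCloud" "Backup of selected shares completed"
```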

Can post full information/code for anyone interested.


The email-by-Python approach won’t work on the v2.x firmware.
Internally it is BusyBox-based, and its Python does not have SMTP_SSL.

This thread was generally discussing alternatives to the Safepoint feature, which the second gen v2.x My Cloud does not have. The v2.x firmware has a different feature (with different options) called Backup.

The second gen v2.x single bay firmware is different from the first gen v4.x single bay My Cloud firmware. As such, it is not surprising that certain packages in the older v4.x firmware are not present in the newer v2.x firmware, which uses, among other things, BusyBox.

I just wanted to thank you for this post, and give an update for anyone looking to follow this.

I am running a gen2 at firmware 2.31.183.

The main difference was that the nano command to edit files did not exist, so I used vi instead. Also, the USB share was not under /var/media, but in the /shares directory itself. I also added the --delete argument to keep the copies matching.

As mentioned, the formatting of the e-mail script is not correct, and you should reference arty2’s post to get the correct formatting. Even the indentation in the try/except block matters. Arty2’s script did not need further modification for me, other than entering my personal account information.

I use Gmail with two-factor authentication, which required an app-specific password to be generated for the script. This is done through the Gmail account settings.

I later learned that the WD My Cloud has jobs that rebuild the list of scheduled cron jobs; to allow your entry to remain, you need to edit a config.xml file. I followed andrewismoody’s instructions at the bottom of this thread, then restarted the device. It sounds like incorrect formatting when editing config.xml could prevent access to your device and require a 40-second reset, though that should not cause loss of data. So be careful when doing this. I copied existing XML elements, pasted them, then edited those to ensure proper formatting. Crontab on MyCloud EX2

Finally, I ran the backup script for the initial copy, which took over a day for me, and I was concerned about a backup running over 24 hours and two rsync jobs ending up running at the same time. I updated my backup script with an || (or) condition, which skips the backup if any rsync jobs are already running. This device doesn’t have pgrep, and I haven’t studied Unix in over a decade, so there is surely a better method. However, this is what I did to prevent the backup from running if an rsync job is already in progress.

ps -ef|grep rsync|grep -v grep || rsync -a --delete /shares/Public/ /shares/My_Book_1230-2/Backup/Public/
ps -ef|grep rsync|grep -v grep || python /shares/System/Tasks/mail.py "Message from WD MyCloud" "System backup of share Public has been completed successfully"
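For what it’s worth, a common portable alternative to the ps|grep guard is a lock directory: mkdir is atomic, and BusyBox has it. A sketch, with the lock path just an example:

```shell
# Only one backup instance may hold the lock at a time.
LOCKDIR="${LOCKDIR:-/tmp/mycloud-backup.lock}"

if mkdir "$LOCKDIR" 2>/dev/null; then
    # We got the lock; release it even if the script is interrupted.
    trap 'rmdir "$LOCKDIR"' EXIT INT TERM
    echo "lock acquired, running backup"
    # rsync -a --delete /shares/Public/ /shares/My_Book_1230-2/Backup/Public/
else
    echo "another backup is still running, skipping this run"
fi
```

The mkdir either creates the directory and succeeds, or fails because it already exists; there is no window where two instances can both think they hold the lock, which is the race the ps|grep approach leaves open.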

And one final note, not really related to this thread: I had a lot of trouble getting the external USB device to mount. It was a 4TB WD My Book I previously used with an Xbox One, and it was formatted for Xbox games. I initially plugged in the device before reformatting, and I think the My Cloud remembered the Xbox formatting instead of picking up the new NTFS formatting. I tried everything: deleting partitions, formatting as MBR and GPT, etc. The drive wasn’t identified until I formatted it as HFS+, and as you can see, the My Cloud added the share with a -2 at the end, which I believe indicates a duplicate. Anyway, someone smarter than me could explain this, but I suspect the My Cloud wasn’t picking up on the NTFS reformat and thought it was still formatted for Xbox, while the HFS+ format was different enough for it to be seen as new. Just a theory.