Safepoint Backup/Restore Documentation Availability?

Is there any technical documentation available on how the safepoint backup/restore functionality works on the MyBook Live?  Basically I am looking for something that would explain in detail how the MyBook Live creates and subsequently updates the safepoint - or is this proprietary information?  With an understanding of how the safepoint backup works, I might be better able to understand why the safepoint creation and/or restore feature is so incredibly slow.

~Scott

This is simple.  It uses the Linux rsync utility.  First the Safepoint routine spawns rsync in dry-run mode to gather statistics on how long the back-up will take.  This process takes a while if there are a lot of files.  It does require the target to be online.

Once that is done and everything is deemed to be OK, rsync is run again, and this time the copying actually takes place.
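To make the two-pass idea concrete, here is a minimal Python sketch of the same "dry-run first, copy second" pattern. This is not WD's actual script (which shells out to rsync itself); the function names are made up, and the "quick check" here compares only file size, a crude stand-in for rsync's default size-plus-mtime check:

```python
import os
import shutil

def plan_copy(src, dst):
    """Dry-run pass: walk the source tree and tally what would be copied."""
    files, total_bytes = 0, 0
    for root, _dirs, names in os.walk(src):
        for name in names:
            s = os.path.join(root, name)
            d = os.path.join(dst, os.path.relpath(s, src))
            # Copy when the target is missing or differs in size
            # (rsync's real quick check also compares mtimes).
            if not os.path.exists(d) or os.path.getsize(d) != os.path.getsize(s):
                files += 1
                total_bytes += os.path.getsize(s)
    return files, total_bytes

def do_copy(src, dst):
    """Second pass: actually copy the files the dry run counted."""
    for root, _dirs, names in os.walk(src):
        for name in names:
            s = os.path.join(root, name)
            d = os.path.join(dst, os.path.relpath(s, src))
            if not os.path.exists(d) or os.path.getsize(d) != os.path.getsize(s):
                os.makedirs(os.path.dirname(d), exist_ok=True)
                shutil.copy2(s, d)  # copy2 preserves timestamps, like rsync -a
```

Note that both passes walk the entire tree, which is why a share holding hundreds of thousands of small files (like a Time Machine sparse bundle) makes even the dry run slow.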

I have about 460 GB of data on my NAS, and invoking the rsync utility in dry-run mode, without using W.D.'s safepoint routines, takes a while to complete.

I think you’ll find this useful:

http://community.wdc.com/t5/My-Book-Live/How-to-perofrm-a-SafePoint-backup-and-exclude-bits-from-the/m-p/311820


Thanks Myron - This is very useful information.  This, combined with your link to the other post about excluding items from the safepoint backup explains some of the behavior that I was seeing while experimenting with the Safepoint backup and restore.

I wanted to point out that in my example 105 GB safepoint, about 45 GB of that was a Mac Time Machine sparse bundle image in the TimeMachine share (which probably consisted of 500,000+ files inside the sparse bundle), another 38 GB was a Norton Ghost v14 PC full image backup, and the rest - about 22 GB - was random files, probably a combination of music and photos.

The safepoint backup was taking forever churning through the Time Machine sparse bundle, which probably accounted for 85% of the backup time.  At certain points it appeared to be just stuck - but the drive was clicking away.  I did a subsequent test of the safepoint creation after deleting the sparse bundle image from the drive, and the creation was dramatically faster.  The Norton Ghost image file did not seem to impact the performance of the safepoint creation, but the sparse bundle seems to have brought the creation process to a crawl.

I would think that a typical WD MyBook Live user would have a variety of different types of data stored on the drive - probably various backups from other systems, files that are stored on the NAS simply for convenience (not necessarily needing to be backed up), and critical data that is stored only on the NAS and needs to be backed up.  It would be really nice (as you stated in your other post) to be able to specify exclusion criteria for the safepoint backup - ultimately limiting the space needed and the time required to create/update the safepoint - making it possible to use a variety of other shared drive resources on the network rather than having to have a stand-alone MyBook Live drive sitting around to hold safepoints.

In my case I don’t necessarily want to back up my backup files on my safepoint - and would like to exclude them.  I will take a closer look at your other post on how to exclude specific items.

I am somewhat proficient with Linux and Perl - however I have no experience with rsync.  Do you happen to have a link that provides a good explanation of how to use the rsync utility?  I’m sure I can Google that - however I suspect you might be able to point me to something better than what I might find on my own.

~Scott 

I suspect that the SafePoint backup actually happens three times.

Once for the TimeMachine back-up, once for the SmartWare backup, and once for the Shares back-up.  I’m also guessing that the dry-run to pre-gather statistics is run each time.

My educated guess is that this is why SafePoint takes a LONG time to complete.  I don’t have anything Apple and have even turned off the netatalk protocol.  I also can’t use SmartWare because the .NET framework keeps crashing; only WD QuickView works, so I believe I’m not seeing this slow back-up.

My mission was to be able to exclude stuff from the back-up and I’ve achieved the goal.

So, yes, excluding directories, files, and even files and directories by pattern can be done.  This is not currently available in the Dashboard UI, but it would be awesome for W.D. to build into the Dashboard something that can manage the exclude.txt file, so quality surgical exclusion rules can be maintained by the end-user/owner.
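For the curious, rsync's exclude patterns behave roughly like shell globs. Here is a rough Python sketch of how entries in a hypothetical exclude.txt might match against relative paths - the share names and patterns are made up for illustration, and real rsync filter rules have extra behavior (anchoring, `**`, per-directory merges) that this simple glob check ignores:

```python
import fnmatch

# Hypothetical contents of an exclude.txt file (examples only)
EXCLUDES = [
    "TimeMachine/*",       # skip the whole Time Machine share
    "*.gho",               # skip Norton Ghost image files anywhere
    "Public/Downloads/*",  # skip a scratch directory
]

def is_excluded(path, patterns=EXCLUDES):
    """Return True if any exclude pattern matches the given relative path."""
    return any(fnmatch.fnmatch(path, p) for p in patterns)
```

A quick check of how the rules above would behave: `is_excluded("TimeMachine/backup.sparsebundle")` and `is_excluded("Shares/pc-image.gho")` would both be skipped, while `is_excluded("Music/song.mp3")` would still be backed up.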

There is one EXCELLENT thing that W.D. did: allowing access to the internals of the MBL.  I also agree that anyone who plays about with the internals and bricks their drive doing so really deserves the pain they bring upon themselves.

I’m rather paranoid about changing anything too much in case I break something, and before ANY future firmware update I’ll put the files back to their original versions.

I don’t plan on making changes to the MBL (or bricking it) at this point - although I do enable SSH and go in there to look around, trying to understand how it works.  I am amazed that the Safepoint backup actually works on all of my MBLs - not one problem.  I just tested both a new safepoint and the update of an existing one, and both completed flawlessly, including deleting one of the safepoints.  It took about 3 hours (I believe) to create a new 92 GB safepoint, and the estimated time was 4.5 hours.  I have not yet tested a full restore on the 02.10.09-124 firmware, but it did work successfully on the 02.02.02-020 firmware when I originally tested it.

The safepoint creation and update seems to consistently copy about 500 MB per minute (0.5 GB), which on my gigabit network seems a bit slow.  Both MBLs are attached to the same secondary gigabit switch off my main gigabit switch, with nothing else on the secondary switch other than the MBLs.  Yet I can copy an 8 GB dual-layer DVD image file across my network - through several switches, from a Mac to a share on the MBL - in about 3 minutes, which works out to a throughput of approximately 2.6 GB per minute.  I question why the 92 GB copy (MBL to MBL) can only run at 0.5 GB per minute - and in this case the 92 GB consisted primarily of two large Norton Ghost backup images.  Doesn’t seem to make a lot of sense.
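The throughput figures above can be sanity-checked with a bit of arithmetic (all numbers taken from the posts in this thread):

```python
# Safepoint copy: ~92 GB completed in about 3 hours
safepoint_rate = 92 / (3 * 60)   # GB per minute -> ~0.51, matching ~500 MB/min

# Direct file copy: 8 GB DVD image moved in about 3 minutes
direct_rate = 8 / 3              # GB per minute -> ~2.67

# The direct copy is roughly 5x faster than the safepoint copy
slowdown = direct_rate / safepoint_rate
```

So the observed ~500 MB/min safepoint rate is about a fifth of what the same network demonstrably sustains for a plain file copy - the gap is presumably rsync's per-file overhead rather than the wire.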

But it works - albeit slowly - and that is the important thing.

Here is a link to a tutorial on the Linux rsync utility, for those who are curious.

http://everythinglinux.org/rsync/