Re: [rdiff-backup-users] Converting from rsync and many other thoughts


From: Brad Templeton
Subject: Re: [rdiff-backup-users] Converting from rsync and many other thoughts
Date: Tue, 22 Jul 2008 20:55:10 -0700
User-agent: Mutt/1.5.9i

On Thu, Jul 10, 2008 at 04:33:58PM -0600, Steven Willoughby wrote:
> >
> >But then when I did rdiff-backup from the live directory onto this new
> >backup, it is treating every file as different, and leaving behind a
> >.diff.gz file which is small and looks random to me.   As far as I know
> >there should be no differences.
> 
> The reason rdiff-backup thinks the file changed is that the metadata has 
> changed.  (There is a property in the rdiff-backup/mirror_metadata.* 
> file called "NumHardLinks" which will be set to 2, but the first run of 
> rdiff-backup un-hard-linked the files.)
> 
> Try doing it again and using --no-hard-links on your _first_ run of 
> rdiff-backup.  This doesn't create the "NumHardLinks" property for me.
> 
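If I follow, that means a first run along these lines (paths made up by me
for illustration):

    # first run: skip hard-link tracking so "NumHardLinks" is never recorded
    rdiff-backup --no-hard-links /live/data /backup/data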

Would this not make me lose my hard links on the target, or would it
preserve them?

Since I suspect this wish to switch from an rsync-based backup to an
rdiff-backup-based one is not so uncommon, an option that simply builds
correct metadata for an existing mirror, turning it into an
rdiff-backup-capable mirror, might make some sense.

> 
> This currently isn't possible AFAIK.  You might be able to pause the 
> rdiff-backup process during the day with kill -STOP and then resume it 
> again the next night with -CONT if you turn on SSH's KeepAlive option.

I fear that's not practical.  Many things might happen in between, reboots
and the like.   Interrupted rsyncs, by and large, just resume nicely, but
they are not maintaining metadata.
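For reference, though, the suggestion as I read it amounts to something like
this (the pgrep pattern is my guess; the ssh_config keepalive knob is, I
believe, KeepAlive/TCPKeepAlive or ServerAliveInterval, depending on version):

    # pause the running transfer in the morning
    kill -STOP $(pgrep -f rdiff-backup)
    # resume it the next night
    kill -CONT $(pgrep -f rdiff-backup)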

> 
> The way I do this is with multiple backups: one for photos, another for 
> documents, etc.

You can do that, but files are intermingled to some extent, and this makes
it difficult to deal with propagating deletions and so on.   And even if you
use different directories, any hard links between them could be lost.
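That is, something like the following, one session per subtree (host and
paths made up):

    rdiff-backup /home/brad/photos offsite::/backup/photos
    rdiff-backup /home/brad/docs   offsite::/backup/docs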
> 
> 
> >b) Like rsync, be able to write the update stream to a file, which then
> >goes onto a physical disk that is taken to the offsite.   This is another
> >way of handling when there is a very big difference, too large to send
> >over the internet.  So again, you want the smaller files, the more
> >important files to go over the internet, but when you happen to be
> >ready for a physical trip, you write the difference to a removable drive,
> >and you take it to the backup server and you apply it.   Now all the
> >big files are updated, and you are fully up to date.
> 
> You can do this using the cp -al trick you discovered earlier.  Write 
> the big files to the disk, take them offsite, create a copy with cp -al, 
> remove the rdiff-backup-data directory from the copy, move your big 
> files into the proper place in the copy, and then run rdiff-backup 
> --no-hard-links $copy $dest

I'm not clear on what you are saying here.   The big files are interspersed
among various other files.  Big files that are moved or deleted would not get
caught by this.   I suppose another alternative would be to have rsync write
out its stream, take the stream to the offsite, cp -al the offsite, apply the
rsync stream to it, and then do the rdiff-backup.   As long as the no-hard-links
won't destroy my hard links that already exist in either copy.
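
In command form, my variant would be roughly this (paths made up, and it
assumes a local copy of the mirror for rsync to diff against; I have rsync's
batch mode, --only-write-batch/--read-batch, in mind for the stream):

    # at home: record the changes to a file on a removable drive
    rsync -a --only-write-batch=/mnt/usb/changes /live/data/ /local/mirror/
    # at the offsite: hard-link snapshot of the current mirror
    cp -al /offsite/current /offsite/copy
    rm -r /offsite/copy/rdiff-backup-data
    # apply the recorded changes to the snapshot
    rsync -a --read-batch=/mnt/usb/changes /offsite/copy/
    # then let rdiff-backup pick up the result
    rdiff-backup --no-hard-links /offsite/copy /offsite/current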
> 
> >
> >
> >c) Encrypted remote store.  For those who want to do an offsite to a
> >friend's house, it would be cool to have the remote store be encrypted.
> >This does mean that any "diff" is going to be binary.
> 
> Duplicity seems to be the better tool to accomplish this.

It is designed for that, but it doesn't have the nice features of rdiff-backup
that are attractive.  Wanting it all, I seek a tool that does all these things:

            a) Keeps incrementals reasonably efficiently
            b) Does not require root on the mirror system, nor even the
                same password/group database
            c) Encrypts, so that people on the mirror system without the key
                can't see the files.  Possibly can't even see the directory
                structure (this requires a system where files are stored under
                simple opaque names in a random structure, and encrypted
                metadata files record the real structure; see the sketch below)
            d) Is, of course, fast, bandwidth-efficient, and easy to use!
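
To illustrate (c): the store would hold only opaque blobs, something like
this made-up scheme (gpg for encryption; the name is derived from a secret
plus the path rather than plain content, so equal files don't leak; the key
id "backup-key" is hypothetical):

    # encrypt the file under a name that reveals nothing about its path
    name=$(printf '%s' "secret:/photos/img001.jpg" | sha256sum | cut -d' ' -f1)
    gpg --encrypt --recipient backup-key -o "/store/$name" /photos/img001.jpg
    # a separate, encrypted index maps real paths to stored names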
          



