From: Daniel Hahler
Subject: Re: [Duplicity-talk] (Option to) Cache retrieved volumes between (fetching/restoring) runs locally?!
Date: Wed, 10 Nov 2010 00:22:47 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686; de; rv:1.9.2.12) Gecko/20101027 Thunderbird/3.1.6

Hello,

> It would be much easier to simply mirror your repository to a local path and 
> restore from there.

It would be ~30GB, which would take a while to mirror; apart from that,
you might not have the space available.
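
(For reference, I read that suggestion as roughly the following, with the
host and paths being placeholders and the exact invocation untested:

  # mirror the whole remote repository to a local scratch directory
  rsync -av backuphost:/path/to/repository/ /mnt/scratch/repository/
  # then restore a single path from the local mirror via the file:// backend
  duplicity restore --file-to-restore path/to/files \
      file:///mnt/scratch/repository /tmp/path/to/files

i.e. the full ~30GB would have to come over the wire before anything could
be restored.)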

> Can't you just fetch/restore all files/folders in one go?

Does "duply fetch" / duplicity support fetching multiple files? It does
not look like it from the man page.
If you're referring to fetching only the bulk of it, that would be the
root directory of all the virtual containers (which is >90% of the whole
backup).
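
(A single-run variant would presumably be just something like

  # path/to/containers stands in for the common parent directory of all
  # the virtual containers; syntax as in the loop quoted below
  duply profile fetch path/to/containers /path/to/containers 1D

but that would pull in far more data than the paths I actually need.)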


Thanks,
Daniel
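
PS: To make the expiry idea from my original mail (quoted below) a bit more
concrete, it could behave roughly like this; the cache directory, the
6-hour window and the 200-file cap are only examples, none of this exists
in duplicity yet:

  # hypothetical local cache of already-downloaded volumes
  CACHE=/var/cache/duplicity-volumes
  # on a cache hit, touch the volume to reset its timer
  # ($volname stands for the volume that was just requested)
  touch "$CACHE/$volname"
  # on shutdown, drop volumes that have not been touched for 6+ hours
  find "$CACHE" -type f -mmin +360 -delete
  # and cap the cache at 200 files, removing the oldest first
  ls -1t "$CACHE" | tail -n +201 | while read -r f; do rm -f -- "$CACHE/$f"; done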

> On 09.11.2010 21:53, Daniel Hahler wrote:
>> Hello,
>>
>> I would like to be able to cache retrieved files from the backend
>> locally between multiple runs of duplicity, e.g. via some config or
>> command line option.
>>
>> Use case: having accidentally overwritten a lot of my (virtual)
>> container files, I've used the following to restore the previous state:
>>   for i in $=VCIDS; do
>>     b=path/to/files
>>     /bin/rm -rf /$b*
>>     duply profile fetch $b /$b 1D
>>     duply profile fetch ${b}.d /${b}.d 1D
>>   done
>>
>> This adds up to 60+ runs of duplicity (2 runs per container for 30+
>> containers: one for a single file, the other for a directory), and when
>> watching it with "--verbosity 9" it looks like a lot of the same volumes
>> (50M each in my case) are downloaded every time.
>>
>> I think it would speed up this (particular) use case dramatically if
>> these files were cached locally.
>>
>> I could imagine configuring something like "keep files for X hours":
>> when duplicity runs and finds cached files older than this, it cleans
>> them up on shutdown.
>> When a cached file is accessed, it gets touched to reset the timer.
>>
>> However, there should also be a maximum number of files to cache, since
>> the cache might otherwise easily fill up your local volume.
>>
>> I am thinking about caching the files encrypted (just as on the remote
>> site), but maybe caching decrypted files would make sense, too?
>>
>> Obviously, this should take into account whether this is a remote backup
>> (maybe by looking at the transfer rate of the files?!), and not pollute
>> the cache if the backend is as fast as a local transfer would be.
>>
>> What do you think?
>>
>>
>> Cheers,
>> Daniel
> 
> _______________________________________________
> Duplicity-talk mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/duplicity-talk
> 



