From: Robert Nichols
Subject: Re: [rdiff-backup-users] Memory usage during regressions
Date: Sat, 06 Aug 2011 12:13:43 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.18) Gecko/20110621 Red Hat/3.1.11-2.el6_1 Thunderbird/3.1.11
On 08/06/2011 08:35 AM, Claus-Justus Heine wrote:
> Hi there,
>
> I'm experiencing quite high memory usage during regressions. I have a
> backup server with only 2G of RAM and do daily backups. Sometimes a
> backup fails, and then, of course, rdiff-backup first recovers the most
> recent backup which did not fail. During this process, rdiff-backup
> blows up to approx. 3G of RAM, and then things start to slow down
> (swapping). It's quite a large backup set, about 400G, with a large
> history. It doesn't seem to be a memory leak, as the memory usage stays
> at 3G. It just seems a little too much in principle.
Regression is concerned with only the two most recent sessions, so the
amount of history should be irrelevant. What is the total number of files
being backed up, and the size of the uncompressed mirror_metadata
snapshot?

    zcat file_statistics.{latest_timestamp}.data.gz | tr '\0' '\n' | wc -l
    zcat mirror_metadata.{latest_timestamp}.snapshot.gz | wc -c

Those would be more indicative of the amount of data that needs to be
kept in memory during the regression.

FWIW, I'm seeing memory usage of about 480MB during regression of a
backup of about 250,000 files, though the number of changed files needing
to be regressed is quite small (~1000).

-- 
Bob Nichols     "NOSPAM" is really part of my email address.
                Do NOT delete it.
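For convenience, the two checks above can be wrapped in a short script that
picks the newest session files automatically. This is a minimal sketch, not
part of rdiff-backup itself; the DATA_DIR path is a placeholder you would
point at your own repository's rdiff-backup-data directory, and it assumes
the usual lexical sort of the timestamped filenames puts the latest session
last.

```shell
#!/bin/sh
# Report file count and uncompressed metadata size for the latest session.
# DATA_DIR is a placeholder -- set it to your rdiff-backup-data directory.
DATA_DIR=/backup/rdiff-backup-data

# Newest file_statistics and mirror_metadata files (timestamps sort lexically).
STATS=$(ls "$DATA_DIR"/file_statistics.*.data.gz | tail -n 1)
META=$(ls "$DATA_DIR"/mirror_metadata.*.snapshot.gz | tail -n 1)

echo "Files in last session:"
zcat "$STATS" | tr '\0' '\n' | wc -l

echo "Uncompressed mirror_metadata bytes:"
zcat "$META" | wc -c
```

The counts it prints are the same numbers asked for above, so they can be
quoted directly when reporting memory behaviour on the list.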