
[rdiff-backup-users] Incredibly slow i/o to NAS server


From: Andrea Bolandrina
Subject: [rdiff-backup-users] Incredibly slow i/o to NAS server
Date: Tue, 29 Nov 2016 01:31:28 +0000

Hi,

I'm running an rdiff-backup script to back up my laptop to my local NAS.

I love rdiff-backup and it normally works great, but at the moment I'm having a problem with one specific directory.
That directory is where I store my Docker images, so it contains a lot of files.
To be precise:
sudo find /mnt/vms/docker/ -type f|wc -l
852443
sudo du -hs /mnt/vms/docker
4.9G    /mnt/vms/docker
So, it's nearly a million files, but less than 5GB.
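
For context, the relevant part of the script is essentially just one rdiff-backup call per directory, roughly like the line below (the NAS hostname and destination path here are simplified placeholders, not the real ones):

  rdiff-backup /mnt/vms/docker nas::/backups/laptop/vms/docker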

When I ran the backup the first time, it didn't take too long (I can't remember exactly how long, but less than a couple of hours).
Then I updated some Docker images and removed the old ones (clearly lots of changes).
Now I've re-run rdiff-backup and it has been going for nearly 24 hours on that specific folder; there's no problem with any of the other folders.

After some investigation, it turned out the bottleneck is the i/o on the NAS.
This NAS is a more than reasonable machine for the job (a Dell workstation with two dual-core 2.0GHz Xeon CPUs), running headless Debian 7.
The storage for this backup is 4 SATA3 disks in an mdadm RAID6, partitioned with LVM for flexibility.
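If the exact layout matters, I can post the output of these (array status and LVM layout):

  cat /proc/mdstat
  lvs -o +devices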

Here is the iostat output:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           8.39   58.62   12.03    3.99    0.00   16.98

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda              15.40   448.00    4.60   94.40     0.08     1.87    40.29     0.70    7.12    6.13    7.17   6.26  62.02
sdb              22.40   442.00    5.60   93.00     0.11     1.84    40.50     0.41    4.13    5.18    4.07   3.35  33.06
sdc              20.20   443.00    3.60   95.20     0.10     1.85    40.34     0.35    3.54    5.44    3.47   2.61  25.76
sdd              27.40   432.20    4.60   93.60     0.13     1.80    40.26     0.49    4.97   10.17    4.72   3.61  35.46
md127             0.00     0.00    2.00  257.80     0.02     3.33    26.41     0.00    0.00    0.00    0.00   0.00   0.00
dm-6              0.00     0.00    0.40  168.20     0.00     0.66     8.00     1.61    9.36   11.50    9.35   5.77  97.24

sda, sdb, sdc and sdd are the disks that make up the RAID6, md127 is the RAID6 array device (not sure why everything is zero there), and dm-6 is the logical volume I'm saving the backup to.
It's writing 0.66MB/s and the volume is 97.24% utilised! Wow!
I suppose it's down to the number of writes/s: avgrq-sz for dm-6 is 8 sectors (i.e. 4KB per request), and ~168 write requests/s at 4KB each comes to roughly 0.66MB/s, so it looks like a constant stream of tiny writes rather than a throughput problem.
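
(The numbers above were collected with something like iostat -xm 5, i.e. extended per-device stats in MB at 5-second intervals; I'm quoting the flags from memory.)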

This i/o problem is also confirmed by vmstat (look at the "bo" column under "io"):
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 5  0 1902128 1188960 428720 12141168    0    0   346  1880 11644 21770 72 14 10  3
 6  0 1902128 1190248 429044 12147940    0    0    50  1221 10136 24383 67 12 17  3
 3  0 1902128 1183644 429328 12154196    0    0    38  1038 13587 25388 66 15 15  4
 6  0 1902128 1179204 429668 12158132    0    0   239   692 11868 22070 66 11 19  4
 5  0 1902128 1171128 429968 12166432    0    0    51  1138 12609 25194 68 15 13  3
 4  0 1902128 1165516 430264 12171088    0    0    45  1438 12365 27973 64 13 18  4
 4  0 1902128 1158788 430576 12177560    0    0    42   958 15849 27693 61 16 18  5
 5  1 1902128 1152560 430780 12183864    0    0   348  1135 14911 26943 65 13 18  4
 3  1 1902128 1145700 431008 12191060    0    0    41   990 16313 27469 62 16 18  4
 3  0 1902128 1139508 431168 12197124    0    0    40   806 11108 25448 65 12 19  3
 3  0 1902128 1133748 431688 12202564    0    0    82   880 9026 25443 63 15 18  4
 3  1 1902128 1124528 432024 12209916    0    0   238  5042 12518 24546 66 12 19  3
 5  0 1902128 1120960 432272 12213644    0    0    16   799 13788 22166 72 14 11  2
 2  0 1902128 1115848 432960 12218472    0    0    96  1192 12731 24136 66 12 18  3
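
(That was plain vmstat with a sampling interval, something like vmstat 5; again, the exact invocation is from memory.)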

In short, is there a flag I can pass to rdiff-backup, or anything else I can do, to mitigate this problem (presumably by reducing the number of writes)?

Thanks,
Andrea

