[Bug-tar] bug in filename buffer
From: Christian Wetzel
Subject: [Bug-tar] bug in filename buffer
Date: Thu, 14 Mar 2013 09:14:47 +0100
User-agent: Thunderbird 2.0.0.12 (X11/20080213)
Hi there,
the new tar buffers the filenames read from stdin (cat <some_filenames> | tar c -T- | ...).
There are two problems with that:
1. tar does not check the available RAM or address space. On a 32-bit system, tar crashes when the file list is too large.
2. On a 64-bit system it consumes all available RAM (a reproduction sketch follows below).
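A hedged reproduction sketch for both problems (./dummy and the count of 200 million names are made-up illustration values; it assumes, as described above, that tar buffers the whole name list before archiving):

# create one small file, then feed its name to tar many times;
# if tar buffers every name first, its memory use grows with the
# line count before any archiving starts
touch ./dummy
yes ./dummy | head -n 200000000 | tar cf /dev/null -T-

200 million names of roughly 8 bytes each already exceed a gigabyte of buffered data, which, together with per-entry allocator overhead, is enough to exhaust a 32-bit address space.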
We used tar to copy a lot of files (billions) over the network like this:
zcat large_filelist.gz | tar c -T- | netcat ip port
on the other side:
netcat -lp port | tar x
It was the fastest way to copy a large number of files, and this is no longer possible.
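One possible workaround sketch, not a tested recipe: if the installed split supports --filter (GNU coreutils 8.13 or newer), the name list can be chunked so that each tar invocation buffers only a bounded number of names, and the resulting concatenated archives can be extracted with -i (--ignore-zeros):

zcat large_filelist.gz | split -l 1000000 --filter='tar c -T-' | netcat ip port

on the other side:

netcat -lp port | tar x -i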
For now we have downgraded to the old tar, but will there be a switch to turn this new behaviour off?
The GNU tools have always been incredibly reliable (thanks for that), and now the streaming capability and reliability of tar are being thrown away for an, imho, unnecessary feature?
Best regards,
Christian Wetzel