From: Dustin J. Mitchell
Subject: [Bug-tar] hard link extraction
Date: Mon, 22 Oct 2007 10:41:28 -0500

Over on the amanda-users list, we just had an interesting problem come up:
  http://marc.info/?t=119261743100002&r=1&w=2
to summarize: the user is running Cyrus, which creates scads of hard
links.  When restoring only a few files, tar fails because it can't
find the antecedent of a hard link -- the earlier archive member the
link was recorded against.

In detail, Amanda is running something like:
  tar --numeric-owner -xpGvf - path/to/file3 path/to/file4 <dumpfile
and the problem occurs when tar, during archive creation, recorded
path/to/file3 as a hard link to path/to/file1.  As I understand it,
tar records hard links symbolically inside the archive -- essentially
"this file is a hard link to $OTHERFILENAME".  This is necessary since
tar archives don't have the name -> inode -> data stream model of most
filesystems.
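
For concreteness, here's a minimal sketch of what I believe is
happening (file names invented to match the example above, and
assuming tar visits file1 before file3 when creating the archive):

  mkdir -p path/to
  echo data > path/to/file1
  ln path/to/file1 path/to/file3       # same inode as file1
  tar -cf dump.tar path/to             # file3 is stored as a link member, no data
  tar -tvf dump.tar | grep file3       # listing shows "path/to/file3 link to path/to/file1"
  mkdir restore && cd restore
  tar -xvf ../dump.tar path/to/file3   # fails: tar can't hard-link to path/to/file1,
                                       # which is neither on disk here nor being extracted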

This user was running a fairly old version of tar, but I reproduced
the same error with 1.18.

My reading of the documentation, and the lack of a reply to this post:
  http://www.archivum.info/address@hidden/2005-02/msg00011.html
suggest that this is, more or less, a consequence of tar's design.  It
would seem that reliably restoring an arbitrary subset of files from an
archive containing hard links is basically impossible, since a link's
antecedent may not be part of the extraction.  So I wonder: is there a
way to convince tar, at archive creation time, to ignore inode numbers
and write each file's data into the archive independently?  Compression
should make up for most of the space wasted by duplicating the data.
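
In the meantime, the only extraction-side workaround I can see is to
name the antecedent explicitly alongside the file actually wanted --
which of course assumes the restore code knows which member that is.
A sketch, reusing the example above:

  # pull in path/to/file1 as well, so the hard link can be recreated
  tar --numeric-owner -xpGvf - path/to/file1 path/to/file3 <dumpfile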

If I've somehow missed the corresponding --no-hard-links option,
please feel free to apply your clue-by-four.

TIA
Dustin

-- 
Storage Software Engineer
http://www.zmanda.com



