bug#24937: "deleting unused links" GC phase is too slow


From: Ludovic Courtès
Subject: bug#24937: "deleting unused links" GC phase is too slow
Date: Fri, 09 Dec 2016 23:43:57 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.1 (gnu/linux)

address@hidden (Ludovic Courtès) skribis:

> ‘LocalStore::removeUnusedLinks’ traverses all the entries in
> /gnu/store/.links and calls lstat(2) on each one of them and checks
> ‘st_nlink’ to determine whether they can be deleted.
>
> There are two problems: lstat(2) can be slow on spinning disks as found
> on hydra.gnu.org, and the algorithm is proportional in the number of
> entries in /gnu/store/.links, which is a lot on hydra.gnu.org.
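
For reference, the current phase boils down to a loop along these lines (a
simplified sketch, not the actual ‘LocalStore::removeUnusedLinks’ code;
error handling and the daemon’s other bookkeeping are omitted):

  #include <dirent.h>
  #include <sys/stat.h>
  #include <unistd.h>
  #include <string>

  /* Simplified sketch: scan .links in whatever order readdir(3) returns,
     lstat(2) every entry, and delete the ones whose link count is 1.
     The inodes end up being visited in essentially random order, hence
     the slowness on spinning disks.  */
  static void removeUnusedLinksNaive(const std::string &linksDir)
  {
      DIR *dir = opendir(linksDir.c_str());
      if (dir == nullptr)
          return;

      struct dirent *ent;
      while ((ent = readdir(dir)) != nullptr) {
          std::string name = ent->d_name;
          if (name == "." || name == "..")
              continue;

          std::string path = linksDir + "/" + name;
          struct stat st;
          if (lstat(path.c_str(), &st) == -1)   /* random-access lstat */
              continue;

          if (st.st_nlink == 1)                 /* no store item links to it */
              unlink(path.c_str());
      }

      closedir(dir);
  }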

On Dec. 2, on address@hidden, Mark described an approach that noticeably
improves performance:

  The idea is to read the entire /gnu/store/.links directory, sort the
  entries by inode number, and then iterate over the entries by inode
  number, calling 'lstat' on each one and deleting the ones with a link
  count of 1.

  The reason this is so much faster is because the inodes are stored on
  disk in order of inode number, so this leads to a sequential access
  pattern on disk instead of a random access pattern.

  The difficulty is that the directory is too large to comfortably store
  all of the entries in virtual memory.  Instead, the entries should be
  written to temporary files on disk, and then sorted using merge sort to
  ensure sequential access patterns during sorting.  Fortunately, this is
  exactly what 'sort' does from GNU coreutils.

  So, for now, I've implemented this as a pair of small C programs that is
  used in a pipeline with GNU sort.  The first program simply reads a
  directory and writes lines of the form "<inode> <name>" to stdout.
  (Unfortunately, "ls -i" calls stat on each entry, so it can't be used).
  This is piped through 'sort -n' and then into another small C program
  that reads these lines, calls 'lstat' on each one, and deletes the
  non-directories with link count 1.
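
For illustration, the first of those two programs could look roughly like
this (a hypothetical sketch, not Mark’s actual code); it emits
"<inode> <name>" lines using only readdir(3), so no per-entry stat call is
made:

  #include <dirent.h>
  #include <cstdio>
  #include <cstring>

  /* Hypothetical sketch of the directory lister: print "<inode> <name>"
     for every entry of the given directory, without calling stat(2)
     (unlike "ls -i"), so the output can be piped through 'sort -n' to
     obtain inode order.  */
  int main(int argc, char *argv[])
  {
      if (argc != 2) {
          std::fprintf(stderr, "usage: %s DIRECTORY\n", argv[0]);
          return 1;
      }

      DIR *dir = opendir(argv[1]);
      if (dir == nullptr) {
          std::perror("opendir");
          return 1;
      }

      struct dirent *ent;
      while ((ent = readdir(dir)) != nullptr) {
          if (std::strcmp(ent->d_name, ".") != 0
              && std::strcmp(ent->d_name, "..") != 0)
              std::printf("%llu %s\n",
                          (unsigned long long) ent->d_ino, ent->d_name);
      }

      closedir(dir);
      return 0;
  }

The second program (not shown) would read those lines on stdin, lstat each
name, and unlink the non-directories with a link count of 1, so the pipeline
would be something like (program names made up for this sketch):

  ./list-inodes /gnu/store/.links | sort -n | ./delete-unused-links /gnu/store/.links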

Regarding memory usage, I replied:

  Really?

  For each entry, we have to store roughly 70 bytes for the file name (or
  52 if we consider only the basename), plus 8 bytes for the inode number;
  let’s say 64 bytes.

  If we have 10 M entries, that’s 700 MB (or 520 MB), which is a lot, but
  maybe acceptable?

  At worst, we may still see an improvement if we proceed by batches: we
  read 100,000 directory entries (7 MB), sort them, and stat them, then
  read the next 100,000 entries.  WDYT?
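
That batching idea could look roughly like this (a hypothetical sketch with
a tunable batch size, not actual daemon code):

  #include <dirent.h>
  #include <sys/stat.h>
  #include <unistd.h>
  #include <algorithm>
  #include <cstddef>
  #include <string>
  #include <utility>
  #include <vector>

  /* Hypothetical sketch of the batched variant: read a bounded number of
     directory entries, sort that batch by inode number so the lstat(2)
     calls proceed mostly sequentially on disk, delete the entries whose
     link count is 1, then move on to the next batch.  Memory usage stays
     proportional to the batch size rather than to the directory size.  */
  static void removeUnusedLinksBatched(const std::string &linksDir,
                                       std::size_t batchSize)
  {
      DIR *dir = opendir(linksDir.c_str());
      if (dir == nullptr)
          return;

      bool done = false;
      while (!done) {
          std::vector<std::pair<ino_t, std::string>> batch;

          struct dirent *ent;
          while (batch.size() < batchSize
                 && (ent = readdir(dir)) != nullptr) {
              std::string name = ent->d_name;
              if (name != "." && name != "..")
                  batch.emplace_back(ent->d_ino, name);
          }
          if (batch.size() < batchSize)
              done = true;               /* reached the end of the directory */

          std::sort(batch.begin(), batch.end());   /* sort by inode number */

          for (const auto &item : batch) {
              std::string path = linksDir + "/" + item.second;
              struct stat st;
              if (lstat(path.c_str(), &st) == -1)
                  continue;
              if (!S_ISDIR(st.st_mode) && st.st_nlink == 1)
                  unlink(path.c_str());
          }
      }

      closedir(dir);
  }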

Ludo’.




