bug#24937: "deleting unused links" GC phase is too slow


From: Ludovic Courtès
Subject: bug#24937: "deleting unused links" GC phase is too slow
Date: Tue, 13 Dec 2016 01:00:07 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.1 (gnu/linux)

Mark H Weaver <address@hidden> skribis:

> address@hidden (Ludovic Courtès) writes:
>
>> Mark H Weaver <address@hidden> skribis:
>>
>>> I think we should sort the entire directory using merge sort backed to
>>> disk files.  If we load chunks of the directory, sort them and process
>>> them individually, I expect that this will increase the amount of I/O
>>> required by a non-trivial factor.  In each pass, we would load blocks of
>>> inodes from disk, almost all of which are likely to be present in the
>>> store and thus linked from the directory, but in this scheme we will
>>> process only a small number of them and drop the rest on the floor to be
>>> read again in the next pass.  Given that even my fairly optimal
>>> implementation takes about 35 minutes to run on Hydra, I'd prefer to
>>> avoid multiplying that by a non-trivial factor.
>>
>> Sure, though it’s not obvious to me how much of a difference it makes;
>> my guess is that processing in large chunks is already a win, but we’d
>> have to measure.
>
> I agree, it would surely be a win.  Given that it currently takes on the
> order of a day to run this phase on Hydra, if your proposed method takes
> 2 hours, that would be a huge win, but still not good, IMO.  Even 35
> minutes is slower than I'd like.

Of course.

I did some measurements with the attached program on chapters, which is
a Xen VM with spinning disks underneath, similar to hydra.gnu.org.  It
has 600k entries in /gnu/store/.links.

Here’s a comparison of the “optimal” mode (bulk stats after we’ve
fetched all the dirents) vs. the “semi-interleaved” mode (doing bulk
stats every 100,000 dirents):

--8<---------------cut here---------------start------------->8---
address@hidden:~$ gcc -std=gnu99 -Wall links-traversal.c  -DMODE=3
address@hidden:~$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
address@hidden:~$ time ./a.out
603858 dir_entries, 157 seconds
stat took 1 seconds

real    2m38.508s
user    0m0.324s
sys     0m1.824s
address@hidden:~$ gcc -std=gnu99 -Wall links-traversal.c  -DMODE=2
address@hidden:~$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
address@hidden:~$ time ./a.out 
3852 dir_entries, 172 seconds (including stat)

real    2m51.827s
user    0m0.312s
sys     0m1.808s
--8<---------------cut here---------------end--------------->8---

Semi-interleaved is ~10% slower here, 172 vs. 157 seconds (not sure how
reproducible that is, though).

>>> Why not just use GNU sort?  It already exists, and does exactly what we
>>> need.
>>
>> Does ‘sort’ manage to avoid reading whole files in memory?
>
> Yes, it does.  I monitored the 'sort' process when I first ran my
> optimized pipeline.  It created about 10 files in /tmp, approximately 70
> megabytes each as I recall, and then read them all concurrently while
> writing the sorted output.
>
> My guess is that it reads a manageable chunk of the input, sorts it in
> memory, and writes it to a temporary file.  I guess it repeats this
> process, writing multiple temporary files, until the entire input is
> consumed, and then reads all of those temporary files, merging them
> together into the output stream.

OK.  That seems to be what the comment above ‘sortlines’ in sort.c
describes.
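
For concreteness, re-implementing that chunk-and-merge strategy would
have roughly the shape below.  This is only a rough sketch: the chunk
size, the bound on the number of runs, and the use of ‘tmpfile’ are
arbitrary choices, and error handling is mostly omitted.

--8<---------------cut here---------------start------------->8---
/* Rough sketch of an external merge sort over (inode, name) pairs:
   sort fixed-size chunks in memory, spill each chunk to a temporary
   file as a sorted run, then merge the runs in inode order.  */

#include <assert.h>
#include <dirent.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

#define CHUNK 100000   /* entries sorted in memory at a time */
#define MAX_RUNS 64    /* arbitrary bound on the number of runs */

struct entry { ino_t inode; char name[256]; };

static struct entry buffer[CHUNK];

static int
entry_lower (const void *a, const void *b)
{
  ino_t ia = ((const struct entry *) a)->inode;
  ino_t ib = ((const struct entry *) b)->inode;
  return (ia > ib) - (ia < ib);
}

/* Sort BUFFER[0..COUNT) and write it out as one sorted run.  */
static FILE *
spill_run (size_t count)
{
  FILE *run = tmpfile ();
  qsort (buffer, count, sizeof *buffer, entry_lower);
  fwrite (buffer, sizeof *buffer, count, run);
  rewind (run);
  return run;
}

int
main (void)
{
  FILE *runs[MAX_RUNS];
  size_t nruns = 0, count = 0;

  DIR *links = opendir ("/gnu/store/.links");
  if (links == NULL)
    return EXIT_FAILURE;

  /* Phase 1: produce sorted runs of at most CHUNK entries each.  */
  for (struct dirent *d = readdir (links); d != NULL; d = readdir (links))
    {
      buffer[count].inode = d->d_ino;
      snprintf (buffer[count].name, sizeof buffer[count].name, "%s",
                d->d_name);
      if (++count == CHUNK)
        {
          assert (nruns < MAX_RUNS);
          runs[nruns++] = spill_run (count);
          count = 0;
        }
    }
  if (count > 0)
    {
      assert (nruns < MAX_RUNS);
      runs[nruns++] = spill_run (count);
    }
  closedir (links);

  /* Phase 2: merge the runs; this is where the real code would stat
     each entry and delete it when its link count is one.  */
  struct entry head[MAX_RUNS];
  int live[MAX_RUNS];
  for (size_t i = 0; i < nruns; i++)
    live[i] = fread (&head[i], sizeof head[i], 1, runs[i]) == 1;

  for (;;)
    {
      int min = -1;
      for (size_t i = 0; i < nruns; i++)
        if (live[i] && (min < 0 || head[i].inode < head[min].inode))
          min = i;
      if (min < 0)
        break;

      printf ("%ju %s\n", (uintmax_t) head[min].inode, head[min].name);
      live[min] = fread (&head[min], sizeof head[min], 1, runs[min]) == 1;
    }

  return EXIT_SUCCESS;
}
--8<---------------cut here---------------end--------------->8---

Memory use is bounded by the chunk size, and each entry is written to
and read back from disk exactly once before the merge.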

>>> If you object to using an external program for some reason, I would
>>> prefer to re-implement a similar algorithm in the daemon.
>>
>> Yeah, I’d rather avoid serializing the list of file names/inode number
>> pairs just to invoke ‘sort’ on that.
>
> Sure, I agree that it would be better to avoid that, but IMO not at the
> cost of using O(N) memory instead of O(1) memory, nor at the cost of
> multiplying the amount of disk I/O by a non-trivial factor.

Understood.

sort.c in Coreutils is very big, and we surely don’t want to duplicate
all that.  Yet, I’d rather not shell out to ‘sort’.
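
For the record, shelling out would amount to something like the
following hypothetical sketch: serialize “inode name” lines, run
‘sort -n’ over them, and parse them back, which is precisely the
serialization step I’d rather avoid.  (The /tmp file name is made up
and error handling is omitted.)

--8<---------------cut here---------------start------------->8---
#include <dirent.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  DIR *links = opendir ("/gnu/store/.links");
  if (links == NULL)
    return EXIT_FAILURE;

  /* Serialize the directory contents as "INODE NAME" lines.  */
  FILE *unsorted = fopen ("/tmp/links-unsorted", "w");
  for (struct dirent *d = readdir (links); d != NULL; d = readdir (links))
    fprintf (unsorted, "%ju %s\n", (uintmax_t) d->d_ino, d->d_name);
  fclose (unsorted);
  closedir (links);

  /* Let sort(1) do the external sort by inode number.  */
  FILE *sorted = popen ("sort -n /tmp/links-unsorted", "r");

  uintmax_t inode;
  char name[256];
  /* Link names are base32 hashes with no spaces, so a simple format
     suffices.  */
  while (fscanf (sorted, "%ju %255s", &inode, name) == 2)
    {
      /* This is where the real code would lstat NAME (relative to
         /gnu/store/.links) and delete it if its link count is one.  */
    }

  pclose (sorted);
  return EXIT_SUCCESS;
}
--8<---------------cut here---------------end--------------->8---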

Do you know how many entries are in .links on hydra.gnu.org?  If it
performs comparably to chapters, the timings suggest it should have
around 10.5M entries.

Thanks!

Ludo’.

#include <unistd.h>
#include <dirent.h>
#include <sys/types.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/time.h>
#include <string.h>
#include <sys/stat.h>
#include <assert.h>
#include <fcntl.h>              /* for AT_SYMLINK_NOFOLLOW */

/* MODE selects when the directory entries are stat'd:
   STAT_INTERLEAVED: stat each entry right after 'readdir' returns it;
   STAT_SEMI_INTERLEAVED: sort and stat in batches of ~100,000 entries;
   STAT_OPTIMAL: read all the entries first, then sort, then stat.  */
#define STAT_INTERLEAVED 1
#define STAT_SEMI_INTERLEAVED 2
#define STAT_OPTIMAL 3

struct entry
{
  char *name;
  ino_t inode;
};

#define MAX_ENTRIES 1000000
static struct entry dir_entries[MAX_ENTRIES];

int
main (void)
{
  struct timeval start, end;

  /* For useful timings, do:
     sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'  */
  gettimeofday (&start, NULL);
  DIR *links = opendir ("/gnu/store/.links");
  if (links == NULL)
    {
      perror ("opendir");
      return EXIT_FAILURE;
    }

  size_t count = 0;

#if MODE != STAT_INTERLEAVED
  /* Sort DIR_ENTRIES[0..COUNT) by inode number so that the subsequent
     stat calls proceed in inode order.  */
  void sort_entries (void)
  {
    int entry_lower (const void *a, const void *b)
    {
      ino_t ia = ((const struct entry *) a)->inode;
      ino_t ib = ((const struct entry *) b)->inode;

      /* Return a negative, zero, or positive value, as 'qsort' expects.  */
      return (ia > ib) - (ia < ib);
    }

    qsort (dir_entries, count, sizeof (struct entry), entry_lower);
  }
#endif

  /* Stat all the entries gathered so far, relative to the .links
     directory.  */
  void stat_entries (void)
  {
    for (size_t i = 0; i < count; i++)
      {
        struct stat st;
        fstatat (dirfd (links), dir_entries[i].name, &st,
                 AT_SYMLINK_NOFOLLOW);
      }
  }

  for (struct dirent *entry = readdir (links);
       entry != NULL;
       entry = readdir (links))
    {
      assert (count < MAX_ENTRIES);
      dir_entries[count].name = strdup (entry->d_name);
      dir_entries[count].inode = entry->d_ino;
#if MODE == STAT_INTERLEAVED
      /* Stat each entry as soon as 'readdir' returns it.  */
      struct stat st;
      fstatat (dirfd (links), entry->d_name, &st, AT_SYMLINK_NOFOLLOW);
#endif

#if MODE == STAT_SEMI_INTERLEAVED
      /* Sort and stat in batches of ~100,000 entries.  COUNT is reset
         after each batch, so the final report below shows only the size
         of the last batch.  */
      if (count++ >= 100000)
        {
          sort_entries ();
          stat_entries ();
          count = 0;
        }
#else
      count++;
#endif
    }

#if MODE == STAT_SEMI_INTERLEAVED
  sort_entries ();
  stat_entries ();
#endif

  gettimeofday (&end, NULL);
  printf ("%zi dir_entries, %zi seconds"
#if MODE != STAT_OPTIMAL
          " (including stat)"
#endif
          "\n", count,
          end.tv_sec - start.tv_sec);

#if MODE == STAT_OPTIMAL
  sort_entries ();
  gettimeofday (&start, NULL);
  stat_entries ();
  gettimeofday (&end, NULL);

  printf ("stat took %zi seconds\n", end.tv_sec - start.tv_sec);
#endif

  closedir (links);

  return EXIT_SUCCESS;
}
