Re: [Gnu-arch-users] Re: archzoom


From: Ludovic Courtès
Subject: Re: [Gnu-arch-users] Re: archzoom
Date: Tue, 10 Oct 2006 15:32:08 +0200
User-agent: Gnus/5.110006 (No Gnus v0.6) Emacs/21.4 (gnu/linux)

Hi,

Miles Bader <address@hidden> writes:

> address@hidden (Ludovic Courtès) writes:
>> what they think goes wrong (it is true that `tla' is not lightning fast,
>> especially without a revlib).
>
> More accurately, tla is dog-slow and consumes cpu/disk-io like crazy for
> most operations (it's slightly better about network-io in terms of bytes
> transferred, but goes to town with the worst latency ever). I still use
> tla, mind you, but my number one complaint is its insane inefficiency;
> maybe darcs is slower, I dunno.

I think you mentioned before that, in your view, this inefficiency is
more an implementation issue than a design issue; is that correct?

I did the following experiment:

  $ strace -o ,,s -e stat,stat64,open tla changes
  [...]
  $ wc -l ,,s 
  7881 ,,s

  $ tla inventory --source |wc -l
  1038
  $ tla inventory --all |wc -l
  2794
  $ find . -name \* |wc -l
  6433

(To be fair, the revision in question was already in the revlib;
otherwise, the number of `open ()' calls made by `tla changes' amounts
to ~17000, since it has to feed my greedy library.)
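
For what it's worth, one can also count how many *distinct* paths those
calls touch (this assumes the trace lines keep strace's usual
`call("path", ...)' shape, so that the path is the first quoted field):

  $ awk -F'"' 'NF > 1 { print $2 }' ,,s | sort -u | wc -l

If that count is close to the 7881 above, each path is basically visited
once, which again suggests the syscall count reflects the tree walk
itself rather than redundant work in the implementation.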

In the end, it looks like there is not *so* much I/O inefficiency due to
the implementation itself.  The inventory mechanism implies that every
file in the tree must be scanned, and the ID-tagging mechanism (I'm
using `tagline' here) implies that all the `.arch-ids' directories plus
all the source files must be read as well (roughly).  Although more
flexible, Arch's ID-tagging mechanism probably yields more I/O than
"manifests".  Thus, it looks like the high disk I/O consumption may be
due to the design rather than the implementation.
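
To put a rough number on that difference: with taglines, `tla changes'
has to open every source file (to look for its `arch-tag:' line) plus
every `.arch-ids' directory, whereas a manifest-style scheme could in
principle get the same information from a single file.  Here is a crude
model of the tagline pass, assuming a small read per file is enough to
find the tag (tla may well read more in practice):

  $ tla inventory --source |
      while read f; do head -c 1024 "$f" > /dev/null 2>&1; done

That is on the order of one `open ()' per source file (i.e. the ~1000
files above) before `tla changes' even starts diffing anything.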

Now, it may be that the real performance bottleneck is CPU consumption
rather than disk I/O; I don't know.
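
One crude way to tell would be to compare wall-clock time with CPU time
(with a hot cache, as in my experiment above; a cold cache changes the
picture completely):

  $ time tla changes

If `user' + `sys' accounts for most of `real', tla is mostly CPU-bound;
if `real' is much larger, it is mostly waiting on the disk.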

Thanks,
Ludovic.