coreutils

Re: Faster ls when there are thousands of files in a directory


From: Jim Meyering
Subject: Re: Faster ls when there are thousands of files in a directory
Date: Sat, 25 Jun 2011 14:48:34 +0200

Peng Yu wrote:
> On Sat, Jun 25, 2011 at 12:54 AM, Jim Meyering <address@hidden> wrote:
>> Peng Yu wrote:
>>> When there are a few thousands of files/directories in a directory
>>> that I want to ls, I experience long wait time (a few seconds on mac).
>>> I'm wondering if some kind of cache can be built for ls to speed it
>>> up? Note my ls is installed from macport (not the native mac ls).
>>
>> Use "ls -1U" (efficient with coreutils-7.0 or newer) or find.
>
> If I use -1U with -ltr, I see the results are still sorted. What does
> ls do internally with "-1U" for speedup?

When using *only* -1U, ls prints each directory entry name as it is read.

For a minimal-overhead ls, use only the -1 and -U options.
I.e., type exactly this:

    env ls -1U

followed by zero or more directory names.  (Running it via env
bypasses any shell alias or function named "ls" that might add
sorting or coloring options.)
Then the results will not be sorted.

>> Someday GNU ls will use fts, and then it will benefit from
>> the inode-sorting fts does for some FS types when there are
>> very many files.  Then it will be faster with additional
>> combinations of options.  But even then, it won't beat "ls -1U",
>> which doesn't call stat at all for FS with useful dirent.d_type.
>
> What is fts?

It's a module in gnulib, with several improvements over the version in
the GNU C Library.  Short answer: fts is an API for efficient traversal
of a file hierarchy.

See "man fts" for a description.


