Re: uniq Bug
From: Bob Proulx
Subject: Re: uniq Bug
Date: Tue, 27 Jun 2006 19:11:59 -0600
User-agent: Mutt/1.5.9i
Eric Blake wrote:
> > Not sure if this has already been discovered, but I found a problem with
> > uniq. If I sat down and looked at the code, I could probably see how to
> > fix it. It seems to always occur with very large unsorted streams (files).
> >
> > Below are the commands I ran to exploit the bug (which I originally
> > thought was my error). Sorting the stream before removing duplicate
> > lines is inconsistent with just removing duplicate lines:
>
> Thanks for the report. However, uniq only works on sorted streams.
> By definition, uniq only looks at consecutive lines, to see if they
> are identical.
Strictly speaking, the input does not need to be sorted. You are right
that uniq only works on adjacent lines. Here is what the manual has to
say about it:
info coreutils uniq
The input need not be sorted, but repeated input lines are detected
only if they are adjacent. If you want to discard non-adjacent
duplicate lines, perhaps you want to use `sort -u'.
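A small demonstration of the adjacency behavior (the sample lines here
are just illustrative input, not anything from the original report):

```shell
# uniq only collapses *adjacent* repeats, so on unsorted input a
# non-adjacent duplicate survives:
printf 'a\nb\na\n' | uniq          # prints: a b a

# Sorting first makes the repeats adjacent, so they collapse:
printf 'a\nb\na\n' | sort | uniq   # prints: a b

# Equivalently, in one step:
printf 'a\nb\na\n' | sort -u       # prints: a b
```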
> If the file is not sorted, then the same line might appear twice. And
> changing this would slow uniq down (either requiring more
> memory or more time to keep a list of all previously seen unique lines),
> not to mention violating POSIX.
It is left up to the user to sort the file, or not, as desired.
Here is what the standards say:
http://www.opengroup.org/onlinepubs/009695399/utilities/uniq.html
Repeated lines in the input shall not be detected if they are not
adjacent.
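For what it's worth, a tool that removes non-adjacent duplicates while
preserving input order does have to remember every distinct line seen,
which is exactly the memory cost mentioned above. The classic awk idiom
illustrates the trade-off (the input here is only an example):

```shell
# Print each line only the first time it appears, preserving order.
# The seen[] array holds every distinct line, so memory grows with
# the number of unique lines -- unlike uniq, which remembers only one.
printf 'a\nb\na\nc\nb\n' | awk '!seen[$0]++'   # prints: a b c
```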
I enjoyed the dry humor of this part:
APPLICATION USAGE
The sort utility can be used to cause repeated lines to be
adjacent in the input file.
:-)
Bob