Re: [Lzip-bug] reducing memory usage when decompressing
From: Antonio Diaz Diaz
Subject: Re: [Lzip-bug] reducing memory usage when decompressing
Date: Tue, 09 Dec 2008 20:24:46 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.7.11) Gecko/20050905
John Reiser wrote:
> Can you give a concrete example of a file which requires close to 2 times
> its size to achieve maximum compression? Proving such a bound would be welcome.
For example, our favorite file:
$ lzip -cvs32KiB COPYING > /dev/null
COPYING: 3.050:1, 2.623 bits/byte, 67.22% saved, 35068 in, 11497 out.
$ lzip -cvs64KiB COPYING > /dev/null
COPYING: 3.076:1, 2.601 bits/byte, 67.49% saved, 35068 in, 11401 out.
$ lzip -cvs32MiB COPYING > /dev/null
COPYING: 3.076:1, 2.601 bits/byte, 67.49% saved, 35068 in, 11401 out.
Beyond a 64 KiB dictionary it doesn't compress any more: COPYING is only
35068 bytes, so a larger dictionary can't find any additional matches.
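A minimal sketch of the same experiment in one command (using only the -c,
-v and -s options shown above); it tries several dictionary sizes so one can
pick the smallest dictionary that still reaches the best ratio, which should
also keep decompression memory down:

$ for size in 32KiB 64KiB 1MiB 32MiB; do lzip -cvs$size COPYING > /dev/null; done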
> In most other compressors (zlib/gzip, bzip2, lzma, ...) fewer matches
> means faster execution.
And lower compression ratios. I want lzip to compress more, not
faster. If I need a faster compressor I can use gzip or bzip2.
Best regards,
Antonio.