better default support for parallel compression
From: Mike Frysinger
Subject: better default support for parallel compression
Date: Thu, 4 Nov 2021 01:13:08 -0400
with the rise of commodity multicore computing, tar feels a bit antiquated in
that it still defaults to single-threaded (de)compression. it feels like the
defaults could & should be more intelligent. has any thought been given to
supporting parallel (de)compression by default?
i grok that i could just run configure myself and point the various --with-X
knobs at a local parallel program. but that doesn't really help anyone other
than me, and it doesn't travel well between systems that might have a
different set of compression programs available. i also grok that i could
just pass -I (--use-compress-program) in my own invocations, but that's
clunky, and it doesn't help with tools that run tar for me, or the users i
support.
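for reference, the per-invocation workaround looks like the sketch below. gzip stands in here only so the example runs anywhere; a parallel tool such as pigz or zstd with -T0 (assuming one is installed) drops in the same way. the long form --use-compress-program is used since -I is GNU tar's short alias for it:

```shell
set -e
tmp=$(mktemp -d)
echo "hello" > "$tmp/file.txt"

# create: --use-compress-program pipes the archive stream through the named
# program.  a parallel drop-in would be e.g.:
#   tar -I pigz -cf ...        or        tar -I 'zstd -T0' -cf ...
tar -C "$tmp" --use-compress-program=gzip -cf "$tmp/archive.tar.gz" file.txt

# extract: the option is symmetric; tar invokes the program with -d to
# decompress on the way back in.
rm "$tmp/file.txt"
tar -C "$tmp" --use-compress-program=gzip -xf "$tmp/archive.tar.gz"
cat "$tmp/file.txt"   # prints: hello
```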
tar has a minor toe in the water here already:
src/buffer.c:
static struct zip_program const zip_program[] = {
  { ct_bzip2, BZIP2_PROGRAM, "-j" },
  { ct_bzip2, "lbzip2", "-j" },
but that will rarely, if ever, actually pick the lbzip2 program for people.
i also get that there's probably reluctance to make changes in such core
behavior (going from 1 core to all the cores), but it's really hard to
square this away as a reasonable default.
along those lines, communicating preferences as to how many cores the user
wants to utilize will be fun. but i don't think it's intractable.
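one existing hook worth noting here: GNU tar already reads extra options from the TAR_OPTIONS environment variable, so a per-user compressor preference can ride along without patching tar or every wrapper script. a minimal sketch, again using gzip so it runs anywhere (a real setup would name pigz or similar; note TAR_OPTIONS is split at whitespace, so the value can't embed a compressor argument like -T0):

```shell
set -e
tmp=$(mktemp -d)
echo "data" > "$tmp/f"

# GNU tar prepends the contents of TAR_OPTIONS to its command line, so a
# user-wide compressor choice can live in a shell profile, e.g.:
#   export TAR_OPTIONS='-I pigz'
export TAR_OPTIONS='--use-compress-program=gzip'
tar -C "$tmp" -cf "$tmp/a.tar.gz" f

# confirm the output really went through gzip: dump the two-byte gzip magic.
head -c 2 "$tmp/a.tar.gz" | od -An -tx1
```

this doesn't solve choosing a core count by default, but it shows the preference-plumbing problem is at least partly solved already.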
i didn't find anything in the archives, so if there's an existing thread on
the topic, feel free to point me at it.
-mike