From: wrotycz
Subject: Re: [Lzip-bug] On Windows unpacking does not use all cores
Date: Mon, 16 Apr 2018 03:56:40 +0200
User-agent: GWP-Draft
Romano wrote:

>> Yesterday I compiled the latest lzlib 1.9 and plzip 1.7 under Cygwin
>> (also latest, installed yesterday) on Windows 7, both as 64-bit. They
>> compiled without any errors or warnings, and the tool also works fine.
>> During packing it is able to utilize all CPU cores to 100%, so
>> multi-threading works. The same goes for testing with the -t flag.
>> However, when I actually try to unpack with -d, CPU usage never even
>> peaks above 50%, despite the -n option, even if I double the number
>> to -n 8. Regarding parallelism, until now I have always used FreeArc's
>> internal 4x4:lzma, which always fully utilized my CPU, and it shows:
>> during unpacking without I/O limitation it could reach ~200 MiB/s.
>
> I don't use Windows, so I can't test your executable. (BTW, please
> don't spam the list with huge unsolicited files.) The fact that plzip
> can use all cores while testing makes me suspect some I/O
> problem/idiosyncrasy. See for example this thread on the MinGW list:
>
>> I am aware of the blocks concept as well; the tool also did not
>> utilize all CPU cores with a smaller -B block and a big enough file.
>> And I know for sure it is not my HDD limiting it, because, first, it
>> is quicker than the output and FreeArc can still utilize its maximum,
>> but also because plzip does not utilize the full CPU even when using
>> stdin/stdout.
>
> Even if decompressing from a regular file to /dev/null ?
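
For anyone wanting to reproduce the comparison, one way to take the disk
out of the picture is to time decompression with the output discarded.
A rough sketch, assuming a Cygwin shell with the freshly built plzip on
the PATH and a large hypothetical test file named bigfile (the -n 8 and
-B 8Mi values are only examples):

    # Compress with a small data size so the archive contains many members;
    # plzip can only split decompression work across members.
    plzip -k -n 8 -B 8Mi bigfile

    # Time decompression to /dev/null, taking the write path out of the measurement.
    time plzip -d -n 8 -c bigfile.lz > /dev/null

    # For comparison, -t decompresses without writing any output at all.
    time plzip -t -n 8 bigfile.lz

If the -t run keeps all cores busy while the -d run to /dev/null does
not, that would point at the output path rather than the decompression
threads, which is what the /dev/null question above is getting at.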