Re: Limiting memory used by parallel?


From: hubert depesz lubaczewski
Subject: Re: Limiting memory used by parallel?
Date: Fri, 26 Jan 2018 10:34:35 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

On Thu, Jan 25, 2018 at 11:26:24AM -0500, Joe Sapp wrote:
> Hi depesz,
> 
> On Thu, Jan 25, 2018 at 10:33 AM, hubert depesz lubaczewski
> <depesz@depesz.com> wrote:
> [snip]
> > But it looks like parallel itself is consuming a HUGE amount of
> > memory - comparable to the size of /some/directory itself.
> >
> > The server I run it on has 64GB of RAM, and the script gets killed
> > after ~3 minutes with an "Out of memory!" error.
> 
> Have you tried using --memfree?  That will stop parallel from
> spawning new processes when free memory runs low.

OK, I read the docs, and I don't get it.

Why would I want to stop parallel from spawning new processes?
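
If I'm reading the man page right, the suggested usage would be
something like this (the memory limit and the worker script are made
up for illustration):

    find /some/directory -type f | parallel --memfree 2G ./process-one.sh {}

But that only delays starting jobs when free memory is low - it
doesn't explain why parallel needs all that memory in the first place.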

The problem, in my opinion, is that parallel uses memory for some kind
of buffer when splitting data, instead of working in some small block
size - 8 kB, say, or whatever - to copy data from the input to one of
the outputs.
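
For comparison, GNU split already does this kind of round-robin
distribution in more or less constant memory. An untested sketch, with
the input path and worker command purely illustrative:

    cat /some/big/input | split -n r/4 --filter='wc -l' -

Each of the four filter processes gets every fourth line, and split
never holds more than a small buffer, no matter how big the input is.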

The final scripts do not use much RAM. It's just parallel that is
using over 10GB of RAM to split the data and feed it into multiple
pipes. This doesn't look OK to me - isn't it a bug?
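
For what it's worth, --pipe has a --block option that caps how much
input parallel reads per chunk (the default is 1M), so something like
this might bound the buffering - though I haven't verified that it
helps here (the file name is a placeholder):

    cat bigfile | parallel --pipe --block 8k wc -l

Even if it does, needing gigabytes just to split a stream into pipes
still seems wrong.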

Best regards,

depesz



