lilypond-devel

Re: make doc problem


From: David Kastrup
Subject: Re: make doc problem
Date: Fri, 27 Jan 2012 10:05:04 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.0.92 (gnu/linux)

Reinhold Kainhofer <address@hidden> writes:

> On 2012-01-27 00:00, Julien Rioux wrote:
>> On 26/01/2012 11:13 AM, Reinhold Kainhofer wrote:
>>> On 22/01/2012 20:58, Julien Rioux wrote:
>>>> Thanks, you're quite right that CPU is not the limiting factor for
>>>> the build. Disk access and swap usage when compiling
>>>> input/regression/collated-files slow the build to a crawl for me.
>>>
>>> The problem here is that lilypond builds up memory from 400MB to ~1GB
>>> without releasing...
>>> Most of these allocations don't seem to be memory leaks, but rather due
>>> to guile.
>>>
>>> Cheers,
>>> Reinhold
>>>
>>
>> Is it a bug? We're talking about lilypond running with the
>> -dread-input-files flag here. Once a snippet has been processed and
>> lilypond moves on to the next one, there is no reason to hold onto
>> the memory used by the previous snippet, right?
>>
>
> Please check the -devel mailing list (e.g. thread "Memleaks or not"
> last August/September), where I already observed this. I fully agree
> that after one file is processed, lilypond should reset to its initial
> state and not need more memory than before.
>
> I have no idea why the memory is going up like it does. To me it
> doesn't look like a classical memory leak, but rather something to do
> with the Guile garbage collection...

As far as I can see, every music event tends to contain an "origin" of
type Input, and every Input keeps a SourceFile alive, and every
SourceFile keeps a string port to the whole file.

Any such object that is not garbage-collected keeps the whole chain, and
thus the file's contents, alive.  But let's assume this gets collected
nicely.  Even then we have the problem that each source file is
allocated as _one_ contiguous chunk.  If the memory allocator does not
have a contiguous piece of memory of that size available, it will have
to allocate a new one.

And _if_ the memory allocator has a contiguous piece of memory of that
size available and somebody asks for small pieces of memory, then it has
no business _not_ allocating from the large piece of memory.  And the
next time a large piece of memory is asked for, there is none available.

It might be worth checking whether everything is working as well as one
could hope, but there is a non-zero danger that any fix that gives good
results will depend to a good degree on the operating system as well.

-- 
David Kastrup



