From: Joseph Rushton Wakeling
Subject: Re: Parallelizing Lilypond [was: Re: Sibelius Software UK office shuts down]
Date: Fri, 10 Aug 2012 12:41:20 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120714 Thunderbird/14.0

On 10/08/12 11:56, David Kastrup wrote:
>> Isn't it possible to break the work up into manageable smaller units
>> even in the case that it's 100 pages of continuous music?
>
> Linear programming breaks up the work into manageable smaller units.
> The units are not separate bunches of pages but rather independent
> breakpoint sequences.

... but if I understand right, that doesn't put a cap on overall memory consumption during the process? (i.e. the peak amount of memory in use at any one time?)

>> It's not just about how many cores you can use, in fact that's
>> probably a minor issue compared to:
>>
>>      -- Largest possible memory consumption and/or calculation size.  Is it
>>         capped or does it scale in an unlimited way with score size?
>
> Scales with score size.  It would be challenging to create output
> on-the-fly, namely whenever all optimum breakpoint sequences share
> common starting sequences, and it would depend on the absence of
> forward-references (like page references and stuff).
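
Just so I'm sure I'm picturing this correctly: by "breakpoint sequences" I'm imagining something along the lines of the classic dynamic program over candidate break positions, where the best cost of laying out everything up to each breakpoint is remembered, so the state grows with the length of the score rather than being bounded per page.  Here's a toy squared-slack line breaker in Python to show what I mean -- purely my own illustration with made-up bar widths, not LilyPond's actual algorithm:

# Toy model only: minimise total squared slack per line by dynamic
# programming over break positions (not LilyPond's real breaking code).

def best_breaks(widths, line_width):
    """Return break positions (bar counts) minimising total squared slack.
    widths[i] is the width of bar i; a break after bar j-1 ends a line."""
    n = len(widths)
    best = [float("inf")] * (n + 1)   # best[j]: minimal cost of bars 0..j-1
    back = [0] * (n + 1)              # back[j]: previous breakpoint chosen
    best[0] = 0.0
    for j in range(1, n + 1):
        line = 0.0
        for i in range(j - 1, -1, -1):         # last line holds bars i..j-1
            line += widths[i]
            if line > line_width:
                break                          # overfull line, stop extending
            cost = best[i] + (line_width - line) ** 2
            if cost < best[j]:
                best[j], back[j] = cost, i
    breaks, j = [], n                          # walk back pointers to recover
    while j > 0:                               # the chosen breakpoint sequence
        breaks.append(j)
        j = back[j]
    return sorted(breaks)

print(best_breaks([3, 2, 2, 4, 3, 2, 1, 3], line_width=7))

The per-breakpoint cost and back-pointer tables (and, in the real thing, everything hanging off the breakpoints) are what grows with score size, which is exactly what prompts the scheme below.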

So what do you think about the potential of an algorithm that goes something like this:

    (1) Read in enough bars of music to take up a little over 2 pages [you can
        presumably do a rough estimate of the width and height of bars and staff
        systems on the fly].

    (2) Engrave that music.  Keep the first page of the result.

    (3) If the music is completely engraved, keep the second page as well, and
        stop.  Otherwise, rewind to the start of the second page and return
        to step (1), reading and engraving from this new start point.

So basically you're doing: engrave pp. 1 & 2, keep page 1; engrave pp. 2 & 3, keep page 2; engrave pp. 3 & 4, keep page 3; ....

You could generalize this to engraving N+1 pages (N >= 1) at a time and keeping the first N pages written.

That should keep a firm cap on calculation size and memory consumption, as you'd only ever be engraving N+1 pages at a time. It would probably be slower for small scores, but would make it possible to build scores of any size with a constant memory footprint.
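
To make the loop concrete, here is a rough, self-contained sketch in Python of what I have in mind.  Bars are modelled as plain widths, a page as a fixed capacity, and every name here (fill_pages, engrave_windowed, and so on) is invented for illustration rather than taken from LilyPond:

# Hypothetical sketch: greedy page filling over a sliding window of bars.
# Real LilyPond line/page breaking is far more involved; only the control
# flow (engrave N+1 pages, commit N, rewind) is the point here.

def fill_pages(bars, start, max_pages, page_capacity=10.0):
    """Greedily pack bars into at most max_pages pages, starting at bar
    index `start`.  Each page is returned as a list of bar indices."""
    pages, current, used = [], [], 0.0
    i = start
    while i < len(bars) and len(pages) < max_pages:
        width = bars[i]
        if current and used + width > page_capacity:
            pages.append(current)              # page full, start the next one
            current, used = [], 0.0
            continue
        current.append(i)
        used += width
        i += 1
    if current and len(pages) < max_pages:
        pages.append(current)
    return pages

def engrave_windowed(bars, n_keep=1):
    """Engrave n_keep+1 pages at a time, commit the first n_keep, then
    rewind to the start of the first uncommitted page."""
    committed, start = [], 0
    while start < len(bars):
        window = fill_pages(bars, start, n_keep + 1)   # steps (1) and (2)
        if window[-1][-1] == len(bars) - 1:
            committed.extend(window)                   # step (3): music done
            break
        kept = window[:n_keep]                         # keep the first N pages
        committed.extend(kept)
        start = kept[-1][-1] + 1                       # rewind past them
    return committed

if __name__ == "__main__":
    import random
    random.seed(0)
    bars = [random.uniform(1.0, 3.0) for _ in range(200)]   # fake bar widths
    pages = engrave_windowed(bars, n_keep=1)
    print(len(pages), "pages committed,", sum(len(p) for p in pages), "bars placed")

With n_keep = 1 this is exactly the "engrave pp. 1 & 2, keep page 1" pattern above; a larger n_keep means fewer overlap pages get engraved twice, at the cost of a proportionally larger (but still constant) working set.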


