lilypond-devel

Re: Project - Eliminating grob parents and outside-staff-priority


From: David Kastrup
Subject: Re: Project - Eliminating grob parents and outside-staff-priority
Date: Sun, 30 Sep 2012 11:39:29 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.2.50 (gnu/linux)

David Kastrup <address@hidden> writes:

> "address@hidden" <address@hidden> writes:
>
>> On 29 sept. 2012, at 19:54, address@hidden wrote:
>>
>>     On 29 sept. 2012, at 19:53, "Keith OHara" <address@hidden>
>>     wrote:
>>     
>>         On Sat, 29 Sep 2012 10:30:32 -0700, address@hidden
>>         <address@hidden> wrote:
>>         
>>                 
>>             The way you're using "tentative" is almost exactly how
>>             pure properties are used in LilyPond.
>>
>>         Specifically, 'pure-height being the estimated vertical extent
>>         before line-breaking, while 'height is its extent after
>>         line-breaking.
>>         
>>         If there are distinct properties to describe the position at
>>         different stages, then each property can be evaluated just
>>         once (as HanWen suggested, and as Mike agreed 100%).
>>
>> More thinking. I'm not enthusiastic about stages - it is a top down
>> approach that locks us into certain points of evaluation. What if we
>> decided to add or get rid of a stage? Would we need to create things
>> like unpure-pure-containers for various stages? What qualifies as a
>> stage?
>
> Dependencies, I should guess.  A "stage" is where we break circular
> dependencies.

Basically, a grob says "I want this and that piece of information for
working out my positioning" and LilyPond says "You can't get it right
now".  Then the grob says "ok, I'll do a tentative positioning", and
LilyPond will come back with more information later and ask again.
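
To make that question-and-answer pattern concrete, here is a toy sketch
in Python (this is not LilyPond's actual interface, and all the names
are made up): the grob offers a cheap tentative estimate that works
before line breaking, plus a real answer it can only give once the
breaks are known, and the layout engine simply asks twice.

# Toy sketch (not LilyPond's API): a grob exposes a cheap tentative
# estimate alongside the real answer, and the layout engine asks again
# once more information (here, the line breaks) is available.
class TentativeGrob:
    def __init__(self, pure_height, real_height_fn):
        self.pure_height = pure_height        # usable before line breaking
        self.real_height_fn = real_height_fn  # exact, needs break information

    def height(self, breaks=None):
        if breaks is None:
            # "You can't get it right now" -> fall back to the estimate.
            return self.pure_height
        return self.real_height_fn(breaks)

g = TentativeGrob(pure_height=2.0,
                  real_height_fn=lambda breaks: 2.0 + 0.1 * len(breaks))
estimate = g.height()            # tentative positioning, first pass
final = g.height(breaks=[4, 8])  # LilyPond comes back and asks again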

Now the problem is when we get oscillation, or even just a slowly
converging process.  If convergence is involved, we are better off
calculating the _relation_ between the positionings.  In a linear
optimization problem, each such relation defines a bounding plane of a
simplex as a constraint (possible solutions lie inside, impossible ones
outside, and the goal of the optimization is to get as far away from 0
as possible).  Intersecting it with the other planes/constraints gives
us the total solution space, and travelling along its outside edges
(each the intersection of two planes) moves us to the optimal solution.
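
For illustration only, here is roughly what that looks like when the
relations are handed to an off-the-shelf LP solver (the numbers are
invented, and this uses scipy rather than anything in LilyPond): each
relation becomes a half-plane constraint, and the solver walks the
edges of the resulting feasible region to the optimum instead of
guessing positions and retrying.

# Minimal LP sketch of the idea above (values invented for illustration).
from scipy.optimize import linprog

# Maximize x + y (linprog minimizes, so negate the objective).
c = [-1.0, -1.0]

# Each row is one "relation between positionings", i.e. one bounding plane:
#   x + 2y <= 14
#   3x - y <= 0
#   x - y  <= 2
A_ub = [[1, 2], [3, -1], [1, -1]]
b_ub = [14, 0, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal corner of the feasible region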

Doing this iteratively instead means jumping around on the inside of
the simplex.  Each individual jump may well be faster than determining
the active boundaries of the simplex, but the simplex method focuses on
exactly _those_ pairings of parameters/positionings which are actually
valid tradeoffs.  And since it is an efficient method, it does not get
confused when the heuristics go wrong.
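
To contrast, a purely hypothetical toy of the iterative approach (no
LilyPond code involved): two placements keep reacting to each other's
last tentative value.  Depending on how strongly they react, the
process either creeps towards a fixed point or oscillates forever,
which is exactly the failure mode the constraint formulation avoids.

def iterate(gain, steps=20):
    # Two placements react to each other's previous tentative value.
    a, b = 0.0, 0.0
    for _ in range(steps):
        a, b = 1.0 - gain * b, 1.0 - gain * a
    return a, b

print(iterate(gain=0.5))  # settles near the fixed point (2/3, 2/3)
print(iterate(gain=1.0))  # flips between (0, 0) and (1, 1) forever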

-- 
David Kastrup



