lilypond-devel

Re: partcombine, but including rests from quiet voices?


From: Reinhold Kainhofer
Subject: Re: partcombine, but including rests from quiet voices?
Date: Tue, 18 Dec 2007 14:31:33 +0100
User-agent: KMail/1.9.7

On Monday, 17 December 2007, Han-Wen Nienhuys wrote:
> 2007/12/15, Reinhold Kainhofer <address@hidden>:
> > On the other hand, I found the following patch on the mailing list:
> > http://lists.gnu.org/archive/html/lilypond-devel/2005-07/msg00046.html
> > http://lists.gnu.org/archive/html/lilypond-devel/2005-07/msg00050.html
> >
> > I haven't looked closely at the patch, but judging from the description,
> > it does exactly what I had planned to do in determine-split-list. Does
> > anyone know why these patches never got applied? Are there any problems
> > with them? Or was there simply not enough interest?
>
> IIRC the patch was supposed to fix a few small things, but was quite
> huge. Requests for clarifications never materialized.

Well, "small things"... It appeared to fix *the* problem that I have with 
the partcombiner, namely that even a single note (where the other 
instrument has a rest) is detected as a solo...

> The part combiner was written before Erik rewrote and cleaned the part
> which reports events from music expressions to contexts (where the
> events are transformed into graphic objects).
> Before, the Music_iterators would directly access the Contexts.  The
> streams layer was inserted in between the two, so you can siphon off
> the information into a file, or into a data structure, e.g. for doing
> the part combine analysis. If you're interested, check out Erik's
> thesis which is on the website under the devel section.

Hmm, that's the dilemma I'm in: I'm currently writing the full orchestra 
material for our performance in February, so I'm on quite a tight schedule 
and can't rewrite something like the partcombiner from scratch and finish 
it in time... The other thing is that I'm already doing way too many 
things (day job at university, three choirs, law studies, American football 
referee, programming, etc.), so right now I'm trying to quit things rather 
than take up new challenges (as interesting as they might sound).
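
That said, just to make sure I understand the streams idea, here is a 
minimal Python sketch of that "siphoning off" step (purely illustrative, 
nothing to do with LilyPond's actual C++/Scheme classes; all the names are 
made up): a listener files each note/rest event under its moment, so that 
a later analysis pass can look at both voices side by side.

from collections import defaultdict
from dataclasses import dataclass
from fractions import Fraction

# Hypothetical stand-in for a stream event; the real event objects carry
# much more (pitch, articulations, dynamics, ...).
@dataclass
class Event:
    voice: str        # e.g. "one" or "two"
    moment: Fraction  # starting time, measured in whole notes
    kind: str         # "note" or "rest"
    pitch: int = 0    # rough pitch number, ignored for rests

class MomentTap:
    """Listens to the events of both voices and groups them by moment."""
    def __init__(self):
        self.by_moment = defaultdict(lambda: {"one": [], "two": []})

    def listen(self, ev):
        self.by_moment[ev.moment][ev.voice].append(ev)

    def moments(self):
        return sorted(self.by_moment)

# Feed the tap whatever the iterators report, then hand it to the analysis.
tap = MomentTap()
tap.listen(Event("one", Fraction(0), "note", 60))
tap.listen(Event("two", Fraction(0), "rest"))
for m in tap.moments():
    print(m, tap.by_moment[m])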

> You would iterate in a couple of different passes over the data,
> populating state (dynamic level, current tuning, articulation, which
> notes are playing, etc.) for each moment. With that information all
> present, it should be easy to find sections which are eligible for
> chord notation, and create a split-list based on that.
>
> It might also be possible to have time-administration at this level,
> so you can actually tell where the measure boundaries are.

Yeah, that's definitely a better solution than any hack based on the current 
version...
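
To make that concrete for myself, here is a second, equally illustrative 
Python sketch of the analysis pass (not LilyPond's actual 
determine-split-list; the per-moment state and the length threshold are 
invented). Each moment is classified from the collected state, and a solo 
is only kept when it lasts long enough, which would also take care of my 
one-note-solo annoyance:

# `state` maps each moment to a pair of booleans saying which of the two
# voices actually plays a note there (a made-up structure, nothing more).

def classify(one_plays, two_plays):
    """Decide what to do at a single moment."""
    if one_plays and two_plays:
        return "chords"   # candidate for printing both voices together
    if one_plays:
        return "solo1"
    if two_plays:
        return "solo2"
    return "apart"        # both rest: keep each voice's own rests

def split_list(state, min_solo_len=3):
    """Return (moment, decision) pairs; solo stretches shorter than
    min_solo_len moments are demoted to "apart", so a single note against
    a rest does not get a spurious Solo marking."""
    moments = sorted(state)
    raw = [classify(*state[m]) for m in moments]
    out, i = [], 0
    while i < len(raw):
        j = i
        while j < len(raw) and raw[j] == raw[i]:
            j += 1                      # extend the run of equal decisions
        decision = raw[i]
        if decision.startswith("solo") and j - i < min_solo_len:
            decision = "apart"          # too short to be worth a Solo
        out += [(moments[k], decision) for k in range(i, j)]
        i = j
    return out

# Tiny example: voice one has a lone note against a rest at moment 1;
# with the length threshold it stays "apart" instead of becoming a solo.
state = {0: (True, True), 1: (True, False), 2: (True, True), 3: (False, False)}
print(split_list(state))

Once the time administration mentioned above is available, measure 
boundaries would of course be a much more musical unit for that threshold 
than a fixed number of moments.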

Cheers,
Reinhold



-- 
------------------------------------------------------------------
Reinhold Kainhofer, Vienna University of Technology, Austria
email: address@hidden, http://reinhold.kainhofer.com/
 * Financial and Actuarial Mathematics, TU Wien, http://www.fam.tuwien.ac.at/
 * K Desktop Environment, http://www.kde.org, KOrganizer maintainer
 * Chorvereinigung "Jung-Wien", http://www.jung-wien.at/



