lilypond-user

Re: [OT] Vivi, the Virtual Violinist, plays LilyPond music


From: Graham Percival
Subject: Re: [OT] Vivi, the Virtual Violinist, plays LilyPond music
Date: Sat, 5 Mar 2011 02:32:13 +0000
User-agent: Mutt/1.5.20 (2009-06-14)

On Sat, Mar 05, 2011 at 12:54:31AM +0100, Janek Warchoł wrote:
> 2011/3/4 Graham Percival <address@hidden>
> > Well, playing on a $1000 (or even $1,000,000) violin is just a
> > matter of getting a recording of somebody tapping on such an
> > instrument.  Such recordings (I only need 12 milliseconds of a tap
> > noise!) _are_ available online, but I haven't yet found a
> > recording which stated that it was available under the GPL.  And
> > since sound recordings are covered by copyright, I can't just take
> > an existing one.  :(
> 
> Does this mean that Artifastring is already able to simulate violin
> sounds so perfectly that everything is a matter of "teaching" it how
> to do so?

Artifastring is a very imperfect simulation of a violin.  You can
think of it as having two stages: 1. the actions of the four
strings, and 2. the actions of the body (the "big empty part" of
the instrument).  It simulates the effect of a violin body (or
cello body) with a mathematical operation called "convolution":
the summed output samples of the strings are convolved with an
actual audio recording of somebody tapping a violin.  In
engineering terms, that tap recording is an approximation of the
"impulse response".
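
As a rough sketch (not Artifastring's actual code -- the function
name and buffer handling here are made up), that body stage boils
down to an ordinary discrete convolution:

#include <cstddef>
#include <vector>

// Direct-form convolution: out[n + k] += strings[n] * tap[k].
// A 12 ms tap at 44.1 kHz is only ~530 samples long, so this naive
// O(N*M) loop is cheap; a real implementation might use FFT-based
// convolution instead.
std::vector<float> body_resonance(const std::vector<float>& strings,
                                  const std::vector<float>& tap)
{
    std::vector<float> out(strings.size() + tap.size() - 1, 0.0f);
    for (std::size_t n = 0; n < strings.size(); ++n)
        for (std::size_t k = 0; k < tap.size(); ++k)
            out[n + k] += strings[n] * tap[k];
    return out;
}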

Because of this, switching to a different violin sound is purely a
matter of switching the "tap" recording.

> I don't have much experience with the violin, but judging by the audio
> samples I thought that it could use some improvement (independently of
> Vivi's playing skill improvement).

Yes and no.  The actions of the strings are imperfect, but the
violin used for the impulse response really is a bad instrument.
The "£100 pounds" figure actually included the bow, case, and
shipping.

Also, an impulse response generated by tapping the instrument is
not a perfect impulse response.  It's a decent approximation, but
serious acoustics researchers would use a frequency sweep or
something like that instead.
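
As a very rough illustration of that alternative (nothing here is
from Artifastring; the parameters are made up), the usual test
signal is an exponential sine sweep, which gets played through the
instrument and later deconvolved to recover a cleaner impulse
response:

#include <cmath>
#include <cstddef>
#include <vector>

// Exponential (logarithmic) sine sweep from f1 to f2 over `seconds`,
// sampled at `fs`.  The instantaneous frequency rises from f1 at t=0
// to f2 at t=seconds; deconvolving a recording of this sweep played
// through the instrument yields the impulse response.
std::vector<float> exponential_sweep(double f1, double f2,
                                     double seconds, double fs)
{
    const double pi = 3.14159265358979323846;
    const double L = seconds / std::log(f2 / f1);
    const double K = 2.0 * pi * f1 * L;
    std::vector<float> sweep(static_cast<std::size_t>(seconds * fs));
    for (std::size_t n = 0; n < sweep.size(); ++n) {
        const double t = n / fs;
        sweep[n] = static_cast<float>(std::sin(K * (std::exp(t / L) - 1.0)));
    }
    return sweep;
}

For example, exponential_sweep(20.0, 20000.0, 5.0, 44100.0) covers
the audible range in five seconds.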

But I'm not a serious acoustics researcher -- my goal is to
advance the art of automatic music performance.  There's enough
work that I can do on performing chords, vibrato, and the like,
such that I'm not hugely concerned with the instrument quality at
the moment.  And the methods for training Vivi are completely
general; if/when the violin sound changes (or even becomes a
cello or viola), all I need to do is spend 2-3 hours teaching her
how to play that instrument, by classifying audio!  In other
words, there's no programming involved in this training; I'm just
acting like a parent of a music student.

Of course, I'm hoping that when I present this at a conference,
somebody from the audience will say "wow, that's nice work, but
it's a pity that your physical model only approximates XYZ.  I
have code that does this; could we work together?  I don't mind
putting that code under the GPLv3."

So far, the only other open-source bowed-string simulation that I
know of is the 1986 Smith algorithm, implemented in the Synthesis
Toolkit in C++.  Most of the algorithms in Artifastring are based
on improvements published in the 2000s, so it's still much better
than the version in STK.
(The big open-source audio programs like Csound, ChucK, and
SuperCollider all use STK for physical modeling, which means the
25-year-old string algorithm.)

Cheers,
- Graham


