fluid-dev

Re: [fluid-dev] How does fluidsynth do pitch shifting?


From: Josh Green
Subject: Re: [fluid-dev] How does fluidsynth do pitch shifting?
Date: Fri, 29 Jul 2005 13:14:50 -0700

On Fri, 2005-07-29 at 01:33 -0700, V K wrote:
> Hi.
> 
> I am working on simple experimental wavetable playback
> system for Linux, currently software only.
> 
> I want to accomplish pitch shifting, much like
> fluidsynth does.
> 
> I am curious, what algorithm does fluidsynth use to do
> real-time pitch shifting/pitch bending?  Will someone
> shed some light on some of these details?
> 
> My understanding is that pitch bending/pitch shifting
> requires sample rate conversions which seems to be a
> pretty complex process.
> 
> I also noticed that running fluidsynth on my 2GHz K7
> PC, the CPU utilization sometimes exceeded 75%.  What
> routines in fluidsynth consume so much CPU?
> 
> Thank you!
> Kai.
> 

I'm currently the maintainer of FluidSynth, but I don't have really
in-depth knowledge of the synthesis algorithms used (Peter Hanappe is
the original author, but doesn't have a lot of time for this project
currently).  I can point out where to look, though.  The interpolation
code is in src/fluid_dsp_float.c.  There are four selectable
interpolation algorithms: "none", "linear", "4th order" and "7th
order".  The difference is how many surrounding sample points are used
to estimate the amplitude at arbitrary time positions within the
sample.  As for pitch bending, FluidSynth processes audio in fragments
(64 samples by default, I believe).  Control changes affect the output
only per fragment, so when a pitch bend event is received it modifies
the relevant variables, which take effect during the next audio
fragment.

FluidSynth could likely be optimized further.  I haven't done any
profiling recently to figure out where the hot spots are, but I imagine
most of the time is spent in the voice synthesis code.  FluidSynth has
a polyphony setting.  It defaults to 256, which is probably rather too
high; you can set it to something like 64 (see the "settings" command).
Many sounds have very long decays, and it can be difficult to determine
whether a sound is still audible.  If it's no longer perceptible, it's
just a waste of CPU time.  Once the maximum polyphony is reached, voice
stealing occurs: the voices most likely to be silent are rapidly turned
off and replaced by the new voices.  Perhaps that info is useful to
you.  Beyond that, I'd say just have a look at the source.  A lot of
the synthesis code is in src/fluid_voice.c, in particular
fluid_voice_write(), which also ends up calling the interpolation code
(via #includes).  Cheers.
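The voice-stealing decision described above can be sketched roughly as follows (the voice_t structure and pick_steal_candidate are hypothetical names for illustration; FluidSynth's real heuristic is more involved):

```c
/* Hypothetical voice record; the fields are illustrative, not
 * FluidSynth's actual data structures. */
typedef struct {
    int active;       /* 1 while the voice is producing sound */
    int released;     /* 1 after note-off (voice is in its release phase) */
    float env_level;  /* current amplitude envelope level, 0..1 */
} voice_t;

/* Pick the voice most likely to be inaudible: prefer released voices,
 * and among those, the one with the lowest envelope level.  Returns an
 * index into the voice array, or -1 if no active voice exists. */
static int pick_steal_candidate(const voice_t *voices, int n)
{
    int best = -1;
    float best_score = 1e9f;
    for (int i = 0; i < n; i++) {
        if (!voices[i].active)
            continue;
        /* Held notes get a large penalty so released, quiet voices
         * are always stolen first. */
        float score = voices[i].env_level
                    + (voices[i].released ? 0.0f : 10.0f);
        if (score < best_score) {
            best_score = score;
            best = i;
        }
    }
    return best;
}
```

The stolen voice would then be faded out over a few samples (rather than cut instantly, which would click) and its slot reused for the new note.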
        Josh Green
