
Re: [fluid-dev] Purpose of dither?


From: Miguel Lobo
Subject: Re: [fluid-dev] Purpose of dither?
Date: Sat, 12 May 2007 01:20:02 +0200

> No, you do not... :) Ever heard of texture synthesis? All you need
> is the statistical properties, not the signal itself, to generate
> something which looks like a cat. So, it can be done without the
> original :) Since the main features of the signal are preserved after
> truncation (otherwise it would have been useless), it should be enough
> to synthesize the 'noise'.

1) If the premise is that an algorithm can reconstruct the original signal, even in the presence of high-energy harmonics in the same frequency range as our signal, with less noise than dithering produces, then no offence, but I want to see the code.
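To show concretely what dithering buys us, here is a minimal numpy sketch (illustrative only, not FluidSynth code): quantizing a quiet sine to integer steps without dither produces an error that repeats in lockstep with the signal (i.e. harmonic distortion), whereas adding TPDF dither before quantizing turns the error into signal-independent noise.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 48000, 1000, 48000
t = np.arange(n) / fs
x = 4.0 * np.sin(2 * np.pi * f0 * t)   # quiet tone, only 4 quantization steps peak

plain = np.round(x)                    # quantize with no dither
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(x + tpdf)          # TPDF dither, then quantize

err_plain = plain - x                  # quantization error, no dither
err_dith = dithered - x                # quantization error, with dither

# The undithered error repeats every signal period (48 samples), so its
# energy piles up at harmonics of the 1 kHz tone; the dithered error
# does not repeat, so it spreads out as a broadband noise floor.
print(np.allclose(err_plain[:48], err_plain[48:96]))   # True  (distortion)
print(np.allclose(err_dith[:48], err_dith[48:96]))     # False (noise)
```

The trade is exactly the one under discussion: dither raises the noise floor slightly (total error power Δ²/4 instead of Δ²/12) but removes the signal-correlated harmonics, with no decoding step required on playback.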

2) Even if such an algorithm exists and is usable, the raw s16 audio, which is the format the FluidSynth function we're talking about is supposed to output, will still contain the huge harmonics, and will sound really poor unless the software (or hardware!) using it knows it has to apply our hypothetical harmonics-removal algorithm.  To put it another way, by adding a compulsory step before playback, we would be defining a different audio format that is no longer raw s16.

3) If one really wants to get higher-quality audio out than 16 bits allow, there is a much easier solution than inventing a new audio format: use the float output format, which FluidSynth already supports.
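The precision gap is easy to demonstrate.  A small sketch (illustrative arithmetic only, not FluidSynth's actual conversion code) comparing what survives a quiet sample's round trip through s16 versus float32:

```python
import numpy as np

sample = 1.2345e-4                 # a very quiet sample in [-1.0, 1.0)

# s16 path: scale to 16-bit, round, clip -- one step is 1/32768
s16 = int(np.clip(round(sample * 32768.0), -32768, 32767))
s16_back = s16 / 32768.0

# float path: float32 keeps ~24 significant bits relative to the sample
f32_back = float(np.float32(sample))

err_s16 = abs(s16_back - sample)   # up to half an LSB, ~1.5e-5 absolute
err_f32 = abs(f32_back - sample)   # relative error ~6e-8 of the sample
print(err_s16 > 1000 * err_f32)    # True: s16 loses far more information
```

For quiet material the float error scales down with the signal, while the s16 error stays pinned at the fixed LSB size, which is precisely why the float output sidesteps the whole dither question.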

4) By the definition of information itself, once information is lost (e.g. by truncation) there is no getting it back algorithmically.  An algorithm can't output more information (in the sense of information entropy) than it receives as input.  It can try to reconstruct what the original information may have looked like, but there can't be any guarantee that it will get the reconstruction right, because it doesn't actually *have* that information.

For example, say that there was an algorithm that, given the flat-color cat picture in the Wikipedia page, could reconstruct an approximation to the original high-color picture.  Hurrah, we say, we can save bandwidth by transmitting the smaller flat cat picture and having the receiver reconstruct the original.  But wait, what if we actually wanted to send a picture with flat colors?  What if we wanted to transmit a picture with an intermediate degree of color flatness?  The information that would have allowed us to distinguish between these cases is the information that was lost in the truncation, so it is not available to the reconstruction algorithm.
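The pigeonhole version of this argument fits in a few lines (an illustrative sketch, not anyone's proposed algorithm): two distinct originals that truncate to the same 16-bit value are indistinguishable afterwards, so any reconstruction function is wrong about at least one of them.

```python
# Two different "original" samples that truncate to the same 16-bit step:
a = 0.300007
b = 0.300013

qa = int(a * 32768)       # truncate to a 16-bit integer step
qb = int(b * 32768)
print(qa == qb, a == b)   # True False: distinct inputs, identical output

# Any reconstruction maps qa to one single value, so it can be right
# about at most one of the two originals -- the distinguishing bits
# were destroyed by the truncation itself.
recon = qa / 32768.0
print(abs(recon - a) > 0, abs(recon - b) > 0)
```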

In any case, if we actually saw some code, we would be able to have a more concrete discussion.

Regards,
Miguel
