Re: [Qemu-devel] [4249] Improve audio api use in WM8750.


From: andrzej zaborowski
Subject: Re: [Qemu-devel] [4249] Improve audio api use in WM8750.
Date: Fri, 25 Apr 2008 03:41:06 +0200

On 25/04/2008, Jan Kiszka <address@hidden> wrote:
> OK, it's late... The real issue here is that the wm8750's internal cache
>  correlates with the MusicPal guest's internal buffering threshold - it
>  happens to be 4K as well. Thus, the line above just pushes the problem
>  to guests that play with 2K buffers. And this demonstrates nicely that
>  the current caching is fragile, only suitable for a subset of guests.
>
>  Back to square #1, only cache what piles up during controllable periods:
>  inside the callback. In my case, I _depend_ on flushing after the
>  callback, because this is where data gets transferred to the Wolfson, and
>  it gets transferred in larger chunks as well. Thus, flushing later
>  easily causes buffer underruns.
>
>  And for those scenarios where data arrives asynchronously in smaller
>  chunks between the callbacks, we may also flush before entering the
>  subordinated callback.
>
>  But, frankly, how many cycles does all this caching actually save us? Did
>  you measure it? I doubt it is relevant.
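
In code, the discipline described above might look roughly like this
(wm8750_flush and the callback layout are illustrative guesses, not the
actual wm8750 code):

static void codec_out_callback(void *opaque, int free)
{
    WM8750State *s = opaque;

    /* Flush whatever arrived asynchronously since the last period. */
    wm8750_flush(s);

    /* Let the guest refill its buffers; its writes pile up in the
     * FIFO only during this controllable period. */
    if (s->data_req) {
        s->data_req(s->opaque, free >> 1, 0);
    }

    /* Flush right after the callback: MusicPal has just transferred a
     * larger chunk here, so deferring would risk underruns. */
    wm8750_flush(s);
}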

It's not really caching but rather trying to emulate the hardware's
FIFOs so that we get the same behavior.  But I see the problem: the
FIFO on the wm8750 side was used to avoid buffering data more than once
for the Spitz and the Neo1973 machines. But they use an I2S interface
which is totally different from that of the MusicPal (they both have a hw
register through which all samples have to go, rather than a kind of
DMA).  What we need is an API to let the CPU explicitly flush the
data, because in the model with DMA, only the CPU knows when it's a good
moment to do that (e.g. at the end of audio_callback in hw/musicpal.c).
I'll try to come up with something like that.  An arbitrary threshold
in wm8750 won't work for all machines.
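
As a rough sketch (wm8750_dac_commit and the struct fields here are
assumptions for illustration, not a finished implementation), it could
look like:

/* hw/wm8750.c: queue a sample; no threshold check here, the machine
 * model decides when to flush. */
void wm8750_dac_dat(void *opaque, uint32_t sample)
{
    WM8750State *s = opaque;

    memcpy(s->data_in + s->idx_in, &sample, sizeof(sample));
    s->idx_in += sizeof(sample);
}

/* New entry point: the machine model flushes explicitly. */
void wm8750_dac_commit(void *opaque)
{
    WM8750State *s = opaque;

    AUD_write(s->dac_voice[0], s->data_in, s->idx_in);
    s->idx_in = 0;
}

/* hw/musicpal.c: flush at the end of audio_callback, once the
 * guest-visible DMA transfer for this period is complete. */
static void audio_callback(void *opaque, int free_out, int free_in)
{
    musicpal_audio_state *s = opaque;

    /* ... feed the DMA buffer to the codec via wm8750_dac_dat() ... */

    wm8750_dac_commit(s->wm);
}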

Regards
