
Re: [Qemu-devel] [PATCH] Document Qemu coding style


From: David Turner
Subject: Re: [Qemu-devel] [PATCH] Document Qemu coding style
Date: Wed, 1 Apr 2009 01:28:14 +0200



On Wed, Apr 1, 2009 at 12:38 AM, malc <address@hidden> wrote:

For starters you could have asked about things you believe are subtle.

True enough, but for a long time the project was confidential and mentioning it publicly
was discouraged. I'll try to break the habit.
 

Now to the part I do not understand: what do you mean by time-based (sic)?

There's one clock involved, as far as audio is concerned, and it's the
one derived from the speed at which the host audio can consume/produce
audio; anything else just doesn't work (for the scenarios I was interested
in, anyway).

I had some problems when running QEMU on low-powered computers. If I remember
correctly, the emulation part was taking so much of the CPU that audio could very
well stutter in strange ways on some platforms. One of the reasons for it was that the
host audio output (e.g. esd) stopped consuming samples at a consistent rate, which
forced SWVoiceOut buffers to fill up and delay audio production in the emulated
system even more.

In certain cases, there is also a non-trivial latency between the moment the emulated
system produces audio samples (e.g. sends them through DMA to the emulated
hardware) and the moment they are effectively sent to the host backend. This is mostly
related to playing with the audio timer tick configuration, which by default is so small
that it should not matter.

By time-based I mean a way to deal with varying latencies in both audio
production and consumption. I agree it's a hard problem and not strictly required for
QEMU. It's just that I spent some time trying to understand how the audio subsystem
dealt with the problem, only to find that it didn't.
 

As for the question raised in AUDIO.txt of why everything just calls
audio_pcm_sw_write: the reason, if my memory serves (and it might well
not), is that any given driver can, in theory, use the respective audio
subsystem's or library's own, less naive, st_rate_flow equivalent.

Interesting, thanks for the explanation, this makes it clear.
 
P.S. The last/only time I looked at Android, I couldn't help but notice
    that, among other things, capture was implemented for coreaudio;
    it would have been nice, if for nothing else but the sake of
    completeness, to have that in mainline QEMU too.

Actually, the audio-related changes performed are:

- adding audio input to the CoreAudio backend

- adding dynamic linking support to the esd and alsa backends
  (using dlopen/dlsym allows the emulator to run on platforms where
  the corresponding libraries or sound server are not available).

- rewriting the sdl backend's run_out() method. For some reason the old one tended
  to lock up QEMU in certain weird cases I could never completely understand.

- modifying the sub-system to be able to use different backends for audio input
  and output.

- adding a new "winaudio.c" backend for Windows that uses the Wave functions instead
  of DirectX (mainly to be able to build on old versions of MinGW that didn't provide
  DirectX-compatible headers; I'm not sure it's still needed these days, but at least
  the code is 10x simpler than the dxaudio.c one, talk about a complex sound API).

I plan to submit patches for all of these upstream. Sorry this hasn't been done yet.


--
mailto:address@hidden



