Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API


From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH 1/5] Add target memory mapping API
Date: Mon, 19 Jan 2009 13:15:32 -0600
User-agent: Thunderbird 2.0.0.19 (X11/20090105)

Avi Kivity wrote:
Anthony Liguori wrote:
Paul Brook wrote:
It looks like what you're actually doing is pushing the bounce buffer allocation into the individual packet consumers.

Maybe a solution to this is a 'do IO on IOVEC' actor, with an additional flag that says whether it is acceptable to split the allocation. That way both block and packet interfaces use the same API, and we avoid a proliferation of manual bounce buffers in packet devices.

I think there may be utility in having packet devices provide the bounce buffers, in which case you could probably unify both into a single function with a flag. But why not just have two separate functions?

Those two functions can live in exec.c too. The nice thing about using map() is that it's easily overridden and chained. So here's what I'm proposing:

cpu_physical_memory_map()
cpu_physical_memory_unmap()

This should be the baseline API with the rest using it.

Yup.
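For concreteness, here is a toy model of what that baseline pair's semantics might look like: direct mapping into guest RAM where possible, a single bounce buffer as the fallback, with unmap() flushing the bounce data back for write mappings. This is an illustrative sketch, not the exec.c code; all names and sizes below are invented:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of the map/unmap pair above -- NOT the real exec.c code.
 * "Guest RAM" is a flat array; addresses inside it map directly, and
 * anything above falls back to a single bounce buffer, which unmap()
 * flushes back for write mappings. */
#define RAM_SIZE   4096
#define BOUNCE_MAX 512

static uint8_t guest_ram[RAM_SIZE];
static uint8_t mmio_backing[BOUNCE_MAX]; /* pretend-MMIO above RAM */
static uint8_t bounce_buf[BOUNCE_MAX];
static uint64_t bounce_addr;
static int bounce_in_use;

static void *phys_memory_map(uint64_t addr, uint64_t *plen, int is_write)
{
    if (addr < RAM_SIZE) {
        if (addr + *plen > RAM_SIZE) {
            *plen = RAM_SIZE - addr;   /* shorten rather than fail */
        }
        return guest_ram + addr;
    }
    if (bounce_in_use) {
        return NULL;                   /* only one outstanding bounce */
    }
    if (*plen > BOUNCE_MAX) {
        *plen = BOUNCE_MAX;
    }
    bounce_in_use = 1;
    bounce_addr = addr;
    if (!is_write) {                   /* read mapping: pre-fill for caller */
        memcpy(bounce_buf, mmio_backing + (addr - RAM_SIZE), *plen);
    }
    return bounce_buf;
}

static void phys_memory_unmap(void *buf, uint64_t len, int is_write,
                              uint64_t access_len)
{
    (void)len;
    if (buf == bounce_buf) {
        if (is_write) {                /* write mapping: flush back */
            memcpy(mmio_backing + (bounce_addr - RAM_SIZE), bounce_buf,
                   access_len);
        }
        bounce_in_use = 0;
    }
    /* direct mappings need no copy-back */
}
```

The in/out *plen is the important part of the shape: map() may return less than was asked for, and callers are expected to cope, either by looping or by bouncing.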

do_streaming_IO(map, unmap, ioworker, opaque);

Why pass map and unmap?

Because we'll eventually have:

pci_device_memory_map()
pci_device_memory_unmap()

In the simplest case, pci_device_memory_map() just calls cpu_physical_memory_map(). But it may do other things.
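That layering could be as thin as the following hypothetical sketch. Neither function exists in this form at the time of writing, and the PCIDev translation field is just a stand-in for whatever an IOMMU would do:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: the PCI-level map is a thin wrapper over the
 * physical-memory map, leaving room for a per-device translation
 * (e.g. an IOMMU) later.  All names and types are invented. */

static uint8_t ram[128];

static void *phys_map(uint64_t addr, uint64_t *plen, int is_write)
{
    (void)is_write;
    if (addr + *plen > sizeof(ram)) {
        *plen = sizeof(ram) - addr;
    }
    return ram + addr;
}

typedef struct {
    uint64_t iommu_offset;   /* stand-in for a real translation */
} PCIDev;

static void *pci_dev_map(PCIDev *dev, uint64_t addr, uint64_t *plen,
                         int is_write)
{
    /* simplest case: apply the (trivial) per-device translation and
     * forward to the CPU-level primitive */
    return phys_map(addr + dev->iommu_offset, plen, is_write);
}
```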

Grant-based devices needn't go through this at all, since you never mix grants and physical addresses, and since grants never need bouncing.

So the grant map/unmap function doesn't need to deal with calling cpu_physical_memory_map/unmap. You could still use the above API or not. It's hard to say.
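The "actor" shape under discussion -- write the chunking loop once and pass the map/unmap pair in, so the physical path, a PCI path, or a grant path can all drive the same loop -- might look roughly like this. Everything below (names, signatures, the toy backing store) is invented for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the do_streaming_IO "actor" idea.  The loop that chops a
 * request into mappable pieces is written once; which map/unmap pair
 * gets used is the caller's choice. */

typedef void *(*map_fn)(uint64_t addr, uint64_t *plen, int is_write);
typedef void (*unmap_fn)(void *buf, uint64_t len, int is_write,
                         uint64_t access_len);
typedef void (*io_worker)(void *opaque, void *buf, uint64_t len);

static void do_streaming_IO(map_fn map, unmap_fn unmap,
                            uint64_t addr, uint64_t size, int is_write,
                            io_worker worker, void *opaque)
{
    while (size > 0) {
        uint64_t len = size;
        void *p = map(addr, &len, is_write);  /* may shorten len */
        if (!p || len == 0) {
            break;                            /* out of resources */
        }
        worker(opaque, p, len);
        unmap(p, len, is_write, len);
        addr += len;
        size -= len;
    }
}

/* --- toy backing store so the sketch runs --- */
static uint8_t ram[256];

static void *toy_map(uint64_t addr, uint64_t *plen, int is_write)
{
    (void)is_write;
    if (*plen > 32) {
        *plen = 32;           /* force the loop to iterate */
    }
    return ram + addr;
}

static void toy_unmap(void *buf, uint64_t len, int is_write,
                      uint64_t access_len)
{
    (void)buf; (void)len; (void)is_write; (void)access_len;
}

static void fill_worker(void *opaque, void *buf, uint64_t len)
{
    memset(buf, *(uint8_t *)opaque, len);
}
```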

do_packet_IO(map, unmap, buffer, size, ioworker, opaque);

If you pass the buffer then the device needs to allocate large amounts of bounce memory.

If do_packet_IO took a buffer, then instead of calling alloc_buffer(size) when map() fails (i.e. you have run out of bounce memory), you would simply use that buffer. Otherwise, alloc_buffer() must be able to allocate enough memory to satisfy any request.

Since each packet device knows its maximum packet size up front, it makes sense for the device to allocate it. You could also not care and just trust that callers do the right thing.
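A sketch of that do_packet_IO variant, with the caller supplying a bounce buffer sized for its maximum packet; the helper falls back to it only when a contiguous mapping isn't available. Again, all names are invented, and the guest-memory copy helpers are stand-ins for whatever primitive would really be used:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef void *(*map_fn)(uint64_t addr, uint64_t *plen, int is_write);
typedef void (*unmap_fn)(void *buf, uint64_t len, int is_write,
                         uint64_t access_len);
typedef void (*pkt_worker)(void *opaque, void *buf, uint64_t len);

static uint8_t ram[256];

/* toy guest-memory copies backing the slow path */
static void copy_from_guest(void *dst, uint64_t addr, uint64_t size)
{
    memcpy(dst, ram + addr, size);
}
static void copy_to_guest(uint64_t addr, const void *src, uint64_t size)
{
    memcpy(ram + addr, src, size);
}

/* toy map: mappings are never longer than 32 bytes, so any larger
 * packet is forced onto the bounce path */
static void *toy_map(uint64_t addr, uint64_t *plen, int is_write)
{
    (void)is_write;
    if (*plen > 32) {
        *plen = 32;
    }
    return ram + addr;
}
static void toy_unmap(void *buf, uint64_t len, int is_write,
                      uint64_t access_len)
{
    (void)buf; (void)len; (void)is_write; (void)access_len;
}

static void do_packet_IO(map_fn map, unmap_fn unmap, void *bounce,
                         uint64_t addr, uint64_t size, int is_write,
                         pkt_worker worker, void *opaque)
{
    uint64_t len = size;
    void *p = map(addr, &len, is_write);

    if (p && len == size) {              /* fast path: one mapping */
        worker(opaque, p, size);
        unmap(p, size, is_write, size);
        return;
    }
    if (p) {
        unmap(p, len, is_write, 0);      /* short mapping: hand it back */
    }

    /* slow path: whole packet through the caller's bounce buffer */
    if (!is_write) {
        copy_from_guest(bounce, addr, size);
    }
    worker(opaque, bounce, size);
    if (is_write) {
        copy_to_guest(addr, bounce, size);
    }
}

static void fill_worker(void *opaque, void *buf, uint64_t len)
{
    memset(buf, *(uint8_t *)opaque, len);
}
```

The point of the caller-supplied buffer is visible in the slow path: no allocation happens there, so the helper can never fail for lack of bounce memory.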

Regards,

Anthony Liguori
