From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC v1 19/22] memory: per-AddressSpace dispatch
Date: Thu, 04 Oct 2012 19:19:44 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120911 Thunderbird/15.0.1

On 10/04/2012 07:13 PM, Blue Swirl wrote:
> On Thu, Oct 4, 2012 at 6:38 AM, Avi Kivity <address@hidden> wrote:
>> On 10/03/2012 10:24 PM, Blue Swirl wrote:
>>> >
>>> >  #else
>>> > -void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>>> > -                            int len, int is_write)
>>> > +
>>> > +void address_space_rw(AddressSpace *as, target_phys_addr_t addr, uint8_t *buf,
>>> > +                      int len, bool is_write)
>>>
>>> I'd make address_space_* use uint64_t instead of target_phys_addr_t
>>> for the address. It may actually be buggy for a 32-bit
>>> target_phys_addr_t and 64-bit DMA addresses, if such architectures
>>> exist. Maybe memory.c could be made target-independent one day.
>>
>> We can make target_phys_addr_t 64-bit unconditionally.  The fraction of
>> deployments where both host and guest are 32 bits is dropping, and I
>> doubt the performance drop is noticeable.
> 
> My line of thought was that memory.c would not be tied to physical
> addresses, but it would be more general. Then exec.c would specialize
> the API to use target_phys_addr_t. Similarly PCI would specialize it
> to pcibus_t, PIO to pio_addr_t and DMA to dma_addr_t.

The problem is that any transition across the boundaries would then
involve casts (explicit or implicit), with the constant worry of whether
we're truncating or not.  Note that we have transitions in both
directions, with the higher-layer APIs calling the memory APIs, and the
memory API calling them back via MemoryRegionOps or a new
MemoryRegionIOMMUOps.
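
For illustration, a minimal sketch of the truncation hazard, assuming
the 32-bit target_phys_addr_t / 64-bit dma_addr_t combination mentioned
above (dma_rw is a made-up wrapper, not an existing API):

  #include <stdint.h>
  #include <stdbool.h>

  typedef uint32_t target_phys_addr_t;  /* 32-bit physical addresses */
  typedef uint64_t dma_addr_t;          /* 64-bit bus/DMA addresses  */

  void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                              int len, int is_write);

  void dma_rw(dma_addr_t addr, uint8_t *buf, int len, bool is_write)
  {
      /* Implicit conversion: the high 32 bits of 'addr' are silently
       * dropped, and the compiler typically won't warn about it. */
      cpu_physical_memory_rw(addr, buf, len, is_write);
  }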

What does this flexibility buy us, compared to a single hw_addr fixed at
64 bits?
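
(A sketch of that alternative, with hw_addr as a placeholder name: one
unconditionally 64-bit type used by every layer, so no boundary
crossing can narrow an address:)

  typedef struct AddressSpace AddressSpace;  /* as in the patch */
  typedef uint64_t hw_addr;  /* single 64-bit address type for all layers */

  void address_space_rw(AddressSpace *as, hw_addr addr, uint8_t *buf,
                        int len, bool is_write);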


-- 
error compiling committee.c: too many arguments to function