qemu-devel

Re: [Qemu-devel] RFC: memory API changes


From: Paolo Bonzini
Subject: Re: [Qemu-devel] RFC: memory API changes
Date: Mon, 23 Mar 2015 16:18:12 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0


On 23/03/2015 16:11, Peter Maydell wrote:
> On 23 March 2015 at 14:39, Paolo Bonzini <address@hidden> wrote:
>>
>>
>> On 23/03/2015 13:24, Peter Maydell wrote:
>>> (This is part of the work I'm doing for transaction attributes.)
>>>
>>> Currently we have several APIs used for making physical
>>> memory accesses:
>>>
>>> 1. cpu_physical_memory_rw &c
>>>
>>> 2. address_space_rw/read/write/map
>>>
>>> 3. ld/st*_phys
>>>
>>> These do more-or-less overlapping jobs and it's not
>>> obvious which should be used when. Also they need to be
>>> expanded to support transaction attributes and (in some
>>> cases) reporting of failed memory transactions. I propose:
>>>
>>>  * ld/st*_phys to be renamed to as_ld*, eg
>>>     ldub_phys -> as_ldub
>>>     ldl_be_phys -> as_ldl_be
>>>     stq_phys -> as_stq
>>>     stl_le_phys -> as_stl_le
>>
>> I think shorthand functions with no extra arguments still have a place.
> 
> The trouble is that since C doesn't do polymorphism you
> then end up with awkward names for one or the other...

True.  But since it's not a new API we can keep the old name for the
simple one.

>>  I was thinking of having them only temporarily, until we add functions
>> (e.g. pci_dma_ld or amba_ld) that deal with the MemTxResult by setting
>> some bus-specific abort bit.  However, this API would complicate the
>> case when the same core code is used for both PCI and sysbus devices.
>> Perhaps AddressSpaces can grow a callback that transforms a "bad"
>> MemTxResult to a "good" one with some side effects?
> 
> So, for PCI you can have something which sets an abort bit
> automatically, because the PCI spec mandates that kind of
> register level exposure of transaction failures. But for AMBA
> (and I guess many other buses), there's no such standardization.
> The bus standard says "your transaction might fail", but what
> the device actually does in that situation is up to the device
> (which might ignore it, go into some lockup mode until the guest
> resets it, make a note in a device-specific status register...)

Still, you have the problem of sharing code between devices that might
have different failure modes. :(  I don't really have a solution.

> For PCI, I thought the approach here was going to be that the default
> background AddressSpace handlers set the abort bit and then returned
> the "-1" or whatever result the spec says? In that case the
> ldl functions would never return a failure result.

Yes.  I'm not sure why it didn't work out, however.

>>>  * cpu_physical_memory_rw are obsolete and should be replaced
>>>    with uses of the as_* functions -- we should at least note
>>>    this in the header file. (Can't do this as an automated change
>>>    really because the correct AS to use is callsite dependent.)
>>
>> All users that should _not_ be using address_space_memory have already
>> been changed to address_space_rw, or should have been, so it can be done
>> automatically.  Same for cpu_physical_memory_map/unmap, BTW.
> 
> Hmm. Checking a few, I notice that for instance the kvm-all.c
> cpu_physical_memory_rw() should probably be using cpu->as.

Yes, that's something that I'll have to change soon, as I implement
system management mode support in x86 KVM...

> And the uses in the bitband read/write accessors in hw/arm/armv7m.c
> should also be using a CPU address space. Most uses in devices
> should really be taking a pointer to the address space to use
> as a device property...

Yes, that's what was done for PCI devices and thus their sysbus variants
too (when they exist).  But for most MMIO devices address_space_memory
is probably good enough, and changing it wholesale is not going to make
things worse than they are.

Paolo


