Subject: Re: [Qemu-ppc] [PATCH 4/4] hw/ppc/spapr: Implement the h_page_init hypercall
From: Thomas Huth
Date: Thu, 11 Feb 2016 08:53:07 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.0
On 11.02.2016 00:47, David Gibson wrote:
> On Wed, Feb 10, 2016 at 07:09:12PM +0100, Thomas Huth wrote:
>> This hypercall either initializes a page with zeros, or copies
>> another page.
>> According to LoPAPR, the i-cache of the page should also be
>> flushed when using H_ICACHE_INVALIDATE or H_ICACHE_SYNCHRONIZE,
>> but this is currently only done when running with TCG - assuming
>> the cache will be flushed with KVM anyway when switching back to
>> kernel / VM context.
>
> I don't think that's true. dcache and icache aren't usually flushed
> by kernel/user or even process context changes in Linux. Cache
> control instructions aren't privileged, so I think to get this right
> you'd need a helper which does dcbst and icbi across the page.
Oh, ok, didn't know/expect that ... I'll try to come up with a solution...
> I'm pretty sure libc needs to do this at several points, but alas I
> don't think there's an exported function to do it.
There seems to be at least a __builtin___clear_cache() function for
flushing the instruction cache (see
<https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html>).
I haven't found anything similar for the data cache yet ... in the worst
case, I guess I need to write a wrapper with some inline assembly,
similar to kvmppc_eieio() in kvm_ppc.h ... would that be acceptable?
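Such a wrapper might look roughly like the following sketch (the name flush_icache_range and the hard-coded cache-block size are illustrative assumptions, not QEMU API; a real helper would read AT_DCACHEBSIZE/AT_ICACHEBSIZE from the aux vector):

```c
#include <stdint.h>
#include <stddef.h>

/* Write dirty d-cache blocks back to memory (dcbst) and invalidate the
 * matching i-cache blocks (icbi) over [start, start + len), as the
 * LoPAPR i-cache semantics require. On non-PPC hosts, fall back to the
 * GCC builtin, which is a no-op on architectures with coherent
 * instruction caches (e.g. x86). */
static void flush_icache_range(void *start, size_t len)
{
#if defined(__powerpc__) || defined(__powerpc64__)
    const size_t bsize = 128;   /* assumed cache-block size on POWER */
    uintptr_t first = (uintptr_t)start & ~(bsize - 1);
    uintptr_t end = (uintptr_t)start + len;
    uintptr_t p;

    for (p = first; p < end; p += bsize) {
        asm volatile("dcbst 0,%0" : : "r"(p) : "memory");
    }
    asm volatile("sync" : : : "memory");
    for (p = first; p < end; p += bsize) {
        asm volatile("icbi 0,%0" : : "r"(p) : "memory");
    }
    asm volatile("isync" : : : "memory");
#else
    __builtin___clear_cache((char *)start, (char *)start + len);
#endif
}
```

The dcbst loop pushes modified data out of the d-cache, the sync makes those stores visible, icbi then discards stale i-cache blocks, and the final isync discards any instructions already prefetched.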
>> The code currently also does not explicitly flush the data cache
>> with H_ICACHE_SYNCHRONIZE, since this either also should be done
>> when switching back to kernel / VM context (with KVM), or should
>> not matter anyway (for TCG).
>>
>> Signed-off-by: Thomas Huth <address@hidden>
>> ---
>> hw/ppc/spapr_hcall.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 42 insertions(+)
>>
>> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> index f14f849..91e703d 100644
>> --- a/hw/ppc/spapr_hcall.c
>> +++ b/hw/ppc/spapr_hcall.c
>> @@ -387,6 +387,47 @@ static target_ulong h_set_xdabr(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>> return H_SUCCESS;
>> }
>>
>> +static target_ulong h_page_init(PowerPCCPU *cpu, sPAPRMachineState *spapr,
>> + target_ulong opcode, target_ulong *args)
>> +{
>> + target_ulong flags = args[0];
>> + target_ulong dest = args[1];
>> + target_ulong src = args[2];
>> + uint8_t buf[TARGET_PAGE_SIZE];
>> +
>> + if (flags & ~(H_ICACHE_SYNCHRONIZE | H_ICACHE_INVALIDATE
>> + | H_COPY_PAGE | H_ZERO_PAGE)) {
>> + qemu_log_mask(LOG_UNIMP, "h_page_init: Bad flags (" TARGET_FMT_lx "\n",
>> + flags);
>
> This should return H_PARAMETER as well as logging, surely?
Yes, that's likely better.
>> + }
>> +
>> + if (!is_ram_address(spapr, dest) || (dest & ~TARGET_PAGE_MASK) != 0) {
>> + return H_PARAMETER;
>> + }
>> +
>> + if (flags & H_COPY_PAGE) {
>> + if (!is_ram_address(spapr, src) || (src & ~TARGET_PAGE_MASK) != 0) {
>> + return H_PARAMETER;
>> + }
>> + cpu_physical_memory_read(src, buf, TARGET_PAGE_SIZE);
>> + } else if (flags & H_ZERO_PAGE) {
>> + memset(buf, 0, TARGET_PAGE_SIZE);
>> + }
>> +
>> + if (flags & (H_COPY_PAGE | H_ZERO_PAGE)) {
>> + cpu_physical_memory_write(dest, buf, TARGET_PAGE_SIZE);
>> + }
>
> Hmm, so this does 2 copies for an H_COPY_PAGE and a zero and a copy
> for H_ZERO_PAGE, which is going to be substantially slower than the
> caller might expect.
I guess I'd better use cpu_physical_memory_map and memset/memcpy.
I likely need cpu_physical_memory_map now anyway to be able to flush the
caches related to that page...
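The difference between the two approaches can be modelled in plain C (this is a standalone sketch, not the QEMU API: guest_ram and map_guest_page() stand in for guest memory and for the host pointer that cpu_physical_memory_map() would return):

```c
#include <string.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Stand-in for guest RAM; in QEMU, cpu_physical_memory_map() would
 * return a host pointer into the real guest address space instead. */
static unsigned char guest_ram[2 * PAGE_SIZE];

static void *map_guest_page(size_t gpa)   /* hypothetical helper */
{
    return &guest_ram[gpa];
}

/* H_COPY_PAGE: one memcpy straight from source page to destination
 * page, instead of read-into-bounce-buffer plus write-out (2 copies). */
static void h_page_init_copy(size_t dest, size_t src)
{
    memcpy(map_guest_page(dest), map_guest_page(src), PAGE_SIZE);
}

/* H_ZERO_PAGE: memset the destination directly, instead of zeroing a
 * stack buffer and then copying it into guest memory. */
static void h_page_init_zero(size_t dest)
{
    memset(map_guest_page(dest), 0, PAGE_SIZE);
}
```

Mapping the pages also yields the host addresses needed for a subsequent cache flush over the destination page.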
>> + if (flags & (H_ICACHE_SYNCHRONIZE | H_ICACHE_INVALIDATE)) {
>> + if (tcg_enabled()) {
>> + tb_flush(CPU(cpu));
>> + }
>> + /* XXX: Flush data cache for H_ICACHE_SYNCHRONIZE? */
>> + }
>> +
>> + return H_SUCCESS;
>> +}
Thanks for the review!
Thomas