From: Mark Cave-Ayland
Subject: Re: [Qemu-devel] [PATCH 1/4] ppc: change CPUPPCState access_type from int to uint8_t
Date: Sun, 10 Sep 2017 19:00:13 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1

On 10/09/17 17:30, Laurent Vivier wrote:

> On 10/09/2017 16:37, Mark Cave-Ayland wrote:
>> This change was suggested by Alexey in advance of a subsequent commit which
>> adds access_type into vmstate_ppc_cpu.
>>
>> Signed-off-by: Mark Cave-Ayland <address@hidden>
>> ---
>>  target/ppc/cpu.h     |    4 ++--
>>  target/ppc/machine.c |    4 +++-
>>  2 files changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
>> index 12f0949..59d1656 100644
>> --- a/target/ppc/cpu.h
>> +++ b/target/ppc/cpu.h
>> @@ -1010,8 +1010,8 @@ struct CPUPPCState {
>>      /* Next instruction pointer */
>>      target_ulong nip;
>>  
>> -    int access_type; /* when a memory exception occurs, the access
>> -                        type is stored here */
>> +    uint8_t access_type; /* when a memory exception occurs, the access
>> +                            type is stored here */
> 
> I think this breaks TCG as we have:
> 
> target/ppc/translate.c:
> 
>      82 void ppc_translate_init(void)
> ...
>     191 
>     192     cpu_access_type = tcg_global_mem_new_i32(cpu_env,
>     193                                              offsetof(CPUPPCState, access_type), "access_type");
>     194 
>     195     done_init = 1;
>     196 }
> 
> it expects an int32_t (or int).

Indeed, yes. I'm really surprised this didn't break compilation or
anything at runtime...

Having a further look, I can't see any implementations of
tcg_global_mem_new_u8() or tcg_gen_movi_u8(), so changing this isn't a
straightforward type swap.
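
To convince myself what the mismatch actually means, here's a minimal
standalone sketch (hypothetical struct, not the real CPUPPCState layout)
of how a 32-bit store over a 1-byte field can clobber neighbouring state:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical layout for illustration only - not the real CPUPPCState. */
    struct env {
        uint8_t access_type;   /* the narrowed 1-byte field...   */
        uint8_t neighbour[3];  /* ...followed by unrelated state */
    };

    int main(void)
    {
        struct env e;
        memset(&e, 0xff, sizeof(e));

        /* A global created with tcg_global_mem_new_i32() is loaded and
         * stored as a full 32-bit value at the given offset, roughly: */
        int32_t v = 1;
        memcpy((char *)&e + offsetof(struct env, access_type), &v, sizeof(v));

        /* The 32-bit store has overwritten the neighbouring bytes too. */
        printf("neighbour[0] = %#x\n", e.neighbour[0]);   /* 0, not 0xff */
        return 0;
    }

On a little-endian host the value itself still round-trips through the
32-bit global, and the damage is limited to whatever happens to sit in
the 3 bytes after access_type (possibly just struct padding), which
might explain why nothing obviously broke here.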

Alexey, do you still think this is required?
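
If it is, one rough direction (just a sketch - the mig_access_type name
is made up, and the exact hook signatures would need checking against
the current tree) might be to leave the runtime field as an int for
TCG's benefit and only narrow it in the migration stream:

    /* cpu.h: keep the runtime field an int so the existing i32 TCG
     * global keeps working; add a byte-sized copy purely for migration
     * (hypothetical name): */
    int access_type;
    uint8_t mig_access_type;

    /* machine.c: sync the copy in the existing cpu_pre_save()/
     * cpu_post_load() hooks: */
    env->mig_access_type = env->access_type;    /* in cpu_pre_save()  */
    env->access_type = env->mig_access_type;    /* in cpu_post_load() */

    /* ...and migrate just the one byte in vmstate_ppc_cpu's field list: */
    VMSTATE_UINT8(env.mig_access_type, PowerPCCPU),

(leaving aside where in the vmstate it would need to live for
compatibility).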


ATB,

Mark.


