Re: [Qemu-devel] [PATCH] loader: Fix misaligned member access


From: Philippe Mathieu-Daudé
Subject: Re: [Qemu-devel] [PATCH] loader: Fix misaligned member access
Date: Mon, 23 Apr 2018 12:49:56 -0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.7.0

On 04/23/2018 12:32 PM, Peter Maydell wrote:
> On 23 April 2018 at 15:26, Philippe Mathieu-Daudé <address@hidden> wrote:
>> On 04/23/2018 11:04 AM, Peter Maydell wrote:
>>> On 23 April 2018 at 14:57, Philippe Mathieu-Daudé <address@hidden> wrote:
>>>> On 04/23/2018 12:16 AM, David Gibson wrote:
>>>>> On Sun, Apr 22, 2018 at 11:41:20AM +0100, Peter Maydell wrote:
>>>>>> If we need to do an unaligned load, then ldl_p() is the
>>>>>> right way to do it. (We could also just do
>>>>>>  *addr = ldl_be_p(prop) but we maybe don't want to
>>>>>> bake in knowledge that FDT is big-endian).
>>>>
>>>> Since it is, ldl_be_p() seems the clever/cleaner way indeed, but then we
>>>> assume we know the underlying type of fdt32_t; while using memcpy we
>>>> respect the FDT API.
>>>
>>>  *addr = fdt32_to_cpu(ldl_p(prop));
>>>
>>> is better than a raw memcpy still.
>>
>> ldl_p() is target-specific, I'd prefer loader code to be target agnostic.
>>
>> Since FDT is big-endian, are you OK if I use, as you suggested,
>>
>>     *addr = ldq_be_p(prop);
>>
>> (with a comment about FDT being BE)?
> 
> Oops, yes, forgot that ldq_p is the target-endian version.
> ldq_he_p() is the "load in host endianness" function, so
>    *addr = fdt64_to_cpu(ldq_he_p(prop));
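
For reference, the helpers being weighed here differ only in the byte order they assume: ldl_p()/ldq_p() load in the target's endianness, ldl_be_p()/ldq_be_p() always load big-endian, and ldl_he_p()/ldq_he_p() load in host endianness, all without requiring an aligned pointer; fdt32_to_cpu()/fdt64_to_cpu() are libfdt's big-endian-to-CPU conversions. Below is a minimal standalone sketch (plain C, with made-up helper names rather than QEMU's actual implementation in include/qemu/bswap.h) of why the two spellings end up equivalent for a big-endian FDT cell sitting at an arbitrary offset:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Host-endian unaligned load: memcpy avoids dereferencing a misaligned
 * pointer, which is undefined behaviour and can fault on some hosts.
 * Roughly what a helper like QEMU's ldq_he_p() provides. */
static uint64_t load_u64_host(const void *p)
{
    uint64_t v;
    memcpy(&v, p, sizeof(v));
    return v;
}

/* Big-endian unaligned load, independent of host byte order.
 * Roughly what a helper like QEMU's ldq_be_p() provides. */
static uint64_t load_u64_be(const void *p)
{
    const uint8_t *b = p;
    uint64_t v = 0;
    for (int i = 0; i < 8; i++) {
        v = (v << 8) | b[i];
    }
    return v;
}

/* Stand-in for libfdt's fdt64_to_cpu(): FDT cells are stored big-endian,
 * so converting to CPU order means reinterpreting the bytes as BE. */
static uint64_t fake_fdt64_to_cpu(uint64_t raw)
{
    uint8_t b[8];
    memcpy(b, &raw, sizeof(b));
    return load_u64_be(b);
}

int main(void)
{
    /* Payload deliberately starts at an odd offset, as a property inside
     * an FDT blob easily can. */
    uint8_t buf[9] = { 0, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0 };
    const void *prop = buf + 1;

    uint64_t a = load_u64_be(prop);                      /* ~ ldq_be_p(prop) */
    uint64_t b = fake_fdt64_to_cpu(load_u64_host(prop)); /* ~ fdt64_to_cpu(ldq_he_p(prop)) */

    printf("0x%" PRIx64 " 0x%" PRIx64 "\n", a, b);
    return a == b ? 0 : 1;
}

Both loads print the same value; the point in either spelling is that nothing ever dereferences a pointer that may be misaligned.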

I don't think I had ever noticed ldq_he_p() before; good to know.

$ git grep -E '(ld|st)._he_'
net/checksum.c:130:        stw_he_p(&tcp->th_sum, 0);
net/checksum.c:151:        stw_he_p(&udp->uh_sum, 0);
util/bufferiszero.c:47:        uint64_t t = ldq_he_p(buf);
util/bufferiszero.c:61:        t |= ldq_he_p(buf + len - 8);

Not many users...
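
Those few callers do look like exactly the cases a host-endian helper is meant for: a field inside a raw byte buffer that may not be naturally aligned, where the byte order of the value does not matter (zeroing a checksum field, testing for all-zero bytes). A tiny sketch of the store side, again with a stand-in name rather than QEMU's actual stw_he_p() from include/qemu/bswap.h:

#include <stdint.h>
#include <string.h>

/* Host-endian 16-bit store through memcpy: well defined even when the
 * destination is not 2-byte aligned, unlike an assignment through a
 * cast pointer. */
static void store_u16_host(void *dst, uint16_t v)
{
    memcpy(dst, &v, sizeof(v));
}

int main(void)
{
    uint8_t pkt[8] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };

    /* Zero a 16-bit "checksum" field that happens to start at byte 3. */
    store_u16_host(pkt + 3, 0);
    return (pkt[3] == 0 && pkt[4] == 0) ? 0 : 1;
}

On a host that tolerates misaligned accesses the difference is invisible (the compiler usually folds the memcpy back into a single store); on one that does not, it is the difference between working and faulting.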


