From: Andrew Jeffery
Subject: Re: [Qemu-arm] [RFC PATCH 1/1] memory: Support unaligned accesses on aligned-only models
Date: Fri, 14 Jul 2017 15:50:22 +0930

Hi Paolo,

Thanks for taking a look!

On Thu, 2017-07-13 at 14:05 +0200, Paolo Bonzini wrote:
> On 30/06/2017 05:00, Andrew Jeffery wrote:
> > This RFC patch stems from a discussion on a patch for an ADC model[1]
> > where it was pointed out that I should be able to use the .impl member of
> > MemoryRegionOps to constrain how my read() and write() callbacks were
> > invoked.
> > 
> > I tried Phil's suggested approach and found I got reads of size 4, but
> > with an address that was not 4-byte aligned.
> > 
> > Looking at the source for access_with_adjusted_size() led to the comment
> > 
> >      /* FIXME: support unaligned access? */
> > 
> > which at least suggests that the implementation isn't complete.
> > 
> > So, this patch is a quick and incomplete attempt at resolving the issue,
> > to see whether I'm on the right track or way off in the weeds.
> > 
> > I've lightly tested it with the ADC model mentioned above, and it appears
> > to do the right thing (I changed the values generated by the ADC to
> > distinguish between the lower and upper 16 bits).
> 
> I think the idea is okay.
> 
> > +    access_addr[0] = align_down(addr, access_size);
> > +    access_addr[1] = align_up(addr + size, access_size);
> > +
> > +        for (cur = access_addr[0]; cur < access_addr[1]; cur += access_size) {
> > +            uint64_t mask_bounds[2];
> > +            mask_bounds[0] = MAX(addr, cur) - cur;
> > +            mask_bounds[1] =
> > +                MIN(addr + size, align_up(cur + 1, access_size)) - cur;
> > +
> > +            access_mask = (-1ULL << mask_bounds[0] * 8) &
> > +                (-1ULL >> (64 - mask_bounds[1] * 8));
> 
> Please use MAKE_64BIT_MASK.

Okay.
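
For the respin I expect that hunk to reduce to something like the
following (untested sketch, assuming mask_bounds keeps its current
byte-offset meaning):

    access_mask = MAKE_64BIT_MASK(mask_bounds[0] * 8,
                                  (mask_bounds[1] - mask_bounds[0]) * 8);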

> 
> > +            r |= access(mr, cur, &access_value, access_size,
> > +                  (MAX(addr, cur) - addr), access_mask, attrs);
> > +
> > +            /* XXX: Can't do this hack for writes */
> > +            access_value >>= mask_bounds[0] * 8;
> > +        }
> 
> Can you subtract access_addr[0] from mask_bounds[0] and mask_bounds[1]
> (instead of cur) to remove the need for this right shift?

I haven't looked at the patch since I sent it. Given you think the idea
of the patch is okay, I'll get back to working on it and think about
this.
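
If I follow, the loop body would end up along these lines (untested
sketch; I still need to convince myself that the shift passed to the
accessor and the rebased mask then line up):

    mask_bounds[0] = MAX(addr, cur) - access_addr[0];
    mask_bounds[1] =
        MIN(addr + size, align_up(cur + 1, access_size)) - access_addr[0];

    access_mask = MAKE_64BIT_MASK(mask_bounds[0] * 8,
                                  (mask_bounds[1] - mask_bounds[0]) * 8);

    r |= access(mr, cur, &access_value, access_size,
                (MAX(addr, cur) - addr), access_mask, attrs);
    /* with the bounds rebased, the access_value >>= ... hack goes away */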

Cheers,

Andrew

> 
> Thanks,
> 
> Paolo
