
Re: [Qemu-devel] handling emulation fine-grained memory protection


From: Richard Henderson
Subject: Re: [Qemu-devel] handling emulation fine-grained memory protection
Date: Mon, 3 Jul 2017 09:07:32 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1

On 07/03/2017 03:04 AM, Peter Maydell wrote:
> For the ARM v7M microcontrollers we currently treat their memory
> protection unit like a funny kind of MMU that only has a 1:1
> address mapping. This basically works but it means that we can
> only support protection regions which are a multiple of 1K in
> size and on a 1K address boundary (because that's what we define
> as the "page size" for it). The real hardware lets you define
> protection regions on a granularity down to 64 bytes (both size
> and address).
>
> So far we've got away with this, but I think only because the
> payloads we've tested haven't really used the MPU much or at all.
> With v8M I expect the MPU (and its secure/non-secure cousin the
> Security Attribution Unit) to be much more heavily used, so it
> would be nice if we could lift this limitation somehow.
>
> Does anybody have any good ideas for how this ought to be done?
> We could wind down the "page size" for these CPUs (since we
> now have runtime-configurable-page-size for ARM CPUs this
> shouldn't compromise the A profile cores which can stick to
> 1K or 4K pages) but I don't think we can get down as low as
> 64 bytes due to all the things we keep in the low bits of
> TLB entries.

It's close... We need 3 bits that do not overlap any requested alignment.

Does the v7M profile have 8-byte aligned operations? I see that STREXD is out, and I can't think of anything else. So bits 8, 16, 32 are up for grabs, which does fit a 64-byte page minimum.
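To make the arithmetic concrete, here is a small stand-alone sketch (not QEMU code; the bit positions and names are illustrative) of the constraint: the flag bits have to sit below the page-offset width and clear of any alignment a guest access can request.

    #include <stdio.h>

    int main(void)
    {
        unsigned page_bits  = 6;   /* 64-byte minimum page size */
        unsigned align_bits = 3;   /* worst case: an 8-byte-aligned access
                                      would use offset bits 0..2; v7M without
                                      STREXD never actually requests more
                                      than 4-byte alignment */

        /* Flag bits must avoid the alignment bits yet still fit inside
           the page offset, i.e. lie in [align_bits, page_bits). */
        for (unsigned bit = align_bits; bit < page_bits; bit++) {
            printf("flag value %u is free\n", 1u << bit);
        }
        /* Prints 8, 16, 32: exactly the three bits needed for the TLB
           flags, so a 64-byte page minimum just fits. */
        return 0;
    }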

That said...

> I'm guessing we'd need to have "this page has fine grained
> protection regions" imply "take the slow path" and then do
> the protection check in the slow path. Alex Graf pointed out
> to me a while back that we already have a data structure for
> handling sub-page-sized things in the slow path (the subpage
> handling in the memory system), but can we easily (or otherwise)
> use it, or would it be simpler just to have a separate thing?

I think it would be simpler to have a separate thing, since the regular subpage handling requires memory allocation.

I would just add a bit, TLB_PROT_RECHECK or so, that not only takes the slow path through the helper, but also the slow path back through tlb_fill.

Since these are defined by system registers, I can imagine there can only be a few pages for which this fine-grained handling might apply at any one time. This would certainly be preferable to reducing the effectiveness of the entire TLB by a factor of 16 (dropping the page size from 1K to 64 bytes).
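As a hypothetical sketch of that idea (TLB_PROT_RECHECK and the helpers below are illustrative names, not existing QEMU identifiers, and the types are heavily simplified compared to the real softmmu helpers):

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_PROT_RECHECK  (1u << 3)   /* illustrative bit position */

    typedef struct {
        uint32_t addr;   /* page-aligned vaddr | flag bits, as in a TLB entry */
    } TLBEntrySketch;

    /* Stand-ins for the real MPU/SAU lookup, fault and memory access paths. */
    bool mpu_sau_check(uint32_t addr, unsigned size, bool is_write);
    void raise_protection_fault(uint32_t addr, bool is_write);
    uint32_t raw_load(uint32_t addr, unsigned size);

    uint32_t slow_path_load(TLBEntrySketch *ent, uint32_t addr, unsigned size)
    {
        if (ent->addr & TLB_PROT_RECHECK) {
            /* The page overlaps a fine-grained MPU/SAU region: redo the
               protection check for this exact address and size, as a trip
               back through tlb_fill would, instead of trusting a page-sized
               TLB entry that over-approximates the 64-byte regions. */
            if (!mpu_sau_check(addr, size, false)) {
                raise_protection_fault(addr, false);
            }
            return raw_load(addr, size);
        }
        /* ... normal slow-path handling (MMIO, unaligned accesses, ...) ... */
        return raw_load(addr, size);
    }

Only entries installed for pages that overlap a fine-grained MPU/SAU region would carry the bit, so the rest of the TLB keeps its normal fast path.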


r~


