From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH v3 24/25] tcg: Allocate a guard page after code_gen_buffer
Date: Wed, 23 Sep 2015 13:00:32 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0

On 09/23/2015 12:39 PM, Peter Maydell wrote:
>> +# ifdef _WIN32
> 
> Why the space before ifdef here ?

#ifdef USE_STATIC_CODE_GEN_BUFFER
# ifdef _WIN32
# else
# endif /* WIN32 */
#elif defined(_WIN32)
#else
#endif

Indenting nested preprocessor directives after the hash is something the glibc
coding style requires, and I find myself using it most of the time.

>> +static inline void do_protect(void *addr, long size, int prot)
>> +{
>> +    DWORD old_protect;
>> +    VirtualProtect(addr, size, PAGE_EXECUTE_READWRITE, &old_protect);
> 
> The 'prot' argument isn't used -- did you mean to pass it
> in as VirtualProtect argument 3 ?

Oops, yes.
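
I.e. the helper should just pass prot straight through to VirtualProtect.
Something like this (a sketch of the corrected helper only, assuming the
usual <windows.h> includes already present in the file):

static inline void do_protect(void *addr, long size, int prot)
{
    DWORD old_protect;

    /* Apply the protection the caller asked for, rather than a
       hard-coded PAGE_EXECUTE_READWRITE.  */
    VirtualProtect(addr, size, prot, &old_protect);
}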

>>  static inline void *alloc_code_gen_buffer(void)
>>  {
>>      void *buf = static_code_gen_buffer;
>> +    size_t full_size, size;
>> +
>> +    /* The size of the buffer, rounded down to end on a page boundary.  */
>> +    full_size = (((uintptr_t)buf + sizeof(static_code_gen_buffer))
>> +                 & qemu_real_host_page_mask) - (uintptr_t)buf;
>> +
>> +    /* Reserve a guard page.  */
>> +    size = full_size - qemu_real_host_page_size;
>> +
>> +    /* Honor a command-line option limiting the size of the buffer.  */
>> +    if (size > tcg_ctx.code_gen_buffer_size) {
>> +        size = (((uintptr_t)buf + tcg_ctx.code_gen_buffer_size)
>> +                & qemu_real_host_page_mask) - (uintptr_t)buf;
>> +    }
>> +    tcg_ctx.code_gen_buffer_size = size;
>> +
>>  #ifdef __mips__
>> -    if (cross_256mb(buf, tcg_ctx.code_gen_buffer_size)) {
>> -        buf = split_cross_256mb(buf, tcg_ctx.code_gen_buffer_size);
>> +    if (cross_256mb(buf, size)) {
>> +        buf = split_cross_256mb(buf, size);
>> +        size = tcg_ctx.code_gen_buffer_size;
>>      }
>>  #endif
>> -    map_exec(buf, tcg_ctx.code_gen_buffer_size);
>> +
>> +    map_exec(buf, size);
>> +    map_none(buf + size, qemu_real_host_page_size);
>> +    qemu_madvise(buf, size, QEMU_MADV_HUGEPAGE);
> 
> I think we're now doing the MADV_HUGEPAGE over "buffer size
> minus a page" rather than "buffer size". Does that mean
> we've gone from doing the madvise on a whole number of
> hugepages to doing it on something that's not a whole number
> of hugepages, and if so does the kernel decide not to use
> hugepages here?

On the whole I don't think it matters.  The static buffer isn't page aligned to
begin with, much less hugepage aligned, so the fact that we're allocating a
round number like 32 MB here doesn't really mean much.  The pages at the
beginning and/or end of the buffer definitely aren't going to be backed by
hugepages.
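
To put some (entirely made-up) numbers on it, assuming a 4 KiB host page and
2 MiB transparent hugepages -- the start address below is invented purely for
illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uintptr_t buf  = 0x60a34560;         /* link address: not even page aligned */
    size_t    decl = 32 * 1024 * 1024;   /* declared size of the static buffer */

    /* Round the end down to a page boundary, as the patch does.  */
    uintptr_t end = (buf + decl) & ~(uintptr_t)0xfff;

    size_t full_size = end - buf;
    size_t size      = full_size - 0x1000;   /* reserve the guard page */

    /* Neither value is a multiple of 2 MiB, and buf itself isn't 2 MiB
       aligned, so the madvise can't cover whole hugepages either way.  */
    printf("full_size %% 2MiB = 0x%zx\n", full_size % (2u * 1024 * 1024));
    printf("size      %% 2MiB = 0x%zx\n", size % (2u * 1024 * 1024));
    return 0;
}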

Worse, the same is true for the mmap path, since I've never seen the kernel
select a hugepage-aligned address.  You'd think that adding MAP_HUGETLB would
be akin to MADV_HUGEPAGE, with the additional hint that the address should be
suitably aligned for a hugepage, but no.  It implies forced use of pages from
the hugetlb pool, and that requires extra privileges or administrator setup.

I've wondered about over-allocating on the mmap path, so that we can choose a
hugepage-aligned subregion.  But as far as I can tell, my kernel doesn't
allocate hugepages at all, no matter what we do.  So it seems a little silly to
go so far out of the way to get an aligned buffer.
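
For the record, that over-allocation would look roughly like the following.
This is only a sketch: the 2 MiB HUGE_SIZE constant, the protection flags and
the missing error handling are placeholders, and it assumes the requested size
is a multiple of the host page size.

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define HUGE_SIZE  (2 * 1024 * 1024)   /* hypothetical hugepage size */

static void *alloc_hugepage_aligned(size_t size)
{
    /* Reserve extra space so a hugepage boundary is guaranteed to fall
       inside the mapping.  */
    size_t alloc = size + HUGE_SIZE;
    uint8_t *raw = mmap(NULL, alloc, PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (raw == MAP_FAILED) {
        return NULL;
    }

    /* Round up to the next hugepage boundary within the mapping.  */
    uint8_t *buf = (uint8_t *)(((uintptr_t)raw + HUGE_SIZE - 1)
                               & ~(uintptr_t)(HUGE_SIZE - 1));

    /* Give back the unaligned head and the unused tail.  */
    if (buf > raw) {
        munmap(raw, buf - raw);
    }
    munmap(buf + size, (raw + alloc) - (buf + size));

    return buf;
}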


r~