From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH v3 24/25] tcg: Allocate a guard page after code_gen_buffer
Date: Wed, 23 Sep 2015 15:12:35 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.2.0

On 09/23/2015 01:37 PM, Peter Maydell wrote:
> On 23 September 2015 at 13:00, Richard Henderson <address@hidden> wrote:
>> I've wondered about over-allocating on the mmap path, so that we can
>> choose the hugepage aligned subregion.  But as far as I can tell, my
>> kernel doesn't allocate hugepages at all, no matter what we do.  So it
>> seems a little silly to go so far out of the way to get an aligned buffer.
>
> This raises the converse question of "why are we bothering with
> MADV_HUGEPAGE at all?" :-)

I beg your pardon -- I was merely looking in the wrong place for the info. /proc/<pid>/smaps does show that nearly all of the area is using huge pages:

Main memory:
7fc130000000-7fc1b0000000 rw-p 00000000 00:00 0
Size:            2097152 kB
Rss:               88064 kB
Pss:               88064 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     88064 kB
Referenced:        88064 kB
Anonymous:         88064 kB
AnonHugePages:     88064 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB

code_gen_buffer:
7fc1d76e6000-7fc1f76e6000 rwxp 00000000 00:00 0
Size:             524288 kB
Rss:               58472 kB
Pss:               58472 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     58472 kB
Referenced:        58472 kB
Anonymous:         58472 kB
AnonHugePages:     57344 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
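
As an aside, here is a minimal sketch of totaling those AnonHugePages lines
across all of a process's mappings (assumes Linux; pass a pid on the command
line, or it defaults to the current process):

#include <stdio.h>

int main(int argc, char **argv)
{
    char path[64], line[256];
    unsigned long kb, total_kb = 0;

    /* Default to the current process if no pid is given. */
    snprintf(path, sizeof(path), "/proc/%s/smaps", argc > 1 ? argv[1] : "self");

    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        /* Each mapping has one "AnonHugePages: N kB" line; sum them. */
        if (sscanf(line, "AnonHugePages: %lu kB", &kb) == 1) {
            total_kb += kb;
        }
    }
    fclose(f);
    printf("AnonHugePages total: %lu kB\n", total_kb);
    return 0;
}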


r~


