From: zhanghailiang
Subject: Re: [Qemu-devel] [PATCH] vl: Adjust the place of calling mlockall to speedup VM's startup
Date: Tue, 23 Sep 2014 18:19:45 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; rv:31.0) Gecko/20100101 Thunderbird/31.1.1

On 2014/9/23 16:30, Michael S. Tsirkin wrote:
On Tue, Sep 23, 2014 at 03:57:47PM +0800, zhanghailiang wrote:
If we configure mlock=on and memory policy=bind at the same time,
the system spends a lot of time handling memory, especially when
mbind is called after mlockall.

Adjust the place where mlockall is called: calling mbind before
mlockall noticeably reduces the VM's startup time.

Signed-off-by: zhanghailiang <address@hidden>
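
For illustration, the combination described above, mlock=on together with a
bind memory policy, would typically be expressed on a QEMU 2.1-era command
line roughly as follows; the memory-backend and NUMA options here are an
assumed example, not part of the patch:

     qemu-system-x86_64 -m 8G -realtime mlock=on \
         -object memory-backend-ram,id=mem0,size=8G,policy=bind,host-nodes=0 \
         -numa node,nodeid=0,memdev=mem0 ...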

The idea makes absolute sense to me:
binding after locking forces a data copy of all pages,
while binding before locking gives us a hint about where
to place the data when it is faulted in.

Acked-by: Michael S. Tsirkin <address@hidden>



Thanks for your quick review.

Best Regards,
zhanghailiang

---
Hi,

Actually, for mbind and mlockall, I ran a test of the time consumed
by the two different call orders.

The results are shown below. It is clear that calling mlockall before
mbind is considerably more time-consuming.

Besides, this patch is also fine with memory hotplug.

TEST CODE:
     if (mbind_first) {
         printf("mbind --> mlockall\n");
         /* Bind the lower half to node 0 and the upper half to node 1,
          * then lock: pages land on the right node when faulted in. */
         mbind(ptr, ram_size/2, MPOL_BIND, &node0mask, 2,
               MPOL_MF_STRICT | MPOL_MF_MOVE);
         mbind(ptr + ram_size/2, ram_size/2, MPOL_BIND, &node1mask, 2,
               MPOL_MF_STRICT | MPOL_MF_MOVE);
         mlockall(MCL_CURRENT | MCL_FUTURE);
     } else {
         printf("mlockall --> mbind\n");
         /* Lock first: mlockall() faults in and pins all pages, so the
          * later mbind() with MPOL_MF_MOVE has to migrate them. */
         mlockall(MCL_CURRENT | MCL_FUTURE);
         mbind(ptr, ram_size/2, MPOL_BIND, &node0mask, 2,
               MPOL_MF_STRICT | MPOL_MF_MOVE);
         mbind(ptr + ram_size/2, ram_size/2, MPOL_BIND, &node1mask, 2,
               MPOL_MF_STRICT | MPOL_MF_MOVE);
     }
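
For reference, a self-contained harness around the fragment above could look
roughly like the sketch below. This is a reconstruction, not the original
test_mbind source: it assumes the size in MiB and the call order come from
argv, the buffer comes from an anonymous mmap, and node0mask/node1mask
describe host NUMA nodes 0 and 1; error checking of mbind/mlockall is
omitted as in the snippet above.

     /* Reconstructed test harness.  Usage: ./test_mbind <MiB> <0|1>,
      * where 1 means "mbind first".  Build with: gcc -o test_mbind test.c -lnuma */
     #include <stdio.h>
     #include <stdlib.h>
     #include <sys/mman.h>
     #include <numaif.h>                /* mbind(), MPOL_* (libnuma headers) */

     int main(int argc, char **argv)
     {
         unsigned long node0mask = 1UL << 0;    /* assumed host node 0 */
         unsigned long node1mask = 1UL << 1;    /* assumed host node 1 */
         size_t ram_size;
         char *ptr;
         int mbind_first;

         if (argc < 3) {
             fprintf(stderr, "usage: %s <size-in-MiB> <mbind-first:0|1>\n", argv[0]);
             return 1;
         }
         ram_size = strtoul(argv[1], NULL, 0) * 1024UL * 1024UL;
         mbind_first = atoi(argv[2]);
         printf("memory size %zu\n", ram_size);

         ptr = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
         if (ptr == MAP_FAILED) {
             perror("mmap");
             return 1;
         }

         /* Return values of mbind/mlockall ignored, as in the original snippet. */
         if (mbind_first) {
             printf("mbind --> mlockall\n");
             mbind(ptr, ram_size / 2, MPOL_BIND, &node0mask, 2,
                   MPOL_MF_STRICT | MPOL_MF_MOVE);
             mbind(ptr + ram_size / 2, ram_size / 2, MPOL_BIND, &node1mask, 2,
                   MPOL_MF_STRICT | MPOL_MF_MOVE);
             mlockall(MCL_CURRENT | MCL_FUTURE);
         } else {
             printf("mlockall --> mbind\n");
             mlockall(MCL_CURRENT | MCL_FUTURE);
             mbind(ptr, ram_size / 2, MPOL_BIND, &node0mask, 2,
                   MPOL_MF_STRICT | MPOL_MF_MOVE);
             mbind(ptr + ram_size / 2, ram_size / 2, MPOL_BIND, &node1mask, 2,
                   MPOL_MF_STRICT | MPOL_MF_MOVE);
         }
         return 0;
     }

Run it as root (or with RLIMIT_MEMLOCK raised) so that locking several
gigabytes succeeds, e.g. time ./test_mbind 10240 0 versus
time ./test_mbind 10240 1.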

RESULT 1:
#time /home/test_mbind 10240 0
memory size 10737418240
mlockall --> mbind

real    0m11.886s
user    0m0.004s
sys     0m11.865s
#time /home/test_mbind 10240 1
memory size 10737418240
mbind --> mlockall

real    0m5.334s
user    0m0.000s
sys     0m5.324s

RESULT 2:
#time /home/test_mbind 4096 0
memory size 4294967296
mlockall --> mbind

real    0m5.503s
user    0m0.000s
sys     0m5.492s
#time /home/test_mbind 4096 1
memory size 4294967296
mbind --> mlockall

real    0m2.139s
user    0m0.000s
sys     0m2.132s

Best Regards,
zhanghailiang
---
  vl.c | 11 +++++------
  1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/vl.c b/vl.c
index dc792fe..adf4770 100644
--- a/vl.c
+++ b/vl.c
@@ -134,6 +134,7 @@ const char* keyboard_layout = NULL;
  ram_addr_t ram_size;
  const char *mem_path = NULL;
  int mem_prealloc = 0; /* force preallocation of physical target memory */
+int enable_mlock = false;
  int nb_nics;
  NICInfo nd_table[MAX_NICS];
  int autostart;
@@ -1421,12 +1422,8 @@ static void smp_parse(QemuOpts *opts)

  }

-static void configure_realtime(QemuOpts *opts)
+static void realtime_init(void)
  {
-    bool enable_mlock;
-
-    enable_mlock = qemu_opt_get_bool(opts, "mlock", true);
-
      if (enable_mlock) {
          if (os_mlock() < 0) {
              fprintf(stderr, "qemu: locking memory failed\n");
@@ -3973,7 +3970,7 @@ int main(int argc, char **argv, char **envp)
                  if (!opts) {
                      exit(1);
                  }
-                configure_realtime(opts);
+                enable_mlock = qemu_opt_get_bool(opts, "mlock", true);
                  break;
              case QEMU_OPTION_msg:
                  opts = qemu_opts_parse(qemu_find_opts("msg"), optarg, 0);
@@ -4441,6 +4438,8 @@ int main(int argc, char **argv, char **envp)

      machine_class->init(current_machine);

+    realtime_init();
+
      audio_init();

      cpu_synchronize_all_post_init();
--
1.7.12.4
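
For context, the os_mlock() call used by realtime_init() above is, on POSIX
hosts, essentially a thin wrapper around mlockall(); a sketch along the
lines of QEMU's os-posix.c of that period (exact error handling may differ):

     #include <stdio.h>         /* perror */
     #include <sys/mman.h>      /* mlockall, MCL_CURRENT, MCL_FUTURE */

     /* Lock all current and future mappings of the process into RAM;
      * returns the mlockall() result (negative on failure). */
     int os_mlock(void)
     {
         int ret;

         ret = mlockall(MCL_CURRENT | MCL_FUTURE);
         if (ret < 0) {
             perror("mlockall");
         }

         return ret;
     }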

