From: Dou Liyang
Subject: Re: [Qemu-devel] [PATCH 0/3] cpu: numa: Fix the mapping initialization of VCPUs and NUMA nodes
Date: Thu, 19 Jan 2017 20:17:02 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

Hi, Eduardo

At 01/19/2017 01:06 AM, Eduardo Habkost wrote:
> On Wed, Jan 18, 2017 at 09:26:36PM +0800, Dou Liyang wrote:
>> Hi, All
>>
>> **
>> ERROR:/tmp/qemu-test/src/tests/vhost-user-test.c:668:test_migrate: assertion failed:
>> (qdict_haskey(rsp, "return"))
>> GTester: last random seed: R02Sf52546c4daff8087416f43fa7c146db8
>> ftruncate: Permission denied
>> ftruncate: Permission denied
>> qemu-system-aarch64: /tmp/qemu-test/src/qom/cpu.c:346: cpu_common_map_numa_node:
>> Assertion `cpu->cpu_index < max_cpus' failed.
>> Broken pipe
>>
>> I don't know what this log means.
>>
>> Does it mean that qemu-system-aarch64 fails the assertion at
>> qom/cpu.c:346: assert(cpu->cpu_index < max_cpus); ?
>
> This means the assert() line is being triggered for some reason:
> cpu_index is >= max_cpus when cpu_common_map_numa_node()
> gets called. We need to investigate why.


I have investigated why it fails.

Not all targets (aarch64-linux-user, x86_64-linux-user, ...) compile
vl.c (which provides max_cpus) and numa.c, but all of them compile
qom/cpu.c. So when we link those targets, the vl.o or numa.o we need
may not be available, and the references cannot be resolved.
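
To make that concrete, here is a minimal sketch (illustration only, not
the real QEMU sources; the symbol names are borrowed just to mirror the
discussion, and it assumes CONFIG_NUMA is defined only for builds that
also link vl.o/numa.o):

    /* Illustration only, not QEMU code.  Assumes CONFIG_NUMA is defined
     * only for builds that also link the objects (vl.o, numa.o) defining
     * these symbols. */
    #ifdef CONFIG_NUMA
    extern int max_cpus;        /* defined in vl.c (softmmu targets only)   */
    extern int nb_numa_nodes;   /* defined in numa.c (softmmu targets only) */
    #endif

    int sketch_map_numa_node(int cpu_index)
    {
    #ifdef CONFIG_NUMA
        /* Softmmu-style build: the references resolve because vl.o and
         * numa.o are linked into the binary. */
        return (cpu_index < max_cpus) ? nb_numa_nodes : -1;
    #else
        /* linux-user-style build: no reference to the missing symbols is
         * emitted, so the link succeeds even without vl.o/numa.o. */
        (void)cpu_index;
        return -1;
    #endif
    }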

Adding "#ifdef CONFIG_NUMA" fixes it:

+static void cpu_common_map_numa_node(CPUState *cpu)
+{
+    #ifdef CONFIG_NUMA
+    int i;
+
+    assert(cpu->cpu_index < max_cpus);
+    for (i = 0; i < nb_numa_nodes; i++) {
+        if (test_bit(cpu->cpu_index, numa_info[i].node_cpu)) {
+            cpu->numa_node = i;
+            return;
+        }
+    }
+    #endif
+}
+

Also, is it necessary to resend this patch to fix the bug before
Igor's patches are completely ready?

Thanks,
        Liyang.





