From: Bastian Koppelmann
Subject: Re: [Qemu-devel] [PATCH] Add another sanity check to smp_parse() function
Date: Wed, 26 Aug 2015 14:11:05 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.8.0



On 25.08.2015 at 15:25, Thomas Huth wrote:
On 19/08/15 17:58, Eduardo Habkost wrote:
On Wed, Jul 22, 2015 at 03:59:50PM +0200, Thomas Huth wrote:
The code in smp_parse already checks the topology information for
sockets * cores * threads < cpus and bails out with an error in
that case. However, it is still possible to supply a bad configuration
the other way round, e.g. with:

  qemu-system-xxx -smp 4,sockets=1,cores=4,threads=2

QEMU then still starts the guest, with a topology configuration that
is rather incomprehensible and likely not what the user wanted.
So let's add another check to refuse such wrong configurations.

Signed-off-by: Thomas Huth <address@hidden>
---
  vl.c | 8 +++++++-
  1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/vl.c b/vl.c
index 5856396..c8d24b1 100644
--- a/vl.c
+++ b/vl.c
@@ -1224,7 +1224,13 @@ static void smp_parse(QemuOpts *opts)
              exit(1);
          }
-        max_cpus = qemu_opt_get_number(opts, "maxcpus", 0);
+        max_cpus = qemu_opt_get_number(opts, "maxcpus", cpus);
+        if (sockets * cores * threads > max_cpus) {
+            fprintf(stderr, "cpu topology: error: "
+                    "sockets (%u) * cores (%u) * threads (%u) > maxcpus (%u)\n",
+                    sockets, cores, threads, max_cpus);
+            exit(1);
+        }
I am always afraid of breaking existing setups when we do that, because
there may be existing VMs running with these weird configurations, and
people won't be able to live-migrate them to a newer QEMU.

But I think we really have to start refusing to run obviously broken
configurations one day, or we will never fix this mess, so:

Reviewed-by: Eduardo Habkost <address@hidden>

I want to apply this through the x86 tree, but I would like to get some
Acked-by from other maintainers first.
Ok, thanks!

So *ping* to the other CPU core maintainers here ... could I get some
more ACKs, please?

  Thomas


Acked-by: Bastian Koppelmann <address@hidden>


