
Re: [Qemu-devel] [RESEND PATCH v4 6/6] acpi: Add hardware implementation for memory hot unplug


From: Zhu Guihua
Subject: Re: [Qemu-devel] [RESEND PATCH v4 6/6] acpi: Add hardware implementation for memory hot unplug
Date: Wed, 25 Mar 2015 14:13:23 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.4.0


On 03/24/2015 06:26 PM, Igor Mammedov wrote:
On Tue, 24 Mar 2015 17:34:29 +0800
Zhu Guihua <address@hidden> wrote:

On 03/23/2015 08:47 PM, Igor Mammedov wrote:
On Mon, 23 Mar 2015 18:59:28 +0800
Zhu Guihua <address@hidden> wrote:

On 03/16/2015 10:59 PM, Igor Mammedov wrote:
[...]
diff --git a/hw/i386/acpi-dsdt-mem-hotplug.dsl b/hw/i386/acpi-dsdt-mem-hotplug.dsl
index 1e9ec39..ef847e2 100644
--- a/hw/i386/acpi-dsdt-mem-hotplug.dsl
+++ b/hw/i386/acpi-dsdt-mem-hotplug.dsl
@@ -29,6 +29,7 @@
               External(MEMORY_SLOT_PROXIMITY, FieldUnitObj) // read only
               External(MEMORY_SLOT_ENABLED, FieldUnitObj) // 1 if enabled, read only
               External(MEMORY_SLOT_INSERT_EVENT, FieldUnitObj) // (read) 1 if has a insert event. (write) 1 to clear event
+            External(MEMORY_SLOT_REMOVE_EVENT, FieldUnitObj) // (read) 1 if has a remove event. (write) 1 to clear event
               External(MEMORY_SLOT_SLECTOR, FieldUnitObj) // DIMM selector, write only
               External(MEMORY_SLOT_OST_EVENT, FieldUnitObj) // _OST event code, write only
               External(MEMORY_SLOT_OST_STATUS, FieldUnitObj) // _OST status code, write only
@@ -55,6 +56,8 @@
                        If (LEqual(MEMORY_SLOT_INSERT_EVENT, One)) { // Memory device needs check
                            MEMORY_SLOT_NOTIFY_METHOD(Local0, 1)
                            Store(1, MEMORY_SLOT_INSERT_EVENT)
+                    } Elseif (LEqual(MEMORY_SLOT_REMOVE_EVENT, One)) { // Ejection request
+                        MEMORY_SLOT_NOTIFY_METHOD(Local0, 3)
clear removing field here.
You mean clear remove event here?
yes
I tested this method; clearing the remove event here leads to a guest
kernel panic.
it shouldn't cause a panic if it only clears the flag in QEMU
(that's what it should do).


                        }
                        // TODO: handle memory eject request
                        Add(Local0, One, Local0) // goto next DIMM
@@ -156,5 +159,12 @@
                    Store(Arg2, MEMORY_SLOT_OST_STATUS)
                    Release(MEMORY_SLOT_LOCK)
                }
+
+            Method(MEMORY_SLOT_EJECT_METHOD, 2) {
+                Acquire(MEMORY_SLOT_LOCK, 0xFFFF)
+                Store(ToInteger(Arg0), MEMORY_SLOT_SLECTOR) // select DIMM
+                Store(One, MEMORY_SLOT_REMOVE_EVENT)
Redo it using the enabled field. Otherwise it could cause a guest/QEMU crash:

- if the 1st memory device was asked to be removed
- then OSPM processes it but has not called _EJ0 yet, leaving is_removed set
- then QEMU adds/removes another DIMM, triggering a slot scan,
      which would issue a second Notify(remove) since is_removed is still set
- as a result it will cause a failure in OSPM, or in QEMU if OSPM issues a double _EJ0()

If OSPM has processed the ejection request but not yet called _EJ0, the device
will still be present in QEMU;
we must handle this.
There is nothing to handle in this case: if OSPM hasn't called _EJ0, then
nothing happens and the device stays in QEMU.

So I think it may be reasonable for OSPM to issue _EJ0 twice
in this situation.
What's your opinion?
The first _EJ0 must do the ejection; as for the second, I think it should be a NOP.
So should we check the enabled field to see whether the device is
present before
issuing Notify(remove)?
I wouldn't check whether the device is present.
I'd unconditionally clear it and make sure on the QEMU side that
the operation is a NOP if the device is not present.

I'm sorry, but I have not fully understood what you mean by
'redo it using enabled field'. How should it be done?

MEMORY_SLOT_ENABLED is read only, so how can I use it to handle _EJ0?

Thanks,
Zhu




