Re: [Qemu-devel] [PATCH v3 6/7] scripts/dump-guest-memory.py: add vmcoreinfo


From: Laszlo Ersek
Subject: Re: [Qemu-devel] [PATCH v3 6/7] scripts/dump-guest-memory.py: add vmcoreinfo
Date: Tue, 11 Jul 2017 22:22:30 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1

On 07/11/17 12:30, Marc-André Lureau wrote:
> Add vmcoreinfo ELF note if vmcoreinfo device is ready.
> 
> To help the python script, add a little global vmcoreinfo_gdb
> structure, that is populated with vmcoreinfo_gdb_update().
> 
> Signed-off-by: Marc-André Lureau <address@hidden>
> ---
>  scripts/dump-guest-memory.py | 46 ++++++++++++++++++++++++++++++++++++++++++++
>  hw/acpi/vmcoreinfo.c         |  3 +++
>  2 files changed, 49 insertions(+)

... I've gotten a bit confused here, but I think this is what happened:
at 12:04 CEST today you commented on the "volatile thing"; at 12:30 CEST
you posted this v3 series, and I only followed up on your v2 comment at
15:25 CEST. So it's no surprise that whatever we discussed there can't
be seen in this patch.

So... IIUC our discussion there, you're going to post a v4 for this,
with a function-scoped, internal-linkage "vmcoreinfo_gdb_helper"
variable, also qualified as "volatile", and accessed with the
"function::variable" pattern from the Python script. Is that about
right?

One more comment below (actually two, but for one location):

> diff --git a/scripts/dump-guest-memory.py b/scripts/dump-guest-memory.py
> index f7c6635f15..80730658ae 100644
> --- a/scripts/dump-guest-memory.py
> +++ b/scripts/dump-guest-memory.py
> @@ -14,6 +14,7 @@ the COPYING file in the top-level directory.
>  """
>  
>  import ctypes
> +import struct
>  
>  UINTPTR_T = gdb.lookup_type("uintptr_t")
>  
> @@ -120,6 +121,22 @@ class ELF(object):
>          self.segments[0].p_filesz += ctypes.sizeof(note)
>          self.segments[0].p_memsz += ctypes.sizeof(note)
>  
> +
> +    def add_vmcoreinfo_note(self, vmcoreinfo):
> +        """Adds a vmcoreinfo note to the ELF dump."""
> +        # compute the header size, and copy that many bytes from the note
> +        header = get_arch_note(self.endianness, 0, 0)
> +        ctypes.memmove(ctypes.pointer(header),
> +                       vmcoreinfo, ctypes.sizeof(header))
> +        # now get the full note
> +        note = get_arch_note(self.endianness,
> +                             header.n_namesz - 1, header.n_descsz)
> +        ctypes.memmove(ctypes.pointer(note), vmcoreinfo, ctypes.sizeof(note))
> +
> +        self.notes.append(note)
> +        self.segments[0].p_filesz += ctypes.sizeof(note)
> +        self.segments[0].p_memsz += ctypes.sizeof(note)
> +
>      def add_segment(self, p_type, p_paddr, p_size):
>          """Adds a segment to the elf."""
>  
> @@ -505,6 +522,34 @@ shape and this command should mostly work."""
>                  cur += chunk_size
>                  left -= chunk_size
>  
> +    def phys_memory_read(self, addr, size):
> +        qemu_core = gdb.inferiors()[0]
> +        for block in self.guest_phys_blocks:
> +            if block["target_start"] <= addr < block["target_end"] \
> +               and addr + size < block["target_end"]:

Thanks for touching this up, but now I have two more new comments :)

First (and sorry about putting my request unclearly in the v2 review), I
think we need the following, and only the following, checks here:
- "addr" against block["target_start"],
- "addr + size" against block["target_end"].

Second, if you are comparing limits of the same kind (that is, inclusive
vs. inclusive, and exclusive vs. exclusive), then equality is valid and
should be accepted. Therefore,

  block["target_start"] <= addr

is correct (exact match is valid), but

  addr + size < block["target_end"]

is incorrect (too strict), because "addr + size" is an exclusive limit
-- same as block["target_end"] -- so equality should again be accepted:

  addr + size <= block["target_end"]
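
Putting the two together, the check would then read something like this
(just a sketch of the corrected condition, reusing the names from your
patch):

  if block["target_start"] <= addr and addr + size <= block["target_end"]:
      haddr = block["host_addr"] + (addr - block["target_start"])
      return qemu_core.read_memory(haddr, size)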

If you clean these up, you can add my

Acked-by: Laszlo Ersek <address@hidden>

but I would still like a real Pythonista to review this patch. Adding
Janosch.

Janosch -- can you please help review this patch?

Thanks,
Laszlo


> +                haddr = block["host_addr"] + (addr - block["target_start"])
> +                return qemu_core.read_memory(haddr, size)
> +        return None
> +
> +    def add_vmcoreinfo(self):
> +        if not gdb.parse_and_eval("vmcoreinfo_gdb_helper"):
> +            return
> +
> +        addr = gdb.parse_and_eval("vmcoreinfo_gdb_helper.vmcoreinfo_addr_le")
> +        addr = bytes([addr[i] for i in range(4)])
> +        addr = struct.unpack("<I", addr)[0]
> +
> +        mem = self.phys_memory_read(addr, 16)
> +        if not mem:
> +            return
> +        (version, addr, size) = struct.unpack("<IQI", mem)
> +        if version != 0:
> +            return
> +
> +        vmcoreinfo = self.phys_memory_read(addr, size)
> +        if vmcoreinfo:
> +            self.elf.add_vmcoreinfo_note(vmcoreinfo.tobytes())
> +
>      def invoke(self, args, from_tty):
>          """Handles command invocation from gdb."""
>  
> @@ -518,6 +563,7 @@ shape and this command should mostly work."""
>  
>          self.elf = ELF(argv[1])
>          self.guest_phys_blocks = get_guest_phys_blocks()
> +        self.add_vmcoreinfo()
>  
>          with open(argv[0], "wb") as vmcore:
>              self.dump_init(vmcore)
> diff --git a/hw/acpi/vmcoreinfo.c b/hw/acpi/vmcoreinfo.c
> index 0ea41de8d9..bfef211aad 100644
> --- a/hw/acpi/vmcoreinfo.c
> +++ b/hw/acpi/vmcoreinfo.c
> @@ -20,6 +20,8 @@
>  #include "sysemu/sysemu.h"
>  #include "qapi/error.h"
>  
> +VMCoreInfoState *vmcoreinfo_gdb_helper;
> +
>  void vmcoreinfo_build_acpi(VMCoreInfoState *vis, GArray *table_data,
>                             GArray *vmci, BIOSLinker *linker)
>  {
> @@ -181,6 +183,7 @@ static void vmcoreinfo_realize(DeviceState *dev, Error **errp)
>          return;
>      }
>  
> +    vmcoreinfo_gdb_helper = VMCOREINFO(dev);
>      qemu_register_reset(vmcoreinfo_handle_reset, dev);
>  }
>  
> 



