From: Laszlo Ersek
Subject: Re: [Qemu-devel] [PATCH v2 6/7] scripts/dump-guest-memory.py: add vmcoreinfo
Date: Thu, 6 Jul 2017 19:29:40 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1

On 07/06/17 12:16, Marc-André Lureau wrote:
> Add vmcoreinfo ELF note if vmcoreinfo device is ready.
> 
> To help the python script, add a little global vmcoreinfo_gdb
> structure, that is populated with vmcoreinfo_gdb_update().
> 
> Signed-off-by: Marc-André Lureau <address@hidden>
> ---
>  scripts/dump-guest-memory.py | 40 ++++++++++++++++++++++++++++++++++++++++
>  hw/acpi/vmcoreinfo.c         |  3 +++
>  2 files changed, 43 insertions(+)
> 
> diff --git a/scripts/dump-guest-memory.py b/scripts/dump-guest-memory.py
> index f7c6635f15..2dd2ed6983 100644
> --- a/scripts/dump-guest-memory.py
> +++ b/scripts/dump-guest-memory.py
> @@ -14,6 +14,7 @@ the COPYING file in the top-level directory.
>  """
>  
>  import ctypes
> +import struct
>  
>  UINTPTR_T = gdb.lookup_type("uintptr_t")
>  
> @@ -120,6 +121,20 @@ class ELF(object):
>          self.segments[0].p_filesz += ctypes.sizeof(note)
>          self.segments[0].p_memsz += ctypes.sizeof(note)
>  
> +
> +    def add_vmcoreinfo_note(self, vmcoreinfo):
> +        """Adds a vmcoreinfo note to the ELF dump."""
> +        chead = type(get_arch_note(self.endianness, 0, 0))
> +        header = chead.from_buffer_copy(vmcoreinfo[0:ctypes.sizeof(chead)])

Maybe it's obvious to others, but I would have been helped a lot if a
comment had explained that you are creating a fake note (with 0 desc
size and 0 name size) to figure out the size of the note header. And
then you copy that many bytes out of the vmcoreinfo ELF note.
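Something like this is what I mean (untested; it just restates the first
two lines of your method, with comments, using the same get_arch_note()
and ctypes calls as in the patch):

    def add_vmcoreinfo_note(self, vmcoreinfo):
        """Adds a vmcoreinfo note to the ELF dump."""
        # Build a throwaway note with 0-size name and 0-size desc, only to
        # learn the size of the fixed note header for this architecture.
        chead = type(get_arch_note(self.endianness, 0, 0))
        # Copy that many bytes from the start of the guest's vmcoreinfo ELF
        # note, so the real n_namesz / n_descsz fields can be read from it.
        header = chead.from_buffer_copy(vmcoreinfo[0:ctypes.sizeof(chead)])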

> +        note = get_arch_note(self.endianness,
> +                             header.n_namesz - 1, header.n_descsz)

Why the -1?

... I think I'm giving up here for this method. My Python is weak and I
can't follow this too well. Please add some comments.

More comments below:

> +        ctypes.memmove(ctypes.pointer(note), vmcoreinfo, ctypes.sizeof(note))
> +        header_size = ctypes.sizeof(note) - header.n_descsz
> +
> +        self.notes.append(note)
> +        self.segments[0].p_filesz += ctypes.sizeof(note)
> +        self.segments[0].p_memsz += ctypes.sizeof(note)
> +
>      def add_segment(self, p_type, p_paddr, p_size):
>          """Adds a segment to the elf."""
>  
> @@ -505,6 +520,30 @@ shape and this command should mostly work."""
>                  cur += chunk_size
>                  left -= chunk_size
>  
> +    def phys_memory_read(self, addr, size):
> +        qemu_core = gdb.inferiors()[0]
> +        for block in self.guest_phys_blocks:
> +            if block["target_start"] <= addr < block["target_end"]:

Although I don't expect a single read to straddle phys-blocks, I would
prefer if you checked (addr + size) -- and not just addr -- against
block["target_end"].

> +                haddr = block["host_addr"] + (addr - block["target_start"])
> +                return qemu_core.read_memory(haddr, size)
> +
> +    def add_vmcoreinfo(self):
> +        if not gdb.parse_and_eval("vmcoreinfo_gdb_helper"):
> +            return
> +
> +        addr = gdb.parse_and_eval("vmcoreinfo_gdb_helper.vmcoreinfo_addr_le")
> +        addr = bytes([addr[i] for i in range(4)])
> +        addr = struct.unpack("<I", addr)[0]
> +
> +        mem = self.phys_memory_read(addr, 16)
> +        (version, addr, size) = struct.unpack("<IQI", mem)
> +        if version != 0:
> +            return
> +
> +        vmcoreinfo = self.phys_memory_read(addr, size)
> +        if vmcoreinfo:
> +            self.elf.add_vmcoreinfo_note(vmcoreinfo.tobytes())
> +
>      def invoke(self, args, from_tty):
>          """Handles command invocation from gdb."""
>  
> @@ -518,6 +557,7 @@ shape and this command should mostly work."""
>  
>          self.elf = ELF(argv[1])
>          self.guest_phys_blocks = get_guest_phys_blocks()
> +        self.add_vmcoreinfo()
>  
>          with open(argv[0], "wb") as vmcore:
>              self.dump_init(vmcore)
> diff --git a/hw/acpi/vmcoreinfo.c b/hw/acpi/vmcoreinfo.c
> index 0ea41de8d9..b6bcb47506 100644
> --- a/hw/acpi/vmcoreinfo.c
> +++ b/hw/acpi/vmcoreinfo.c
> @@ -163,6 +163,8 @@ static void vmcoreinfo_handle_reset(void *opaque)
>      memset(vis->vmcoreinfo_addr_le, 0, ARRAY_SIZE(vis->vmcoreinfo_addr_le));
>  }
>  
> +static VMCoreInfoState *vmcoreinfo_gdb_helper;
> +
>  static void vmcoreinfo_realize(DeviceState *dev, Error **errp)
>  {
>      if (!bios_linker_loader_can_write_pointer()) {
> @@ -181,6 +183,7 @@ static void vmcoreinfo_realize(DeviceState *dev, Error **errp)
>          return;
>      }
>  
> +    vmcoreinfo_gdb_helper = VMCOREINFO(dev);
>      qemu_register_reset(vmcoreinfo_handle_reset, dev);
>  }
>  
> 

I guess we don't build QEMU with link-time optimization at the moment.

With link-time optimization, I think gcc might reasonably optimize away
the assignment to "vmcoreinfo_gdb_helper", and "vmcoreinfo_gdb_helper"
itself. This is why I suggested "volatile":

static VMCoreInfoState * volatile vmcoreinfo_gdb_helper;

Do you think volatile is only superfluous, or do you actively dislike it
for some reason?

Thanks,
Laszlo


