
From: Kirill Batuzov
Subject: [Qemu-devel] [PATCH 0/4] Fix memory leaks in QEMU
Date: Fri, 18 Apr 2014 17:41:20 +0400

I tried running QEMU under Valgrind's Memcheck tool and managed to find
some memory leaks.  I only checked "definitely lost" reports.  I ignored
reports related to SDL/GTK because it is hard to tell whether a leak occurred
in QEMU or in the library.

All found errors followed one pattern:
 1) The callee allocates some memory and returns a pointer to the caller,
    expecting it to free this memory later.
 2) The caller knows nothing about the nature of the returned pointer and
    does not perform the required cleanup.
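
In code, the pattern looks roughly like this (a minimal sketch, not taken
from the QEMU sources; make_description and use_description are made-up
names):

  #include <stdio.h>
  #include <glib.h>

  /* Callee: allocates memory and returns it, expecting the caller to
   * g_free() it later. */
  static char *make_description(void)
  {
      return g_strdup("some description");
  }

  /* Caller: uses the pointer but never frees it -- Memcheck reports the
   * allocation as "definitely lost". */
  static void use_description(void)
  {
      printf("%s\n", make_description());
  }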

In the actual code the callee often does not allocate memory directly but
calls some function that calls some other function ... that calls some other
function that dynamically allocates memory.  Tracking pointers through such a
call chain is not an easy task without detailed knowledge of the QEMU core.

Things get even more complicated because QEMU has 4 different ways to
allocate and deallocate memory mixed together:
 1) malloc and free,
 2) g_malloc and g_free,
 3) tcg_malloc and tcg_pool_reset,
 4) QObjects with reference counting.

List of culprits.

error_set:
It allocates memory for the error description.  This error description is
  then propagated to the caller and needs to be deallocated at some point
  with error_free.  Usually this is done correctly.  I fixed one missing
  deallocation in graphic_console_init.
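
  The usual correct pattern looks like this (a sketch using the Error API
  from qapi/error.h; some_function and arg are placeholders for any callee
  that may set an error):

    Error *local_err = NULL;

    some_function(arg, &local_err);
    if (local_err != NULL) {
        fprintf(stderr, "%s\n", error_get_pretty(local_err));
        error_free(local_err);    /* the deallocation that was missing */
    }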

object_property_get_qobject:
The returned QObject implements reference counting, so the caller must
  perform a decref.  Outside the object model this function is only used in
  ACPI.  I added the missing decrefs, but I wonder if it can be replaced with
  the "safe" function object_property_get_int.  The only difference is in the
  handling of the case when the requested property is not set.
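
  The fixed call sites follow this shape (a sketch; obj and "propname" are
  placeholders):

    QObject *o = object_property_get_qobject(obj, "propname", NULL);

    if (o) {
        /* ... inspect the property value ... */
        qobject_decref(o);    /* drop the reference handed to us */
    }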

PortioList:
For some unknown reason people tend to allocate a PortioList with g_new but
  never g_free it later.  They probably think that portio_list_add saves the
  pointer in some global variable, which it does not.  Also, portio_list_destroy
  was never called, so the memory allocated in portio_list_init was leaking.
  I'm not sure how PortioList was intended to be used by its author.  There is
  a portio_list_del function that removes the memory regions related to a
  PortioList from the memory hierarchy.  It could be used for cleanup at the
  end of the device's life, and it needs the PortioList to perform that task.
  But it is never used, so for now I made all PortioList variables local
  (allocated on the stack) and added calls to portio_list_destroy after
  portio_list_add, as sketched below.
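
  The resulting pattern in the fixed code looks roughly like this (a sketch;
  dev, s, portio_entries and base are placeholders, and portio_list_destroy
  is assumed to release the memory allocated in portio_list_init):

    PortioList port_list;                /* local, no longer g_new()'d */

    portio_list_init(&port_list, OBJECT(dev), portio_entries, s, "name");
    portio_list_add(&port_list, isa_address_space_io(dev), base);
    portio_list_destroy(&port_list);     /* frees portio_list_init's memory */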

qemu_allocate_irqs:
The most troublesome case.  It will need its own patch series and I need
  some advice on how to deal with it.

  qemu_allocate_irqs allocates two arrays: one for the actual IRQState
  structures and one for pointers to IRQState structures, also known as
  qemu_irq.  While the first array is used during the emulation process, the
  second one is only needed to return pointers to the caller.  Elements of
  the second array are eventually passed to sysbus_connect_irq or
  qdev_connect_gpio_out and get copied in the process (they are passed by
  value).  After that the second array is not needed anymore and all pointers
  to it get lost.  At that moment a memory leak occurs unless the array is
  freed.  And it almost never is.
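
  Concretely, the typical leaking call site looks like this (a sketch;
  handler, s, busdev and nirqs are placeholders):

    qemu_irq *irqs = qemu_allocate_irqs(handler, s, nirqs);
    int i;

    for (i = 0; i < nirqs; i++) {
        sysbus_connect_irq(busdev, i, irqs[i]);  /* qemu_irq copied by value */
    }
    /* irqs[] itself is now unneeded and could be g_free()'d here,
     * but almost no caller does that. */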

  qemu_allocate_irqs has a poor interface.  Not only does it return a pointer
  to dynamically allocated memory and expect the caller to free it, it does
  so in a very counterintuitive manner.  The caller still needs the IRQs
  allocated by qemu_allocate_irqs, yet it must free the returned array at the
  same time.

  Each emulated board needs interrupts, so nearly every board calls
  qemu_allocate_irqs at least once.  There are more than 90 occurrences in
  the QEMU sources.  Changing it will affect every target, and testing all of
  them will be very hard.  But I believe it should be done nevertheless.

  Peculiar fact: out of the 90+ calls to qemu_allocate_irqs, 47 allocate a
  single interrupt, and 27 of those use the following weird code snippet:
    some_function(qemu_allocate_irqs(handler, opaque, 1)[0]);
  instead of the simpler qemu_allocate_irq.
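
  With qemu_allocate_irq the same call becomes (assuming the usual
  (handler, opaque, irq number) signature, with IRQ number 0 matching
  element [0] of the array version):

    some_function(qemu_allocate_irq(handler, opaque, 0));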

  The proper solution would be to introduce a qemu_init_irqs function that
  initializes an already allocated array of qemu_irq, then rewrite every
  piece of code calling qemu_allocate_irqs to use either qemu_init_irqs or
  qemu_allocate_irq, and finally remove qemu_allocate_irqs completely.
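
  Purely as an illustration, such a function could look like this (a
  hypothetical sketch -- qemu_init_irqs does not exist yet):

    /* Hypothetical: fill a caller-owned array with n freshly allocated
     * IRQs instead of returning a heap-allocated pointer array. */
    void qemu_init_irqs(qemu_irq *out, qemu_irq_handler handler,
                        void *opaque, int n)
    {
        int i;

        for (i = 0; i < n; i++) {
            out[i] = qemu_allocate_irq(handler, opaque, i);
        }
    }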

  Pros: we get rid of qemu_allocate_irqs in one go.
  Cons: literally every board gets affected.

  Another approach is to introduce the new function qemu_init_irqs, mark
  qemu_allocate_irqs as deprecated, and gradually rewrite the code over a
  period of time.  But I do not think this is viable.  The QEMU core has a
  complex API which is not well documented.  As a result people learn it from
  examples, and during that period all existing examples of interrupt
  allocation would be bad ones.  Maintainers would need to keep a very close
  watch to prevent new "bad" code from slipping into upstream.

  Any thoughts on how to deal with qemu_allocate_irqs?  Is there any archive
  of guest system images for testing purposes?  The list on the wiki page
  covers only a small part of the supported boards.

Kirill Batuzov (4):
  Replace acpi_pcihp_get_bsel with generic object_property_get_int
  acpi-build: properly decrement objects' reference counters
  graphic_console_init: do not receive unneeded error descriptions
  PortioList: fix PortioList uses so they do not leak memory

 hw/acpi/pcihp.c         |   23 ++++++-----------------
 hw/audio/adlib.c        |    7 ++++---
 hw/display/qxl.c        |    9 +++++----
 hw/display/vga.c        |   16 +++++++++-------
 hw/dma/i82374.c         |    7 ++++---
 hw/i386/acpi-build.c    |    6 ++++++
 hw/isa/isa-bus.c        |    7 ++++---
 hw/ppc/prep.c           |    7 ++++---
 hw/watchdog/wdt_ib700.c |    7 ++++---
 ui/console.c            |    7 ++-----
 10 files changed, 48 insertions(+), 48 deletions(-)

-- 
1.7.10.4



