qemu-devel

From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH v10 3/6] tpm: allocate/map buffer for TPM Physical Presence interface
Date: Wed, 5 Sep 2018 11:17:24 +0200

On Wed, 5 Sep 2018 12:21:39 +0400
Marc-André Lureau <address@hidden> wrote:

> Hi
> 
> On Tue, Sep 4, 2018 at 10:51 AM Igor Mammedov <address@hidden> wrote:
> >
> > On Mon, 03 Sep 2018 23:48:15 +0200
> > Juan Quintela <address@hidden> wrote:
> >
> > > Marc-André Lureau <address@hidden> wrote:
> > > > Hi
> > > >
> > > > On Fri, Aug 31, 2018 at 7:32 PM Marc-André Lureau
> > > > <address@hidden> wrote:
> > > >>
> > > >> From: Stefan Berger <address@hidden>
> > > >>
> > > >> Implement a virtual memory device for the TPM Physical Presence
> > > >> interface. The memory is located at 0xFED45000 and is used by ACPI to
> > > >> send messages to the firmware (BIOS) and by the firmware to provide
> > > >> parameters for each one of the supported codes.
> > > >>
> > > >> This interface should be used by all TPM devices on x86 and can be
> > > >> added by calling tpm_ppi_init_io().
> > > >>
> > > >> Note: bios_linker cannot be used to allocate the PPI memory region,
> > > >> since the reserved memory should stay stable across reboots, and might
> > > >> be needed before the ACPI tables are installed.
> > > >>
> > > >> Signed-off-by: Stefan Berger <address@hidden>
> > > >> Signed-off-by: Marc-André Lureau <address@hidden>
> > > >> Reviewed-by: Igor Mammedov <address@hidden>
> > >
> > >
> > > ....
> > >
> > > >> + */
> > > >> +#define TPM_PPI_ADDR_SIZE           0x400
> > > >> +#define TPM_PPI_ADDR_BASE           0xFED45000
> > >
> > > > There is a (new) issue with the PPI ram region:
> > > >
> > > > READ of size 32 at 0x61d000090480 thread T6
> > > >     #0 0x5622bd8de0f4 in buffer_zero_avx2
> > > > /home/elmarco/src/qq/util/bufferiszero.c:169
> > > >     #1 0x5622bd8de899 in select_accel_fn
> > > > /home/elmarco/src/qq/util/bufferiszero.c:282
> > > >     #2 0x5622bd8de8f1 in buffer_is_zero
> > > > /home/elmarco/src/qq/util/bufferiszero.c:309
> > > >     #3 0x5622bc209f94 in is_zero_range 
> > > > /home/elmarco/src/qq/migration/ram.c:82
> > > >     #4 0x5622bc21938d in save_zero_page_to_file
> > > > /home/elmarco/src/qq/migration/ram.c:1694
> > > >     #5 0x5622bc219452 in save_zero_page
> > > > /home/elmarco/src/qq/migration/ram.c:1713
> > > >     #6 0x5622bc21db67 in ram_save_target_page
> > > > /home/elmarco/src/qq/migration/ram.c:2289
> > > >     #7 0x5622bc21e13e in ram_save_host_page
> > > > /home/elmarco/src/qq/migration/ram.c:2351
> > > >     #8 0x5622bc21ea3a in ram_find_and_save_block
> > > > /home/elmarco/src/qq/migration/ram.c:2413
> > > >     #9 0x5622bc223b5d in ram_save_iterate
> > > > /home/elmarco/src/qq/migration/ram.c:3193
> > > >     #10 0x5622bd16f544 in qemu_savevm_state_iterate
> > > > /home/elmarco/src/qq/migration/savevm.c:1103
> > > >     #11 0x5622bd157e75 in migration_iteration_run
> > > > /home/elmarco/src/qq/migration/migration.c:2897
> > > >     #12 0x5622bd15892e in migration_thread
> > > > /home/elmarco/src/qq/migration/migration.c:3018
> > > >     #13 0x5622bd902f31 in qemu_thread_start
> > > > /home/elmarco/src/qq/util/qemu-thread-posix.c:504
> > > >     #14 0x7f42f0ef4593 in start_thread (/lib64/libpthread.so.0+0x7593)
> > > >     #15 0x7f42f0c280de in clone (/lib64/libc.so.6+0xfa0de)
> > > > 0x61d000090490 is located 0 bytes to the right of 2064-byte region
> > > > [0x61d00008fc80,0x61d000090490)
> > > >
> > > > The migration code assumes RAM size is a multiple of TARGET_PAGE_SIZE.
> > >
> > > Physical RAM is a multiple of TARGET_PAGE_SIZE O:-)
> > > That assumption is made in too many places; if you can change the size to
> > > be a multiple of the page size, that would be great.
> 
> It needs the RAMBlock to be a multiple of TARGET_PAGE_SIZE, but the
> memory region (in the guest and the QEMU MemoryRegion) can be smaller. This is
> already the case with, for example, "/address@hidden/acpi/rsdp", which is 36 bytes.
RSDP is different in the sense that it's not mapped to a GPA.


> I suppose 2 different MemoryRegions (backed by different RAMBlocks)
> placed on the same page (for example, one at offset 0x0 with size 0x10
> and one at offset 0x10 with size 0x100) would work fine in general and
> with migration?
memory_region_add_subregion() maps the whole region only;
theoretically you could make an alias region and map that, but it would
be rather ugly.
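
For illustration, a minimal sketch of that alias approach (not from the
series; it assumes the usual QEMU memory API headers, and the names
ppi_ram, ppi_alias and tpm_ppi_map_via_alias are made up): a page-sized
RAM region backs the data, so the RAMBlock is a TARGET_PAGE_SIZE
multiple, while only the 0x400-byte PPI window is mapped at the GPA.

/* Hypothetical sketch of the alias idea, not from the patch: a full page
 * of RAM backs the data (so migration sees a page-sized RAMBlock), while
 * only TPM_PPI_ADDR_SIZE bytes are exposed to the guest via an alias.
 * In a real device these MemoryRegions would live in the device state. */
static MemoryRegion ppi_ram;    /* page-sized RAM backing */
static MemoryRegion ppi_alias;  /* 0x400-byte guest-visible window */

static void tpm_ppi_map_via_alias(Object *owner, MemoryRegion *system_memory)
{
    memory_region_init_ram(&ppi_ram, owner, "tpm-ppi-page",
                           TARGET_PAGE_SIZE, &error_fatal);
    memory_region_init_alias(&ppi_alias, owner, "tpm-ppi",
                             &ppi_ram, 0, TPM_PPI_ADDR_SIZE);
    memory_region_add_subregion(system_memory, TPM_PPI_ADDR_BASE, &ppi_alias);
}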

> 
> > >
> > > > Should the migration code be fixed, or should the TPM code allocate
> > > > RAM differently?
> > >
> > > Migration people (i.e. me) would prefer that you fix the TPM
> > > allocation.  Or you can decide that this is *not* RAM.  The unit of
> > > transfer for migrating RAM is RAM pages, i.e. TARGET_PAGE_SIZE.
> > I'd loathe to waste a whole page in an already cramped area.
> > Can we implement it as an MMIO region? (It will add some boilerplate code
> > for read/write handlers/migration and cause vmexits on every read/write,
> > but it's not a hot path, so we might not care.)
> >
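
By comparison, a rough sketch of that MMIO fallback (hypothetical names,
not a proposal from the thread): the 0x400-byte buffer stays in a plain
host array, every guest access goes through byte-granular read/write
callbacks, and migration would carry the buffer as ordinary device state
(for example a VMSTATE_UINT8_ARRAY field) rather than as a RAMBlock.

/* Hypothetical MMIO variant: no RAMBlock, every access vmexits. */
static uint8_t ppi_buf[TPM_PPI_ADDR_SIZE];

static uint64_t tpm_ppi_mmio_read(void *opaque, hwaddr addr, unsigned size)
{
    return ppi_buf[addr];                /* .impl below forces 1-byte accesses */
}

static void tpm_ppi_mmio_write(void *opaque, hwaddr addr,
                               uint64_t val, unsigned size)
{
    ppi_buf[addr] = val;
}

static const MemoryRegionOps tpm_ppi_mmio_ops = {
    .read = tpm_ppi_mmio_read,
    .write = tpm_ppi_mmio_write,
    .impl.min_access_size = 1,
    .impl.max_access_size = 1,
    .endianness = DEVICE_LITTLE_ENDIAN,
};

static void tpm_ppi_init_mmio(Object *owner, MemoryRegion *system_memory)
{
    static MemoryRegion mr;

    memory_region_init_io(&mr, owner, &tpm_ppi_mmio_ops, NULL,
                          "tpm-ppi-mmio", TPM_PPI_ADDR_SIZE);
    memory_region_add_subregion(system_memory, TPM_PPI_ADDR_BASE, &mr);
}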
> 
> 
> I have made some small adjustments to allow
> memory_region_init_ram_device_ptr() to work with a MemoryRegion size
> != the backing RAMBlock size, and it seems to work fine (and it doesn't
> need to allocate more of the guest memory range, or fall back to MMIO).
> 
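
As a rough illustration of what that memory_region_init_ram_device_ptr()
arrangement might look like (a guess, not the actual patch: it assumes
the TPMPPI struct from the series with a ram MemoryRegion and a buf
pointer, and it relies on the adjustment described above that lets the
MemoryRegion be smaller than the backing RAMBlock):

/* Hypothetical sketch: the backing buffer is host-page-aligned and
 * page-sized, so the resulting RAMBlock satisfies migration's page-size
 * assumption, while the guest-visible MemoryRegion stays at
 * TPM_PPI_ADDR_SIZE (0x400) bytes. */
void tpm_ppi_init_io(TPMPPI *tpmppi, MemoryRegion *m,
                     hwaddr addr, Object *obj)
{
    tpmppi->buf = qemu_memalign(qemu_real_host_page_size,
                                qemu_real_host_page_size);
    memset(tpmppi->buf, 0, qemu_real_host_page_size);

    memory_region_init_ram_device_ptr(&tpmppi->ram, obj, "tpm-ppi",
                                      TPM_PPI_ADDR_SIZE, tpmppi->buf);
    /* give the block a stable idstr so it is migrated */
    vmstate_register_ram(&tpmppi->ram, DEVICE(obj));
    memory_region_add_subregion(m, addr, &tpmppi->ram);
}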
> thanks
> 
> > > > In any case, I think the migration code should either be fixed or have
> > > > an assert.
> > >
> > > An assert for sure.
> > >
> > > Fixed.... Do we have real devices that put RAM regions smaller than
> > > the page size?
> > >
> > > Later, Juan.
> >
> 
> 
