Re: [Qemu-devel] [PULL v1 0/7] MMIO Exec pull request


From: KONRAD Frederic
Subject: Re: [Qemu-devel] [PULL v1 0/7] MMIO Exec pull request
Date: Thu, 20 Jul 2017 11:53:35 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1



On 07/20/2017 11:42 AM, Peter Maydell wrote:
> On 17 July 2017 at 19:58, Dr. David Alan Gilbert <address@hidden> wrote:
>> * Edgar E. Iglesias (address@hidden) wrote:
>>> Is there a way we can prevent migration of the RAMBlock?

>> Not yet, I think we'd have to:
>>     a) Add a flag to the RAMBlock
>>     b) Set it/clear it on registration
>>     c) Have a RAMBLOCK_FOREACH_MIGRATABLE macro
>>     d) Replace all of the RAMBLOCK_FOREACH (and the couple of hand coded
>>     cases) with the RAMBLOCK_FOREACH_MIGRATABLE
>>     e) Worry about the corner cases!
>>
>> I've got a few worries about what happens when the kernel tries to
>> do dirty syncing - I'm not sure if we have to change anything on that
>> interface to skip those RAMBlocks.
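
[For illustration only: Dave's steps (a)-(d) could look roughly like the
sketch below. The flag value, the qemu_ram_is_migratable()/
qemu_ram_set_migratable() helpers and the macro body are guesses made for
this sketch, written against the RAMBlock/RAMBLOCK_FOREACH definitions in
include/exec/ram_addr.h; they are not actual QEMU code.]

#include "qemu/osdep.h"
#include "exec/ram_addr.h"   /* RAMBlock, ram_list, RAMBLOCK_FOREACH */

/* (a) a flag bit on the RAMBlock (RAMBlock already carries 'flags');
 * the bit value here is arbitrary */
#define RAM_MIGRATABLE (1 << 4)

static bool qemu_ram_is_migratable(RAMBlock *rb)
{
    return rb->flags & RAM_MIGRATABLE;
}

/* (b) mark the block when it is registered for migration, e.g. called
 * from vmstate_register_ram() via a small (hypothetical) setter */
void qemu_ram_set_migratable(RAMBlock *rb)
{
    rb->flags |= RAM_MIGRATABLE;
}

/* (c) an iterator that walks ram_list but skips unmarked blocks */
#define RAMBLOCK_FOREACH_MIGRATABLE(block)              \
    RAMBLOCK_FOREACH(block)                             \
        if (!qemu_ram_is_migratable(block)) {} else

/* (d) migration/ram.c would then use
 *     RAMBLOCK_FOREACH_MIGRATABLE(block) { ... }
 * wherever it currently uses RAMBLOCK_FOREACH(block). */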

> OK, so what should we do for 2.10 ?
>
> We could:
>   * implement the changes you suggest above, and mark only
>     vmstate_register_ram'd blocks as migratable
>     (would probably need to fix some places which buggily
>     don't call vmstate_register_ram)
>   * implement the changes above, but special case mmio-interface
>     so only its ramblock is marked unmigratable
>   * postpone the changes above until 2.11, and for 2.10 register
>     a migration-blocker in mmio-interface so that we at least
>     give the user a useful error rather than having it fail
>     obscurely on vmload (and release note this)
>
> (Or something else?)
>
> I do think we definitely need to fix this for 2.11 at latest.
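
[Again for illustration: the third option would mean registering a blocker
from the device's realize function, along these lines. This is a minimal
sketch assuming it lives in hw/misc/mmio_interface.c; error_setg() and
migrate_add_blocker() are existing QEMU APIs, but the header paths and the
surrounding device code are approximated.]

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "migration/blocker.h"   /* migrate_add_blocker(), header path may vary */

static Error *mmio_interface_mig_blocker;

static void mmio_interface_realize(DeviceState *dev, Error **errp)
{
    /* ... existing setup of the device's RAM-backed MemoryRegion ... */

    error_setg(&mmio_interface_mig_blocker,
               "mmio-interface does not support migration yet");
    if (migrate_add_blocker(mmio_interface_mig_blocker, errp) < 0) {
        /* e.g. refused because the user asked for --only-migratable */
        error_free(mmio_interface_mig_blocker);
        mmio_interface_mig_blocker = NULL;
    }
}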

I think we take fewer risks with the second option.
Maybe there are other problematic devices which don't call
vmstate_register_ram? Those would be broken by the first option.

BTW the issue will show up only if one executes code from the
LQSPI, so maybe a mix of (3) and (2)?

Fred


> thanks
> -- PMM



