From: David Gibson
Subject: Re: [Qemu-ppc] [Qemu-devel] [QEMU PATCH v5 4/6] migration: migrate QTAILQ
Date: Fri, 7 Oct 2016 14:25:52 +1100
User-agent: Mutt/1.7.0 (2016-08-17)

On Thu, Oct 06, 2016 at 08:01:56PM +0100, Dr. David Alan Gilbert wrote:
> * Jianjun Duan (address@hidden) wrote:
> > 
> > 
> > On 10/05/2016 09:56 AM, Dr. David Alan Gilbert wrote:
> > > * Jianjun Duan (address@hidden) wrote:
> > >> Currently we cannot directly transfer a QTAILQ instance because of a
> > >> limitation in the migration code. Here we introduce an approach to
> > >> transfer such structures. In our approach such a structure is tagged
> > >> with VMS_LINKED. We then modified vmstate_save_state and
> > >> vmstate_load_state so that when VMS_LINKED is encountered, the put and
> > >> get from VMStateInfo are called respectively. We created the VMStateInfo
> > >> vmstate_info_qtailq for QTAILQ; similar VMStateInfo can be created for
> > >> other data structures such as lists. This approach will be used to
> > >> transfer pending_events and ccs_list in the spapr state.
> > >>
> > >> We also created some macros in qemu/queue.h to access a QTAILQ using
> > >> pointer arithmetic. This ensures that the migration code does not
> > >> depend on the implementation details of QTAILQ.
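For illustration, the save side of such a vmstate_info_qtailq handler could look roughly like the sketch below. The signatures are simplified, and QTAILQ_RAW_FIRST/QTAILQ_RAW_NEXT stand in for whatever pointer-arithmetic helpers the patch adds to qemu/queue.h; treat the names as assumptions, not the actual patch.

    /* Sketch: walk the queue via offset-based helpers and emit one marker
     * byte per element (1 = an element follows, 0 = end of queue), recursing
     * into the per-element VMSD for the payload. */
    static void put_qtailq(QEMUFile *f, void *pv, size_t unused_size,
                           VMStateField *field)
    {
        const VMStateDescription *vmsd = field->vmsd;  /* per-element VMSD */
        size_t entry_offset = field->start;            /* offset of the QTAILQ_ENTRY */
        void *elm;

        for (elm = QTAILQ_RAW_FIRST(pv); elm;
             elm = QTAILQ_RAW_NEXT(elm, entry_offset)) {
            qemu_put_byte(f, 1);                       /* another element follows */
            vmstate_save_state(f, vmsd, elm, NULL);
        }
        qemu_put_byte(f, 0);                           /* terminator */
    }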
> > > 
> > > I think we're going to need a way to have more flexible loops, so my
> > > choice here wouldn't be to use the .get/.put together with the VMSD;
> > > I think we'll end up needing a new data structure, maybe a
> > > VMStateLoop *loop in VMStateField.
> > > 
> > > So it would be easier if you added that new member: then you wouldn't
> > > have to modify every get() and put() function that already exists in
> > > the previous patch.
> > > 
> > > Specifically, your QTAILQ format is perfectly reasonable: a byte
> > > before each entry which is 1 to indicate there's an entry or 0 to
> > > indicate termination. But there are lots of other variants, e.g.
> > > 
> > >    a) put_scsi_requests uses that byte to hold a flag, so it's 0, 1 or 2:
> > >       0 still means terminate, but 1 or 2 sets a flag in the structure.
> > 
> > I quickly took a look at put_scsi_requests. It transfers a QTAILQ of
> > SCSIRequest, but it reaches into the structure itself to dump the
> > elements out. Using my approach, I would have a VMSD for SCSIRequest.
> > The additional byte used to indicate the end of the queue would lie
> > outside the SCSIRequest data block, so there would be no confusion.
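Under the same assumptions as the earlier sketch, a matching load side would read the marker byte before allocating each element, so the terminator never enters the element's own data block:

    static int get_qtailq(QEMUFile *f, void *pv, size_t unused_size,
                          VMStateField *field)
    {
        const VMStateDescription *vmsd = field->vmsd;  /* per-element VMSD */
        size_t entry_offset = field->start;            /* offset of the QTAILQ_ENTRY */
        int version_id = field->version_id;

        while (qemu_get_byte(f)) {                     /* 1 = element follows, 0 = done */
            void *elm = g_malloc0(field->size);        /* field->size = element size */
            int ret = vmstate_load_state(f, vmsd, elm, version_id);

            if (ret) {
                g_free(elm);
                return ret;
            }
            QTAILQ_RAW_INSERT_TAIL(pv, elm, entry_offset);
        }
        return 0;
    }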
> 
> Hmm OK; I don't think it's that easy but we'll see.
> 
> However, can I make one much simpler request: please split this patch
> so that the VMSTATE_LINKED and
> vmstate_save_state/vmstate_load_state/vmfield_get_type_name changes are
> in one patch, while the QTAILQ changes are in a separate patch.
> (I'd be OK if you moved the VMSTATE_LINKED into the previous patch.)
> 
> I've just been thinking about a different use for the same mechanism;
> I want to do a:
>   VMSTATE_WITH_TMP(t1*, type1, type2, vmsd)
> 
> which also sets the LINKED flag, where the .get/.put allocate a temporary
> structure (of type/size type2), set up *tmp = t1, and then do the
> vmstate_load/save using the vmsd on the temporary; something like
> (untested):
> 
> static int get_tmp(QEMUFile *f, void *pv, size_t unused_size,
>                    VMStateField *field)
> {
>     const VMStateDescription *vmsd = field->vmsd;
>     size_t size = field->size;
>     int version_id = field->version_id;
>     void *tmp = g_malloc(size);
>     int ret;
>     
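>     /* type2's first element is 'type1 *parent': point it back at the real state */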
>     *(void **)tmp = pv;
>     ret = vmstate_load_state(f, vmsd, tmp, version_id);
>     g_free(tmp);
>     return ret;
> }
> 
> This can be in a generic macro, and we would impose that type2 must be a
> struct whose first element is 'type1 *parent' (compile-checked).
> This would work nicely where we have to do some maths to generate
> temporary results prior to migration; the .pre_save of the vmsd can read
> the data from pv->parent and write it to the other fields, without having
> to use qemu_get_*/qemu_put_* at all.
> 
> Dave
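A hypothetical use of the proposed macro might look like the sketch below. FooState, FooTmp and their fields are made-up names for illustration, and the exact VMSTATE_WITH_TMP argument list is left open; this is not existing QEMU code.

    /* Hypothetical device state; only the fields used below. */
    typedef struct FooState {
        uint64_t freq_hz;
        uint64_t elapsed_ns;
    } FooState;

    /* Temporary wire-only struct: the first member must point back at the parent. */
    typedef struct FooTmp {
        FooState *parent;          /* filled in by get_tmp/put_tmp */
        uint64_t derived_ticks;    /* computed only for the migration stream */
    } FooTmp;

    static void foo_tmp_pre_save(void *opaque)
    {
        FooTmp *tmp = opaque;

        /* Read from the real state, write only into the temporary. */
        tmp->derived_ticks = tmp->parent->freq_hz * tmp->parent->elapsed_ns;
    }

    static const VMStateDescription vmstate_foo_tmp = {
        .name = "foo/tmp",
        .pre_save = foo_tmp_pre_save,
        .fields = (VMStateField[]) {
            VMSTATE_UINT64(derived_ticks, FooTmp),
            VMSTATE_END_OF_LIST()
        },
    };

    /* In the parent's field list, a VMSTATE_WITH_TMP(...) entry referencing
     * FooTmp and vmstate_foo_tmp would route this part of the state through
     * the temporary; its exact arguments are whatever the macro ends up taking. */

Nothing migration-only has to live in FooState itself; the temporary exists only for the duration of the save or load.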

Oh, I like this idea.  I know there are a number of places where
should-be-obsolete fields are still present in structures purely to
catch incoming migration info which is then converted to the modern
representation in post_load.  This would allow cleaning a bunch of
those up.

It would also mean we don't necessarily need explicit handling of
queues/lists.  I objected to early versions of this series which
dumped the qtailq into an array and used the existing array vmstate
types, because it meant not just an only-for-migration field in the
structure, but a substantial slab of only-for-migration data.

If we added the concept of temporary "catching" structures to the
vmsd, that objection would go away.  I'd be happy enough to
temporarily dump the queue into an array, transfer that over the
stream into another temporary array, then load it into the destination
queue.
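As a sketch of that shape (all names and the fixed bound are illustrative assumptions): the temporary struct carries an array copy of the queue, pre_save fills it on the source, and post_load rebuilds the queue on the destination.

    #define MAX_PENDING 16                       /* illustrative bound only */

    typedef struct PendingEvent {
        uint32_t log_type;                       /* illustrative payload */
        QTAILQ_ENTRY(PendingEvent) next;
    } PendingEvent;

    typedef struct SpaprState {
        QTAILQ_HEAD(, PendingEvent) pending_events;
    } SpaprState;

    typedef struct PendingTmp {
        SpaprState *parent;                      /* must be first */
        uint32_t n;
        PendingEvent events[MAX_PENDING];        /* migration-only copy of the queue */
    } PendingTmp;

    static void pending_tmp_pre_save(void *opaque)
    {
        PendingTmp *tmp = opaque;
        PendingEvent *e;

        tmp->n = 0;
        QTAILQ_FOREACH(e, &tmp->parent->pending_events, next) {
            tmp->events[tmp->n++] = *e;          /* a real version would bound-check */
        }
    }

    static int pending_tmp_post_load(void *opaque, int version_id)
    {
        PendingTmp *tmp = opaque;
        uint32_t i;

        for (i = 0; i < tmp->n; i++) {
            PendingEvent *e = g_memdup(&tmp->events[i], sizeof(*e));
            QTAILQ_INSERT_TAIL(&tmp->parent->pending_events, e, next);
        }
        return 0;
    }

The tmp VMSD would describe only n and the payload fields of events[], so the array never has to exist in SpaprState itself.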

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
