qemu-devel

Re: [Qemu-devel] [PATCH v1 1/2] vhost-user: support SET_MEM_TABLE wait for the result of mmap


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH v1 1/2] vhost-user: support SET_MEM_TABLE wait for the result of mmap
Date: Tue, 10 Feb 2015 11:41:26 +0100

On Tue, Feb 10, 2015 at 06:27:04PM +0800, Linhaifeng wrote:
> 
> 
> On 2015/2/10 16:46, Michael S. Tsirkin wrote:
> > On Tue, Feb 10, 2015 at 01:48:12PM +0800, linhaifeng wrote:
> >> From: Linhaifeng <address@hidden>
> >>
> >> The slave should reply to the master, setting u64 to 0 if
> >> mmap succeeded for all regions, and to 1 otherwise.
> >>
> >> Signed-off-by: Linhaifeng <address@hidden>
> > 
> > How does this work with existing slaves though?
> > 
> 
> Slaves should work like this:
> 
> int set_mem_table(...)
> {
>     ....
>     for (idx = 0; idx < memory.nregions; idx++) {
>         ....
>         mem = mmap(..);
>         if (mem == MAP_FAILED) {
>             /* tell the master the mapping failed */
>             msg->msg.u64 = 1;
>             msg->msg.size = MEMB_SIZE(VhostUserMsg, u64);
>             return 1;
>         }
>     }
> 
>     ....
> 
>     /* all regions mapped successfully */
>     msg->msg.u64 = 0;
>     msg->msg.size = MEMB_SIZE(VhostUserMsg, u64);
>     return 1;
> }
> 
> If the slave does not reply, QEMU will wait forever.

Are you sure existing slaves reply?
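For a slave that does reply, the missing piece in the snippet above is the
actual send on the vhost-user socket.  Below is a minimal, untested sketch
of that reply path: the header layout and the flag bits follow the
vhost-user spec, but the simplified struct, the constants and the helper
name are assumptions, not code from the patch.

#include <stddef.h>   /* offsetof */
#include <stdint.h>
#include <unistd.h>

#define VHOST_USER_SET_MEM_TABLE 5          /* Id: 5, per the spec hunk below */
#define VHOST_USER_VERSION       0x1        /* protocol version, flags bits 0-1 */
#define VHOST_USER_REPLY_MASK    (0x1 << 2) /* marks a slave-to-master reply */

/* Simplified wire format: u32 request, u32 flags, u32 size, then payload.
 * The real VhostUserMsg carries a union of payloads; only u64 matters here. */
typedef struct VhostUserMsg {
    uint32_t request;
    uint32_t flags;
    uint32_t size;   /* number of payload bytes following the header */
    uint64_t u64;    /* 0: all regions mmapped, >0: some mmap failed */
} __attribute__((packed)) VhostUserMsg;

/* Send the set_mem_table() result back to the master. */
static int reply_set_mem_table(int sockfd, uint64_t result)
{
    VhostUserMsg msg = {
        .request = VHOST_USER_SET_MEM_TABLE,
        .flags   = VHOST_USER_VERSION | VHOST_USER_REPLY_MASK,
        .size    = sizeof(msg.u64),
        .u64     = result,
    };

    if (write(sockfd, &msg, offsetof(VhostUserMsg, u64) + msg.size) < 0) {
        return -1;
    }
    return 0;
}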

> >> ---
> >>  docs/specs/vhost-user.txt | 1 +
> >>  1 file changed, 1 insertion(+)
> >>
> >> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> >> index 650bb18..c96bf6b 100644
> >> --- a/docs/specs/vhost-user.txt
> >> +++ b/docs/specs/vhost-user.txt
> >> @@ -171,6 +171,7 @@ Message types
> >>        Id: 5
> >>        Equivalent ioctl: VHOST_SET_MEM_TABLE
> >>        Master payload: memory regions description
> >> +      Slave payload: u64 (0:success >0:failed)
> >>  
> >>        Sets the memory map regions on the slave so it can translate the vring
> >>        addresses. In the ancillary data there is an array of file descriptors
> >> -- 
> >> 1.7.12.4
> >>
> > 
> > 
> 
> -- 
> Regards,
> Haifeng
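On the master side, the documented change implies that QEMU must now block
on the socket until this ack arrives, instead of returning as soon as the
region table has been sent.  A matching sketch, reusing the simplified
VhostUserMsg layout assumed above (a real implementation would read the
header first, then read size payload bytes):

static int wait_set_mem_table_ack(int sockfd)
{
    VhostUserMsg reply;

    /* Block until the slave reports the outcome of its mmap() calls. */
    if (read(sockfd, &reply, sizeof(reply)) < 0) {
        return -1;
    }
    /* u64 == 0 means every region was mapped; anything else is a failure. */
    return reply.u64 == 0 ? 0 : -1;
}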


