
[Qemu-devel] Re: [PATCHv2 09/12] vhost: vhost net support


From: Michael S. Tsirkin
Subject: [Qemu-devel] Re: [PATCHv2 09/12] vhost: vhost net support
Date: Sat, 27 Feb 2010 21:38:24 +0200
User-agent: Mutt/1.5.19 (2009-01-05)

On Fri, Feb 26, 2010 at 09:18:03AM -0600, Anthony Liguori wrote:
> On 02/26/2010 08:49 AM, Michael S. Tsirkin wrote:
>>
>> KVM code needs all kinds of work-arounds for KVM-specific issues.
>> It also assumes that KVM is registered at startup, so it
>> does not try to optimize finding slots.
>>    
>
> No, the slot mapping changes dynamically so KVM certainly needs to  
> optimize this.

Maybe, but it does not: the KVM algorithms are O(n^2) or worse.

> But the point is, why can't we keep a central list of slots somewhere  
> that KVM and vhost-net can both use?  I'm not saying we use a common  
> function to do this work, I'm saying qemu should maintain a proper slot  
> list that anyone can access.
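
A minimal sketch of what such a shared slot list might look like; the
names here are hypothetical, not something kvm-all.c or this patch
defines:

    /* Hypothetical central slot list -- illustration only. */
    typedef struct QEMUMemorySlot {
        target_phys_addr_t start_addr;   /* guest physical address */
        ram_addr_t memory_size;          /* length of the region */
        ram_addr_t phys_offset;          /* offset into qemu RAM */
        int flags;                       /* e.g. dirty logging enabled */
    } QEMUMemorySlot;

    /* Both KVM and vhost-net would walk this one list instead of each
     * building a private copy from the same registration callbacks. */
    int qemu_foreach_memory_slot(int (*fn)(QEMUMemorySlot *slot,
                                           void *opaque),
                                 void *opaque);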
>
>> I propose merging this as is, and then someone who has an idea
>> how to do this better can come and unify the code.
>>    
>
> Like I said, this has been a huge source of very subtle bugs in the  
> past.  I'm open to hearing what other people think, but I'm concerned  
> that if we merge this code, we'll end up facing some nasty bugs that  
> could easily be eliminated by just using the code in kvm-all that has  
> already been tested rather extensively.
>
> There really aren't that many work-arounds in the code, BTW.  The
> work-arounds just result in a couple of extra slots, so they shouldn't
> be a burden to vhost.
>
>> Mine has no bugs, let's switch to it!
>>
>> Seriously, need to tread very carefully here.
>> This is why I say: merge it, then look at how to reuse code.
>>    
>
> Once it's merged, there's no incentive to look at reusing code.
> Again, I don't think this is a huge burden to vhost.  The two bits of code  
> literally do exactly the same thing.  They just use different data  
> structures that ultimately contain the same values.

Not exactly. For example, KVM tracks ROM and video RAM addresses.
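
For reference, the two structures being compared look roughly like this
(approximate definitions from qemu's kvm-all.c and linux/vhost.h of this
era, shown only for comparison):

    /* qemu kvm-all.c (approximate) */
    typedef struct KVMSlot {
        target_phys_addr_t start_addr;
        ram_addr_t memory_size;
        ram_addr_t phys_offset;
        int slot;
        int flags;
    } KVMSlot;

    /* linux/vhost.h */
    struct vhost_memory_region {
        __u64 guest_phys_addr;
        __u64 memory_size;
        __u64 userspace_addr;
        __u64 flags_padding; /* no flags currently specified */
    };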

>>> C++ habits die hard :-)
>>>      
>>
>> What's that about?
>>    
>
> '++i' is an odd thing to do in C in a for() loop.  We're not explicit
> about it in Coding Style but the vast majority of code just does
> 'i++'.

Ugh. Do we really need to specify every little thing?

>>>> +    vq->desc = cpu_physical_memory_map(a, &l, 0);
>>>> +    if (!vq->desc || l != s) {
>>>> +        r = -ENOMEM;
>>>> +        goto fail_alloc;
>>>> +    }
>>>> +    s = l = offsetof(struct vring_avail, ring) +
>>>> +        sizeof(u_int64_t) * vq->num;
>>>> +    a = virtio_queue_get_avail(vdev, idx);
>>>> +    vq->avail = cpu_physical_memory_map(a, &l, 0);
>>>> +    if (!vq->avail || l != s) {
>>>> +        r = -ENOMEM;
>>>> +        goto fail_alloc;
>>>> +    }
>>>>
>>>>        
>>> You don't unmap avail/desc on failure.  map() may fail because the ring
>>> crosses MMIO memory and you run out of bounce buffers.
>>>
>>> IMHO, it would be better to attempt to map the full ring at once and
>>> then if that doesn't succeed, bail out.  You can still pass individual
>>> pointers via vhost ioctls but within qemu, it's much easier to deal with
>>> the whole ring at a time.
>>>      
>> + a = virtio_queue_get_desc(vdev, idx);
>> I prefer to keep as much logic about ring layout as possible
>> in virtio.c
>>    
>
> Well, the downside is that you need to deal with the error and cleanup
> paths, and it becomes more complicated.
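
A rough sketch of the suggestion, assuming the ring size can be taken
from the Linux vring_size() helper with page alignment (both are
assumptions, not part of the patch); the whole ring maps with one call,
so the failure path is a single unmap:

    /* Sketch only: one map, one failure path. */
    target_phys_addr_t a = virtio_queue_get_desc(vdev, idx);
    target_phys_addr_t s, l;
    void *ring;

    s = l = vring_size(vq->num, TARGET_PAGE_SIZE);
    ring = cpu_physical_memory_map(a, &l, 1);
    if (!ring || l != s) {
        if (ring) {
            cpu_physical_memory_unmap(ring, l, 1, 0);
        }
        r = -ENOMEM;
        goto fail_alloc;
    }
    /* The desc/avail/used pointers passed to the vhost ioctls then become
     * offsets into this single mapping instead of three separate map()
     * calls. */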
>
>>>> +    s = l = offsetof(struct vring_used, ring) +
>>>> +        sizeof(struct vring_used_elem) * vq->num;
>>>>
>>>>        
>>> This is unfortunate.  We redefine these structures in qemu to avoid
>>> depending on Linux headers.
>>>      
>> And we should, e.g. for Windows portability.
>>
>>    
>>>   But you're using the linux versions instead
>>> of the qemu versions.  Is it really necessary for vhost.h to include
>>> virtio.h?
>>>      
>> Yes. And anyway, vhost does not exist on non-linux systems so there
>> is no issue IMO.
>>    
>
> Yeah, like I said, it's unfortunate because it means a reader of vhost and  
> a reader of virtio.c are likely to get confused.  I'm not saying there's  
> an easy solution, it's just unfortunate.
>
>>>> +    vq->used_phys = a = virtio_queue_get_used(vdev, idx);
>>>> +    vq->used = cpu_physical_memory_map(a, &l, 1);
>>>> +    if (!vq->used || l != s) {
>>>> +        r = -ENOMEM;
>>>> +        goto fail_alloc;
>>>> +    }
>>>> +
>>>> +    r = vhost_virtqueue_set_addr(dev, vq, idx, dev->log_enabled);
>>>> +    if (r < 0) {
>>>> +        r = -errno;
>>>> +        goto fail_alloc;
>>>> +    }
>>>> +    if (!vdev->binding->guest_notifier || !vdev->binding->host_notifier) {
>>>> +        fprintf(stderr, "binding does not support irqfd/queuefd\n");
>>>> +        r = -ENOSYS;
>>>> +        goto fail_alloc;
>>>> +    }
>>>> +    r = vdev->binding->guest_notifier(vdev->binding_opaque, idx, true);
>>>> +    if (r < 0) {
>>>> +        fprintf(stderr, "Error binding guest notifier: %d\n", -r);
>>>> +        goto fail_guest_notifier;
>>>> +    }
>>>> +
>>>> +    r = vdev->binding->host_notifier(vdev->binding_opaque, idx, true);
>>>> +    if (r < 0) {
>>>> +        fprintf(stderr, "Error binding host notifier: %d\n", -r);
>>>> +        goto fail_host_notifier;
>>>> +    }
>>>> +
>>>> +    file.fd = event_notifier_get_fd(virtio_queue_host_notifier(q));
>>>> +    r = ioctl(dev->control, VHOST_SET_VRING_KICK, &file);
>>>> +    if (r) {
>>>> +        goto fail_kick;
>>>> +    }
>>>> +
>>>> +    file.fd = event_notifier_get_fd(virtio_queue_guest_notifier(q));
>>>> +    r = ioctl(dev->control, VHOST_SET_VRING_CALL, &file);
>>>> +    if (r) {
>>>> +        goto fail_call;
>>>> +    }
>>>>
>>>>        
>>> This function would be a bit more reasonable if it were split into
>>> sections FWIW.
>>>      
>> Not sure what you mean here.
>>    
>
> Just a suggestion.  For instance, moving the setting up of the notifiers  
> to a separate function would help with readability IMHO.


Hmm. I'll look into it.
I actually think that for functions that just do a list of things
unconditionally, without branches or loops, or with just error handling
as here, it is perfectly fine for them to be of any length.
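
For what it's worth, the split being suggested could look something like
this sketch (the helper name is made up, and error unwinding is
simplified relative to the patch's goto labels):

    static int vhost_virtqueue_setup_notifiers(struct vhost_dev *dev,
                                               VirtIODevice *vdev,
                                               VirtQueue *q, int idx)
    {
        struct vhost_vring_file file = { .index = idx };
        int r;

        r = vdev->binding->guest_notifier(vdev->binding_opaque, idx, true);
        if (r < 0) {
            return r;
        }
        r = vdev->binding->host_notifier(vdev->binding_opaque, idx, true);
        if (r < 0) {
            vdev->binding->guest_notifier(vdev->binding_opaque, idx, false);
            return r;
        }
        file.fd = event_notifier_get_fd(virtio_queue_host_notifier(q));
        r = ioctl(dev->control, VHOST_SET_VRING_KICK, &file);
        if (r) {
            return -errno;
        }
        file.fd = event_notifier_get_fd(virtio_queue_guest_notifier(q));
        r = ioctl(dev->control, VHOST_SET_VRING_CALL, &file);
        if (r) {
            return -errno;
        }
        return 0;
    }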

>>
>>> You never unmap() the mapped memory and you're cheating by assuming that
>>> the virtio rings have a constant mapping for the lifetime of a guest.
>>> That's not technically true.  My concern is that since a guest can
>>> trigger remappings (by adjusting PCI mappings) badness can ensue.
>>>      
>> I do not know how this can happen. What do PCI mappings have to do with this?
>> Please explain. If it can happen, vhost will need a notification to update.
>>    
>
> If a guest modifies the BAR for an MMIO region such that it happens to  
> exist in RAM, while this is a bad thing for the guest to do, I don't  
> think we do anything to stop it.  When the region gets remapped, the  
> result will be that the mapping will change.

So IMO this is the bug. If there's a BAR that matches a RAM
physical address, it should never get mapped. Any idea how
to check this?

> Within qemu, because we carry the qemu_mutex, we know that the mappings  
> are fixed as long as we're in qemu.  We're very careful not to rely on  
> a mapping after we drop the qemu_mutex.
>
> With vhost, you register a slot table and update it whenever mappings  
> change.  I think that's good enough for dealing with ram addresses.  But  
> you pass the virtual address for the rings and assume those mappings  
> never change.

So, the issue IMO is that an MMIO address gets passed instead of RAM.
There's no reason to put the virtio rings anywhere but RAM; we just need
to verify this.
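
One possible check, sketched against the memory API of this era (the
helper is hypothetical; treating anything other than IO_MEM_RAM as
unusable is the assumption here):

    /* Sketch: refuse to start vhost if the ring is not backed by plain RAM. */
    static bool vhost_ring_is_ram(target_phys_addr_t addr,
                                  target_phys_addr_t len)
    {
        target_phys_addr_t a;

        for (a = addr & TARGET_PAGE_MASK; a < addr + len;
             a += TARGET_PAGE_SIZE) {
            ram_addr_t pd = cpu_get_physical_page_desc(a);

            if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
                return false;
            }
        }
        return true;
    }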

> 
> I'm pretty sure a guest can cause those to change, and while I'm not 100%  
> sure, I think it's a potential source of exploits if you assume a mapping.  
> At the very least, a guest can trick vhost into writing to RAM that it 
> wouldn't normally write to.

This seems harmless. The guest can write anywhere in RAM anyway.

>>> If you're going this way, I'd suggest making static inlines in the
>>> header file instead of polluting the C file.  It's more common to search
>>> within a C file and having two declarations can get annoying.
>>>
>>> Regards,
>>>
>>> Anthony Liguori
>>>      
>> The issue with inlines is that virtio-net would then depend on the
>> target (and need to be recompiled per target).  As it is, a single
>> object can link with both the vhost and non-vhost versions.
>>    
>
> Fair enough.
>
> Regards,
>
> Anthony Liguori



