From: Dave Airlie
Subject: Re: [Qemu-devel] [RfC PATCH 06/15] virtio-gpu/2d: add virtio gpu core code
Date: Mon, 2 Mar 2015 08:03:00 +1000

>
> But one non-formal question: As far as I understood virtio-gpu's mode of
> operation from this patch, it looks like there is one resource per scanout,
> and that resource is basically the whole screen (which can be updated
> partially).
>
> If that is the case, what do we gain from being able to display a resource
> on multiple scanouts? If we don't associate a scanout to a resource with
> set_scanout, the resource won't ever be displayed on that scanout; and if we
> associate it, the scanout's position and dimension will be exactly the same
> as the resource's, so associating a resource with multiple scanouts means
> that all those scanouts will be duplicates of each other, which in turn
> means we can duplicate heads. But that seems more like a frontend feature to
> me...

That's just how real hw works, and sw expects it: we have resources, and
multiple scanouts can access one resource, or each scanout can have its
own private resource.
> too).
>
>> +        cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
>> +        return;
>> +    }
>> +
>> +    res = g_new0(struct virtio_gpu_simple_resource, 1);
>> +
>> +    res->width = c2d.width;
>> +    res->height = c2d.height;
>> +    res->format = c2d.format;
>> +    res->resource_id = c2d.resource_id;
>
>
> Considering resource_id == 0 is sometimes (apparently) used as "no resource"
> (e.g. in transfer_to_host_2d), would it make sense to test against that
> here?

Gerd, btw I think we might want to support a set_scanout with no
resource, as it might make Linux atomic modesetting easier in the
future; we'd just scan out black.

>> +                                  struct iovec **iov)
>> +{
>> +    struct virtio_gpu_mem_entry *ents;
>> +    size_t esize, s;
>> +    int i;
>> +
>> +    esize = sizeof(*ents) * ab->nr_entries;
>
>
> Can overflow (on 32 bit hosts).
>
> (And this might be bad, because this function works just fine, and then
> either the for loop does something, or nr_entries is negative when casted to
> signed int, and then the for loop won't do anything; but res->iov_cnt
> (unsigned int) will be set to ab->nr_entries, so this may or may not be
> exploitable, but can probably lead to a crash)
>
>> +    ents = g_malloc(esize);
>
>
> And this may need to be g_try_malloc().
>
> However, I think it's best to simply limit ab->nr_entries to some sane value
> (I guess it shouldn't be even in the double digits for a single resource, so
> restricting it to 256 or something seems sane to me...) and return an error
> to the guest if that limit is exceeded.

please be careful about limiting this; limits should be pessimistic.
If the GPU allocates one page per iov, a 1280x1024x32bpp resource will
have 1280 iovs.
Dave.


