From: Paul Brook
Subject: Re: [Qemu-devel] [PATCH 2/6] PCI DMA API (v2)
Date: Mon, 7 Apr 2008 10:44:41 -0500
User-agent: KMail/1.9.9

> +/* Return a new IOVector that's a subset of the passed in IOVector.  It
> + * should be freed with qemu_free when you are done with it. */
> +IOVector *iovector_trim(const IOVector *iov, size_t offset, size_t size);

Using qemu_free directly seems a bad idea. I guess we're likely to want to 
switch to a different memory allocation scheme in the future.
The comment is also potentially misleading because iovector_new() doesn't 
mention anything about having to free the vector.
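
For what it's worth, a dedicated destructor is a one-liner. A minimal sketch, 
assuming the IOVector type from the patch (iovector_free() is an invented 
name, not something in the series):

/* Sketch only: give the iovector module its own destructor so callers
 * never see the underlying allocator.  Switching away from qemu_free()
 * later then only means changing this one function. */
void iovector_free(IOVector *iov)
{
    qemu_free(iov);    /* free(NULL) is a no-op, so NULL is fine here */
}

The comments on iovector_new() and iovector_trim() could then both just say 
"free with iovector_free()".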

> +int bdrv_readv(BlockDriverState *bs, int64_t sector_num,
>...
> +    size = iovector_size(iovec);
> +    buffer = qemu_malloc(size);

This concerns me for two reasons:
(a) I'm always suspicious about the performance implications of using malloc on 
a hot path.
(b) The size of the buffer is unbounded. I'd expect multi-megabyte transfers to 
be common, and gigabyte-sized operations are plausible.

At minimum you need a comment acknowledging that we've considered these 
issues.
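
For illustration only, one way to address both points is a bounded bounce 
buffer that the request is walked through in chunks. Everything below is a 
sketch: bdrv_writev_bounced() and copy_from_iovector() are invented names, the 
64k bound is arbitrary, and the transfer is assumed to be sector-aligned.

#define BOUNCE_BUFFER_SIZE (64 * 1024)    /* arbitrary bound for the example */

/* Sketch only: allocate at most BOUNCE_BUFFER_SIZE once per request and
 * reuse it for each chunk, instead of qemu_malloc()ing the full
 * (unbounded) transfer size. */
static int bdrv_writev_bounced(BlockDriverState *bs, int64_t sector_num,
                               const IOVector *iovec)
{
    size_t total = iovector_size(iovec);
    size_t chunk = total < BOUNCE_BUFFER_SIZE ? total : BOUNCE_BUFFER_SIZE;
    uint8_t *bounce = qemu_malloc(chunk);
    size_t done = 0;

    while (done < total) {
        size_t len = (total - done) < chunk ? (total - done) : chunk;

        /* copy_from_iovector() is a hypothetical helper that gathers
         * 'len' bytes of the IOVector, starting at byte offset 'done',
         * into the flat bounce buffer. */
        copy_from_iovector(bounce, iovec, done, len);
        if (bdrv_write(bs, sector_num + done / 512, bounce, len / 512) < 0) {
            qemu_free(bounce);
            return -1;
        }
        done += len;
    }
    qemu_free(bounce);
    return 0;
}

That still pays for one allocation per request, but the size is bounded and 
the gigabyte case no longer tries to allocate a gigabyte of host memory.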

> +void *cpu_map_physical_page(target_phys_addr_t addr)
> +    /* DMA'ing to MMIO, just skip */
> +    phys_offset = cpu_get_physical_page_desc(addr);
> +    if ((phys_offset & ~TARGET_PAGE_MASK) != IO_MEM_RAM)
> +       return NULL;

This is not OK. It's fairly common for smaller devices to use a separate DMA 
engine that writes to an MMIO region. You also never check the return value of 
this function, so it will crash qemu.
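
To illustrate the second point, any caller has to look at the result before 
touching it, and MMIO needs a fallback rather than a silent skip. A rough 
sketch, where pci_dma_write() is an invented name, cpu_physical_memory_write() 
is assumed as the existing slow path, and 'len' is assumed not to cross a page 
boundary:

/* Sketch only: check the mapping before using it, and fall back to the
 * normal physical memory access path for MMIO targets (e.g. a device
 * DMA engine behind an MMIO window) instead of dereferencing NULL or
 * dropping the write. */
static void pci_dma_write(target_phys_addr_t addr, const uint8_t *buf,
                          size_t len)
{
    void *ptr = cpu_map_physical_page(addr);

    if (ptr) {
        /* assuming the returned pointer corresponds to 'addr' itself */
        memcpy(ptr, buf, len);
    } else {
        cpu_physical_memory_write(addr, buf, len);
    }
}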

> +void pci_device_dma_unmap(PCIDevice *s, const IOVector *orig,

This function should not exist.  Dirty bits should be set by the memcpy 
routines.
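
That is, the copy routine itself would mark the pages it writes, roughly along 
the lines of the sketch below. dma_memcpy_to_guest() is an invented name, and 
cpu_physical_memory_set_dirty() is assumed as the dirty-tracking hook (the 
real hook may want a ram offset rather than a guest physical address):

/* Sketch only: dirty marking happens as part of the copy, so there is
 * no separate unmap pass that has to remember which pages were written. */
static void dma_memcpy_to_guest(void *host_ptr, target_phys_addr_t guest_addr,
                                const uint8_t *buf, size_t len)
{
    target_phys_addr_t page;
    target_phys_addr_t end = guest_addr + len;

    memcpy(host_ptr, buf, len);

    for (page = guest_addr & TARGET_PAGE_MASK; page < end;
         page += TARGET_PAGE_SIZE) {
        cpu_physical_memory_set_dirty(page);    /* assumed hook */
    }
}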

Paul



