Re: [Qemu-devel] [PULL 075/118] macio: handle non-block ATAPI DMA transfers


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PULL 075/118] macio: handle non-block ATAPI DMA transfers
Date: Tue, 24 Jun 2014 13:22:30 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 24.06.2014 at 13:02, Alexander Graf wrote:
> The way DBDMA works is that you put in something similar to a
> scatter-gather list: a list of chunks to read/write and where in
> memory those chunks live. DBDMA then goes over its list and does the
> pokes. So, for example, if the list is
> 
>   [ memaddr = 0x12000 | len = 500 ]
>   [ memaddr = 0x13098 | len = 12 ]
> 
> then it reads 500 bytes from IDE, writes them at memory offset
> 0x12000, and after that reads another 12 bytes from IDE and puts them
> at memory offset 0x13098.
> 
> The reason we have such complicated code for real DMA is that we
> can't model this easily with our direct block-to-memory API. That
> one can only work at a 512-byte granularity. So when we see
> unaligned accesses like the ones above, we have to split them out
> and handle them lazily.

Wait... What kind of granularity are you talking about?

We do need disk accesses at a 512-byte granularity, because the API
takes a sector number. This is also what real IDE disks do; they don't
provide byte access.

However, for the memory side, I can't see why you couldn't pass an s/g
list like the one you wrote above to the DMA functions. This is not
unusual at all and is the same thing ide/pci.c does. There is no
512-byte alignment requirement for the individual s/g list entries;
only the total size should obviously be a multiple of 512 in the
general case (otherwise the list would be too short or too long for
the request).
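
For illustration, here is a minimal sketch of what such a list looks
like for your example (the sg_entry/sg_list names are made up for this
mail and are not QEMU's actual types; the real thing would be the
QEMUSGList built by the helpers in dma-helpers.c):

    #include <stdint.h>

    /* Hypothetical types, for illustration only. */
    struct sg_entry {
        uint64_t mem_addr;   /* guest memory address of this chunk */
        uint32_t len;        /* chunk length in bytes, no alignment needed */
    };

    struct sg_list {
        struct sg_entry entries[2];
        int count;
        uint32_t total_len;  /* should add up to a multiple of 512 */
    };

    /* The list from your example: 500 + 12 = 512 bytes, i.e. exactly one
     * 512-byte sector scattered across two memory chunks. */
    static const struct sg_list example = {
        .entries = {
            { .mem_addr = 0x12000, .len = 500 },
            { .mem_addr = 0x13098, .len = 12  },
        },
        .count     = 2,
        .total_len = 512,
    };

The disk still sees one sector-aligned request; only the memory side is
split into arbitrary byte-sized pieces.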

If this is really what we're talking about, then I think your problem
is just that you try to handle the 500 bytes and the 12 bytes as
individual requests instead of building up the s/g list and then
sending a single request.
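
Schematically (again with made-up names, reusing the sg_entry/sg_list
sketch from above; submit_dma_read() is a stand-in for whatever
actually issues the request, not a real QEMU function):

    /* Instead of one request per descriptor (500 bytes, then 12 bytes,
     * neither of which is a whole sector), accumulate the descriptors
     * into one s/g list and issue a single request for the total. */
    static void sg_add(struct sg_list *sg, uint64_t mem_addr, uint32_t len)
    {
        sg->entries[sg->count].mem_addr = mem_addr;
        sg->entries[sg->count].len      = len;
        sg->count++;
        sg->total_len += len;
    }

    static void dbdma_transfer(void)
    {
        struct sg_list sg = { .count = 0, .total_len = 0 };

        sg_add(&sg, 0x12000, 500);
        sg_add(&sg, 0x13098, 12);

        /* One request covering sg.total_len / 512 sectors, e.g.
         *   submit_dma_read(&sg, sector_num, sg.total_len / 512);
         * The DMA layer then scatters the data into the individual
         * memory chunks; the block layer only ever sees whole sectors. */
    }

This is essentially what ide/pci.c does when it walks the PRD table.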

Kevin


