Re: [Qemu-devel] about correctness of IDE emulation


From: John Snow
Subject: Re: [Qemu-devel] about correctness of IDE emulation
Date: Wed, 13 Apr 2016 14:07:25 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0


On 04/13/2016 03:25 AM, Huaicheng Li (coperd) wrote:
> 
>> On Mar 14, 2016, at 10:09 PM, Huaicheng Li <address@hidden> wrote:
>>
>>
>>> On Mar 13, 2016, at 8:42 PM, Fam Zheng <address@hidden> wrote:
>>>
>>> On Sun, 03/13 14:37, Huaicheng Li (coperd) wrote:
>>>> Hi all, 
>>>>
>>>> What I’m confused about is this:
>>>>
>>>> If one I/O is too large and needs several rounds (say 2) of DMA
>>>> transfers, it seems the second round begins only after the completion
>>>> of the first part, by reading data from **IDEState**. But the IDEState
>>>> info may have been changed by VCPU threads (by writing new I/Os to it)
>>>> when the first transfer finishes. From the code, I see that the IDE
>>>> r/w callback function will continue the second transfer by referencing
>>>> IDEState’s information. Wouldn’t this be problematic? Am I missing
>>>> anything here?
>>>
>>> Can you give a concrete example? I/O in VCPU threads that changes IDEState
>>> must also take care of the DMA transfers; for example, ide_reset() has
>>> blk_aio_cancel and clears s->nsectors. If an I/O handler fails to do so,
>>> it is a bug.
>>>
>>> Fam
>>
>> I get it now. ide_exec_cmd() can only proceed when BUSY_STAT|DRQ_STAT is
>> not set. When the 2nd DMA transfer continues, BUSY_STAT|DRQ_STAT is
>> already set, i.e., no new ide_exec_cmd() can enter. BUSY or DRQ is
>> cleared only when all DMA transfers are done, after which new writes to
>> the IDE device are allowed. Thus it’s safe.
>>
>> Thanks, Fam & Stefan.
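
For illustration, here is a minimal, self-contained sketch of that gating.
It only models the idea -- the real check sits at the top of ide_exec_cmd()
in hw/ide/core.c -- and everything other than the status bit values and the
READ DMA opcode is invented for the example:

/*
 * Sketch only: a command written while BSY or DRQ is set is simply
 * ignored, so an in-flight multi-round DMA transfer cannot be disturbed
 * by a new command.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { DRQ_STAT = 0x08, BUSY_STAT = 0x80 };   /* ATA status bits */

typedef struct {
    uint8_t status;
} FakeIDEState;

/* Returns true if the command was accepted. */
static bool exec_cmd(FakeIDEState *s, uint8_t cmd)
{
    if (s->status & (BUSY_STAT | DRQ_STAT)) {
        return false;               /* device busy: command dropped */
    }
    s->status |= BUSY_STAT;         /* command accepted, transfer begins */
    (void)cmd;
    return true;
}

int main(void)
{
    FakeIDEState s = { .status = 0 };

    printf("1st command accepted: %d\n", exec_cmd(&s, 0xc8)); /* READ DMA */
    printf("2nd command accepted: %d\n", exec_cmd(&s, 0xc8)); /* dropped  */

    s.status &= ~BUSY_STAT;         /* all DMA rounds done */
    printf("3rd command accepted: %d\n", exec_cmd(&s, 0xc8));
    return 0;
}

The second command is rejected because BUSY_STAT is still set; only after
the last DMA round clears the status can a new command go through.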
> 
> Hi all, I have some further puzzles about IDE emulation:
> 
>   (1). IDE can only handle I/Os one by one. So in the AIO queue there will
> always be only **ONE** I/O from this IDE, right? For big I/Os which need to
> be split into several rounds of DMA transfers, they are also served one by
> one (after one DMA transfer [as an AIO] is finished, another DMA transfer
> will be submitted, and so on). Here I want to convey that there is no batch
> submission in the IDE path at all. True?

Correct. In general, DMA requests are fulfilled all at once, so each read
request to the IDE device is processed as one giant DMA request.

I believe ATAPI DMA requests might be split into 2048-byte chunks, though.

>   (2). When the guest kernel prepares to do a big I/O which needs multiple
> rounds of DMA transfers, will each DMA transfer round (one PRD entry) be
> trapped and trigger one round of IDE emulation, or will IDE handle all the
> PRDs in one shot?

The IDE emulator does not attempt to process the PRDs individually; it
builds an SGList that is passed down through the AIO stack and eventually
to Linux.

I'm not sure how Linux decides to process contiguous vs. noncontiguous
PRD entries.

The IDE emulator, however, does not iterate per-PRD except to build the
SGList. When the AIOCB is invoked, IDE expects that all the PRDs it
submitted were handled.

(For instance, AHCI has a per-PRD flag indicating that an interrupt should
be signalled after *this PRD* has been processed. Unfortunately, there is
currently no way to detect that point in QEMU, so I believe we ignore this
flag for now. AHCI describes this as an "opportunistic interrupt.")
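
As a rough, self-contained sketch of that idea (not the actual QEMU code --
the real walk is bmdma_prepare_buf() in hw/ide/pci.c, which reads the PRDs
with pci_dma_read() and feeds qemu_sglist_add(); the types and names below
are invented):

/*
 * Walk a BMDMA-style PRD table once, turning every entry into one
 * scatter-gather segment, and return the total byte count.  The whole
 * list is then handed to the block layer as a single request; no per-PRD
 * I/O is issued.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t addr;          /* physical base address of the buffer       */
    uint32_t size_and_eot;  /* bits 1-15: byte count, bit 31: last entry */
} PRDEntry;

typedef struct {
    uint64_t base;
    uint64_t len;
} SGSegment;

static uint64_t build_sglist(const PRDEntry *prd, size_t max_entries,
                             SGSegment *sg, size_t *nsegs)
{
    uint64_t total = 0;
    *nsegs = 0;

    for (size_t i = 0; i < max_entries; i++) {
        uint32_t len = prd[i].size_and_eot & 0xfffe;
        if (len == 0) {
            len = 0x10000;              /* a zero count means 64 KiB */
        }
        sg[*nsegs].base = prd[i].addr;
        sg[*nsegs].len  = len;
        (*nsegs)++;
        total += len;
        if (prd[i].size_and_eot & 0x80000000) {
            break;                      /* end-of-table bit set */
        }
    }
    return total;
}

int main(void)
{
    /* Two PRDs: 64 KiB at 0x100000, then 4 KiB at 0x200000 (last entry). */
    PRDEntry table[] = {
        { 0x100000, 0x0000 },
        { 0x200000, 0x80001000 },
    };
    SGSegment sg[8];
    size_t nsegs;

    uint64_t bytes = build_sglist(table, 2, sg, &nsegs);
    printf("%zu segments, %llu bytes submitted as one request\n",
           nsegs, (unsigned long long)bytes);
    return 0;
}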

>   (3). I traced the execution of my guest application with big I/Os (each
> read is 2MB), and in the IDE layer I found that each one is split into
> 512KB chunks per DMA transfer. Why 512KB? From the BMDMA spec, the PRD
> table can represent at most 64KB/8bytes = 8192 buffers, each of which can
> be an at most 64KB contiguous buffer. This would give us 8192*64KB = 512MB
> for each DMA.
> 

The splitting you're seeing could be occurring in lots of different
places -- your host OS, QEMU's AIO handling itself, or the guest OS.
It's *not* happening in the IDE emulator, though.

The IDE emulator itself does not attempt to split requests into 512KB
chunks -- you can test this yourself by putting a tracer in dma_cb() in
core.c to see how many bytes IDE is requesting at a time -- I was able to
ask for 1025 sectors in one shot using a modified version of
tests/ide-test.

You can put a tracer in cmd_read_dma as well to see how many sectors the
guest is requesting from the IDE device at a time.
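
If it helps, the tracer can be as simple as an fprintf() at the top of the
DMA callback -- a sketch, assuming ide_dma_cb() in hw/ide/core.c still takes
the IDEState as its opaque argument (field names are from memory and may
differ between QEMU versions):

/* Sketch only: a throwaway debugging print dropped into hw/ide/core.c. */
static void ide_dma_cb(void *opaque, int ret)
{
    IDEState *s = opaque;

    /* how many sectors is the emulator about to transfer this round? */
    fprintf(stderr, "ide_dma_cb: ret=%d sector_num=%" PRId64 " nsector=%d\n",
            ret, s->sector_num, (int)s->nsector);

    /* ... existing callback body continues unchanged ... */
}

The same one-liner at the top of cmd_read_dma() shows what the guest asked
for before anything else in the stack has had a chance to split it.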

> Am I missing anything here?  
> 

Why do you want to use IDE? If you are looking for performance, why not
a virtio device?

> Thanks for your attention.
> 
> Best,
> Huaicheng
> 
> 

--js


