From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH] use bdrv_aio functions in fdc.c
Date: Thu, 22 Mar 2012 10:15:15 +0000

On Thu, Mar 22, 2012 at 7:06 AM, liu ping fan <address@hidden> wrote:
> On Tue, Mar 20, 2012 at 9:56 PM, Stefan Hajnoczi
> <address@hidden> wrote:
>> On Mon, Mar 19, 2012 at 05:17:15PM +0800, Li Zhi Hui wrote:
>>> @@ -1196,107 +1322,108 @@ static int fdctrl_transfer_handler (void *opaque, int nchan,
>>>          return 0;
>>>      }
>>>      cur_drv = get_cur_drv(fdctrl);
>>> -    if (fdctrl->data_dir == FD_DIR_SCANE || fdctrl->data_dir == FD_DIR_SCANL ||
>>> -        fdctrl->data_dir == FD_DIR_SCANH)
>>> +    if ((fdctrl->data_dir == FD_DIR_SCANE) ||
>>> +        (fdctrl->data_dir == FD_DIR_SCANL) ||
>>> +        (fdctrl->data_dir == FD_DIR_SCANH)) {
>>>          status2 = FD_SR2_SNS;
>>> -    if (dma_len > fdctrl->data_len)
>>> +    }
>>> +    if (dma_len > fdctrl->data_len) {
>>>          dma_len = fdctrl->data_len;
>>> +    }
>>>      if (cur_drv->bs == NULL) {
>>> -        if (fdctrl->data_dir == FD_DIR_WRITE)
>>> -            fdctrl_stop_transfer(fdctrl, FD_SR0_ABNTERM | FD_SR0_SEEK, 0x00, 0x00);
>>> -        else
>>> +        if (fdctrl->data_dir == FD_DIR_WRITE) {
>>> +            fdctrl_stop_transfer(fdctrl,
>>> +                FD_SR0_ABNTERM | FD_SR0_SEEK, 0x00, 0x00);
>>> +        } else {
>>>              fdctrl_stop_transfer(fdctrl, FD_SR0_ABNTERM, 0x00, 0x00);
>>> +        }
>>>          len = 0;
>>>          goto transfer_error;
>>>      }
>>> +
>>> +    if ((fdctrl->data_dir != FD_DIR_WRITE) && (fdctrl->data_pos < dma_len)) {
>>> +        fdc_sector_num = (dma_len + FD_SECTOR_LEN - 1) / FD_SECTOR_LEN;
>>> +        opaque_cb = g_malloc0(sizeof(FDC_opaque));
>>
>> I think we can only have 1 I/O pending at a time.  Therefore it's
>> probably not necessary to create a separate struct; we can just pass
>> the FDrive/FDCtrl.
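>>
>> Something like this, as a rough sketch (untested; assumes nchan,
>> dma_len, and qiov become FDCtrl fields):
>>
>>     fdctrl->nchan = nchan;
>>     fdctrl->dma_len = dma_len;
>>     bdrv_aio_readv(cur_drv->bs, fd_sector(cur_drv), &fdctrl->qiov,
>>                    fdc_sector_num, fdctrl_read_DMA_cb, fdctrl);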
>>
>>> +        qiov = g_malloc0(sizeof(QEMUIOVector));
>>
>> This is leaked.  I think it can be a field in opaque_cb; there's no
>> need to allocate it separately on the heap.
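>>
>> If a separate struct is kept at all, a sketch (field names made up):
>>
>>     typedef struct FDCOpaque {
>>         FDCtrl *fdctrl;
>>         QEMUIOVector qiov;     /* embedded, no separate g_malloc0() */
>>         struct iovec iov;
>>         int nchan;
>>         int dma_len;
>>     } FDCOpaque;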
>>
>>> +        pfdc_string = g_malloc0(fdc_sector_num * FD_SECTOR_LEN);
>>
>> Looks like this buffer is leaked.  A block I/O buffer should be
>> allocated with qemu_blockalign() instead of g_malloc0() so that memory
>> alignment requirements for O_DIRECT image files are met.
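>>
>> I.e. something like (sketch):
>>
>>     /* aligned to satisfy O_DIRECT requirements for this image file */
>>     uint8_t *buf = qemu_blockalign(cur_drv->bs,
>>                                    fdc_sector_num * FD_SECTOR_LEN);
>>
>> with the completion callback releasing it via qemu_vfree(), not
>> g_free().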
>>
>> Would it be possible to use the fifo[] buffer instead of allocating a
>> new buffer?
>>
>>> +
>>> +        qemu_iovec_init(qiov, 1);
>>> +        qiov->iov->iov_base = pfdc_string;
>>> +        qiov->iov->iov_len = fdc_sector_num * FD_SECTOR_LEN;
>>> +        qiov->niov = 1;
>>> +        qiov->size = fdc_sector_num * FD_SECTOR_LEN;
>>
>> The easiest way to do this is:
>>
>> iov.iov_base = fifo;
>> iov.iov_len  = fdc_sector_num * FD_SECTOR_LEN;
>> qemu_iovec_init_external(qiov, &iov, 1);
>>
>> We shouldn't duplicate the qiov->size calculation - that's already
>> provided by qemu_iovec_init_external() or qemu_iovec_add().
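>>
>> (qemu_iovec_init() plus qemu_iovec_add() would work too:
>>
>>     qemu_iovec_init(qiov, 1);
>>     qemu_iovec_add(qiov, fifo, fdc_sector_num * FD_SECTOR_LEN);
>>
>> but init_external avoids the extra iov array allocation when there is
>> only ever one element.)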
>>
>>> +        opaque_cb->fdctrl = fdctrl;
>>> +        opaque_cb->qiov = qiov;
>>> +        opaque_cb->nchan = nchan;
>>> +        opaque_cb->dma_len = dma_len;
>>> +        bdrv_aio_readv(cur_drv->bs, fd_sector(cur_drv),
>>> +            qiov, fdc_sector_num, fdctrl_read_DMA_cb, opaque_cb);
>>> +        return dma_len;
>>
>> We are returning dma_len but the I/O has not yet happened.  The guest
>> could see that the DMA controller register has been updated before
>> we've actually transferred the data.  This seems risky.
>>
> I think that fdctrl_stop_transfer() updates the FDCtrl state, so the
> guest driver will not see the update until then.

You're referring to the fdc device state but I meant the ISA DMA
controller registers.  hw/dma.c:read_chan() will expose the
incremented value to the guest before we've transferred any data.  I
think we shouldn't increment the DMA transfer byte count before it has
completed.
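
As a rough sketch of what I mean, the DMA bookkeeping could move into
the completion callback (untested; assumes dma_len was stashed in
FDCtrl when the request was submitted):

    static void fdctrl_read_DMA_cb(void *opaque, int ret)
    {
        FDCtrl *fdctrl = opaque;

        if (ret < 0) {
            fdctrl_stop_transfer(fdctrl, FD_SR0_ABNTERM | FD_SR0_SEEK,
                                 0x00, 0x00);
            return;
        }
        /* Only now has the data really been read, so only now should
         * the guest be able to observe the transferred byte count. */
        fdctrl->data_pos += fdctrl->dma_len;  /* dma_len: assumed field */
        fdctrl_stop_transfer(fdctrl, 0x00, 0x00, 0x00);
    }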

Stefan


