From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC PATCH v1 2/4] m25p80: initial version
Date: Tue, 3 Apr 2012 08:03:32 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Mar 30, 2012 at 04:37:11PM +1000, Peter A. G. Crosthwaite wrote:
> +static void flash_sync_page(struct flash *s, int page)
> +{
> +    if (s->bdrv) {
> +        int bdrv_sector;
> +        int offset;
> +
> +        bdrv_sector = (page * s->pagesize) / 512;
> +        offset = bdrv_sector * 512;
> +        bdrv_write(s->bdrv, bdrv_sector,
> +                   s->storage + offset, (s->pagesize + 511) / 512);

Devices should not use synchronous block I/O interfaces.  sd, flash, and
a couple of others still do for historical reasons, but new devices
should not.

The vcpu, QEMU monitor, and VNC are all blocked while the synchronous
I/O takes place.  This can be avoided by using bdrv_aio_writev() instead.

Can you change this code to use bdrv_aio_writev(), or is this flash
device specified to complete operations within certain time constraints?
(The problem is that the image file could be on a slow hard disk or other
media that don't meet those timing requirements.)
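
Something like this rough, untested sketch could work.  It assumes a
QEMUIOVector field named qiov is added to struct flash and a completion
callback is introduced, neither of which is in the patch as posted:

static void flash_sync_cb(void *opaque, int ret)
{
    struct flash *s = opaque;

    if (ret < 0) {
        /* report/track the error as appropriate for the device model */
    }
    qemu_iovec_destroy(&s->qiov);
}

static void flash_sync_page(struct flash *s, int page)
{
    if (s->bdrv) {
        int64_t bdrv_sector = (page * s->pagesize) / 512;

        /* Note: a real implementation must also cope with a second sync
         * being issued while the first is still in flight, since this
         * reuses the single qiov embedded in the device state. */
        qemu_iovec_init(&s->qiov, 1);
        qemu_iovec_add(&s->qiov, s->storage + bdrv_sector * 512,
                       s->pagesize);
        bdrv_aio_writev(s->bdrv, bdrv_sector, &s->qiov,
                        (s->pagesize + 511) / 512,
                        flash_sync_cb, s);
    }
}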

> +static int m25p80_init(SPISlave *ss)
> +{
> +    DriveInfo *dinfo;
> +    struct flash *s = FROM_SPI_SLAVE(struct flash, ss);
> +    /* FIXME: This should be handled centrally!  */
> +    static int mtdblock_idx;
> +    dinfo = drive_get(IF_MTD, 0, mtdblock_idx++);
> +
> +    DB_PRINT("inited m25p80 device model - dinfo = %p\n", dinfo);
> +    /* TODO: parameterize */
> +    s->size = 8 * 1024 * 1024;
> +    s->pagesize = 256;
> +    s->sectorsize = 4 * 1024;
> +    s->dirty_page = -1;
> +    s->storage = g_malloc0(s->size);

Please use qemu_blockalign(s->bdrv, s->size) to allocate I/O buffers.
It honors memory alignment requirements (necessary for O_DIRECT files).
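For example (assuming s->bdrv has already been set up from dinfo by this
point; note that qemu_blockalign() does not zero the buffer the way
g_malloc0() does, so clear it explicitly):

    s->storage = qemu_blockalign(s->bdrv, s->size);
    memset(s->storage, 0, s->size);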

Stefan


