From: Stefano Stabellini
Subject: [Qemu-devel] [PATCH] honor IDE_DMA_BUF_SECTORS
Date: Wed, 25 Mar 2009 13:45:53 +0000
User-agent: Thunderbird 2.0.0.14 (X11/20080505)

Hi all,
after the recent introduction of dma_buf_prepare we stopped honoring
IDE_DMA_BUF_SECTORS: the guest can now issue DMA requests whose total
length exceeds IDE_DMA_BUF_SECTORS sectors.
This patch puts the IDE_DMA_BUF_SECTORS limit back in place.
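
To make the clamping rule easier to see in isolation, here is a minimal
standalone sketch (plain C, not QEMU code) of the check the second hunk
adds; the function name clamp_prd_len and the demo values are made up
for illustration:

#include <stdio.h>

#define IDE_DMA_BUF_SECTORS 256
#define IDE_DMA_BUF_BYTES   (IDE_DMA_BUF_SECTORS * 512)

/* Return how many bytes of a PRD entry of length 'prd_len' may still be
 * added to a scatter-gather list that already holds 'io_buffer_size'
 * bytes, so that the total never exceeds IDE_DMA_BUF_BYTES. */
static int clamp_prd_len(int io_buffer_size, int prd_len)
{
    if (prd_len > IDE_DMA_BUF_BYTES)
        prd_len = IDE_DMA_BUF_BYTES;
    if (io_buffer_size + prd_len > IDE_DMA_BUF_BYTES)
        prd_len = IDE_DMA_BUF_BYTES - io_buffer_size;
    return prd_len;
}

int main(void)
{
    /* A 200 KiB PRD entry against an empty buffer is cut down to the
     * 128 KiB (256 * 512 byte) budget. */
    printf("%d\n", clamp_prd_len(0, 200 * 1024));         /* 131072 */
    /* With 100 KiB already queued, only the remaining 28 KiB fit. */
    printf("%d\n", clamp_prd_len(100 * 1024, 64 * 1024)); /* 28672 */
    return 0;
}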

Comments are welcome.

Signed-off-by: Stefano Stabellini <address@hidden>

---

diff --git a/hw/ide.c b/hw/ide.c
index 96bc176..ef1356d 100644
--- a/hw/ide.c
+++ b/hw/ide.c
@@ -207,6 +207,7 @@
 #define MAX_MULT_SECTORS 16
 
 #define IDE_DMA_BUF_SECTORS 256
+#define IDE_DMA_BUF_BYTES (IDE_DMA_BUF_SECTORS * 512)
 
 #if (IDE_DMA_BUF_SECTORS < MAX_MULT_SECTORS)
 #error "IDE_DMA_BUF_SECTORS must be bigger or equal to MAX_MULT_SECTORS"
@@ -877,9 +878,10 @@ static int dma_buf_prepare(BMDMAState *bm, int is_write)
         uint32_t addr;
         uint32_t size;
     } prd;
-    int l, len;
+    int l, len, n;
 
-    qemu_sglist_init(&s->sg, s->nsector / (TARGET_PAGE_SIZE/512) + 1);
+    n = s->nsector <= IDE_DMA_BUF_SECTORS ? s->nsector : IDE_DMA_BUF_SECTORS;
+    qemu_sglist_init(&s->sg, n / (TARGET_PAGE_SIZE/512) + 1);
     s->io_buffer_size = 0;
     for(;;) {
         if (bm->cur_prd_len == 0) {
@@ -900,6 +902,13 @@ static int dma_buf_prepare(BMDMAState *bm, int is_write)
         }
         l = bm->cur_prd_len;
         if (l > 0) {
+            if (l > IDE_DMA_BUF_BYTES)
+                l = IDE_DMA_BUF_BYTES;
+            if (s->io_buffer_size + l > IDE_DMA_BUF_BYTES) {
+                l = IDE_DMA_BUF_BYTES - s->io_buffer_size;
+                if (!l)
+                    return s->io_buffer_size != 0;
+            }
             qemu_sglist_add(&s->sg, bm->cur_prd_addr, l);
             bm->cur_prd_addr += l;
             bm->cur_prd_len -= l;



