
Re: [Qemu-devel] [PATCH v2] net: mcf: limit buffer descriptor count


From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH v2] net: mcf: limit buffer descriptor count
Date: Fri, 23 Sep 2016 13:23:30 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0



On 22/09/2016 18:33, Paolo Bonzini wrote:

On 22/09/2016 12:32, P J P wrote:
From: Prasad J Pandit <address@hidden>

The ColdFire Fast Ethernet Controller uses buffer descriptors to manage
data flow to/from its receive and transmit queues. While transmitting
packets, it could continue to read buffer descriptors indefinitely if a
descriptor has a length of zero and crafted values in bd.flags. Set an
upper limit on the number of buffer descriptors.

Reported-by: Li Qiang <address@hidden>
Signed-off-by: Prasad J Pandit <address@hidden>
---
  hw/net/mcf_fec.c | 5 +++--
  1 file changed, 3 insertions(+), 2 deletions(-)

Update per
   -> https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg05284.html
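
For illustration, here is a stand-alone sketch of the failure mode the patch
addresses. It is not QEMU code: the four-entry ring model is hypothetical, and
the FEC_BD_R/FEC_BD_W values follow the usual Freescale FEC flag layout but are
used here only for the demonstration. A guest that fills memory with
zero-length, always-ready descriptors whose last entry wraps back to the start
would keep the transmit walk spinning forever; bounding the walk at
FEC_MAX_DESC iterations guarantees termination.

/* Stand-alone model of the bounded descriptor walk (illustrative only). */
#include <stdint.h>
#include <stdio.h>

#define FEC_MAX_DESC 1024
#define FEC_BD_R     0x8000   /* "ready" flag */
#define FEC_BD_W     0x2000   /* "wrap" flag: last descriptor in the ring */

typedef struct {
    uint16_t flags;
    uint16_t length;
    uint32_t data;
} fec_bd;

int main(void)
{
    /* Guest-controlled ring: every descriptor is "ready" with length
     * zero, and the last one wraps back to the start, so an unbounded
     * walk would never exit. */
    fec_bd ring[4];
    for (int i = 0; i < 4; i++) {
        ring[i].flags = FEC_BD_R | (i == 3 ? FEC_BD_W : 0);
        ring[i].length = 0;
        ring[i].data = 0;
    }

    unsigned addr = 0, descnt = 0;
    while (descnt++ < FEC_MAX_DESC) {       /* the fix: a bounded walk */
        fec_bd bd = ring[addr];
        if (!(bd.flags & FEC_BD_R)) {
            break;                          /* descriptor not ready: stop */
        }
        /* ...a real device model would copy bd.length bytes here... */
        addr = (bd.flags & FEC_BD_W) ? 0 : addr + 1;
    }
    printf("walk terminated after %u descriptors\n", descnt - 1);
    return 0;
}

With the old while (1) loop, this ring never lets the walk exit; with the cap,
it stops after FEC_MAX_DESC iterations, matching the change in the diff below.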

diff --git a/hw/net/mcf_fec.c b/hw/net/mcf_fec.c
index 7c0398e..6d3418e 100644
--- a/hw/net/mcf_fec.c
+++ b/hw/net/mcf_fec.c
@@ -23,6 +23,7 @@ do { printf("mcf_fec: " fmt , ## __VA_ARGS__); } while (0)
  #define DPRINTF(fmt, ...) do {} while(0)
  #endif

+#define FEC_MAX_DESC 1024
  #define FEC_MAX_FRAME_SIZE 2032

  typedef struct {
@@ -149,7 +150,7 @@ static void mcf_fec_do_tx(mcf_fec_state *s)
      uint32_t addr;
      mcf_fec_bd bd;
      int frame_size;
-    int len;
+    int len, descnt = 0;
      uint8_t frame[FEC_MAX_FRAME_SIZE];
      uint8_t *ptr;
@@ -157,7 +158,7 @@ static void mcf_fec_do_tx(mcf_fec_state *s)
      ptr = frame;
      frame_size = 0;
      addr = s->tx_descriptor;
-    while (1) {
+    while (descnt++ < FEC_MAX_DESC) {
          mcf_fec_read_bd(&bd, addr);
          DPRINTF("tx_bd %x flags %04x len %d data %08x\n",
                  addr, bd.flags, bd.length, bd.data);

Reviewed-by: Paolo Bonzini <address@hidden>

Applied, thanks.


