Re: [Qemu-devel] [PATCH v3] net: Adding netmap network backend


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v3] net: Adding netmap network backend
Date: Mon, 4 Nov 2013 17:35:02 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Oct 29, 2013 at 11:12:25AM +0100, Vincenzo Maffione wrote:

Looks pretty good, I think we can merge the next revision.

> This patch only contains the simplest netmap backend for QEMU.
> In particular, this backend implementation is still not
> able to make use of batching on the TX side (frontend -> backend),
> which is where most of the TX performance gain comes from.
> As you can see from the code, there is an ioctl(NIOCTXSYNC) for each
> packet, instead of an ioctl(NIOCTXSYNC) for a batch of packets.
> In order to make TX batching possible, we would need to do some
> modifications to the generic net/net.c code, adding to the
> frontend/backend datapath interface a way to send a batch (this can
> be done using a QEMU_NET_PACKET_FLAG_MORE, without changing the
> existing interface too much).
> We will propose these features in future patches.

QEMU_NET_PACKET_FLAG_MORE sounds like a good idea.
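
Just to sketch what I have in mind (purely hypothetical:
QEMU_NET_PACKET_FLAG_MORE does not exist yet, and how the flag would
reach the backend is still an open question), the backend could then
defer the sync until the last packet of a batch:

    /* In netmap_receive_iov(), if the datapath passed down a "more
     * packets follow" hint in a flags argument: */
    if (!(flags & QEMU_NET_PACKET_FLAG_MORE)) {
        /* Last packet of the batch: flush the whole TX ring with a
         * single syscall instead of one NIOCTXSYNC per packet. */
        ioctl(s->me.fd, NIOCTXSYNC, NULL);
    }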

> +#include "net/net.h"
> +#include "clients.h"
> +#include "sysemu/sysemu.h"
> +#include "qemu/error-report.h"
> +
> +#include <sys/ioctl.h>
> +#include <net/if.h>
> +#include <sys/mman.h>
> +#include <net/netmap.h>
> +#include <net/netmap_user.h>
> +#include <qemu/iov.h>

Please include system headers first, then QEMU headers.  This way QEMU
headers cannot interfere with system headers if they define macros.

Also, <qemu/iov.h> should be "qemu/iov.h":

#include <sys/ioctl.h>
#include <net/if.h>
#include <sys/mman.h>
#include <net/netmap.h>
#include <net/netmap_user.h>

#include "net/net.h"
#include "clients.h"
#include "sysemu/sysemu.h"
#include "qemu/error-report.h"
#include "qemu/iov.h"

> +/*
> + * Open a netmap device. We assume there is only one queue
> + * (which is the case for the VALE bridge).
> + */
> +static int netmap_open(NetmapPriv *me)
> +{
> +    int fd;
> +    int err;
> +    size_t l;
> +    struct nmreq req;
> +
> +    me->fd = fd = open(me->fdname, O_RDWR);
> +    if (fd < 0) {
> +        error_report("Unable to open netmap device '%s'", me->fdname);
> +        return -1;

This error message is not detailed enough, please include the strerror(3) text.
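
Something along these lines, say (just a sketch; assumes <errno.h> and
<string.h> are included for errno/strerror(3)):

    me->fd = fd = open(me->fdname, O_RDWR);
    if (fd < 0) {
        /* Report which device failed and why (errno string). */
        error_report("Unable to open netmap device '%s': %s",
                     me->fdname, strerror(errno));
        return -1;
    }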

> +    }
> +    bzero(&req, sizeof(req));

Please use memset(&req, 0, sizeof(req)).  Some compilers/tools warn that
bzero() is deprecated.  For more info, see:
http://pubs.opengroup.org/onlinepubs/009695399/functions/bzero.html

> +    pstrcpy(req.nr_name, sizeof(req.nr_name), me->ifname);
> +    req.nr_ringid = NETMAP_NO_TX_POLL;
> +    req.nr_version = NETMAP_API;
> +    err = ioctl(fd, NIOCREGIF, &req);
> +    if (err) {
> +        error_report("Unable to register %s", me->ifname);

Lacking errno information here too.

> +        goto error;
> +    }
> +    l = me->memsize = req.nr_memsize;
> +
> +    me->mem = mmap(0, l, PROT_WRITE | PROT_READ, MAP_SHARED, fd, 0);
> +    if (me->mem == MAP_FAILED) {
> +        error_report("Unable to mmap");

Lacking errno information here too.

> +static ssize_t netmap_receive_iov(NetClientState *nc,
> +                    const struct iovec *iov, int iovcnt)
> +{
> +    NetmapState *s = DO_UPCAST(NetmapState, nc, nc);
> +    struct netmap_ring *ring = s->me.tx;
> +
> +    if (ring) {

Feel free to reduce indentation by returning early:

if (!ring) {
    return iov_size(iov, iovcnt); /* drop */
}

> +        uint32_t i = 0;
> +        uint32_t idx;
> +        uint8_t *dst;
> +        int j;
> +        uint32_t cur = ring->cur;
> +        uint32_t avail = ring->avail;
> +        int iov_frag_size;
> +        int nm_frag_size;
> +        int offset;
> +
> +        if (avail < iovcnt) {
> +            /* Not enough netmap slots. */
> +            netmap_write_poll(s, true);
> +            return 0;
> +        }
> +
> +        for (j = 0; j < iovcnt; j++) {
> +            iov_frag_size = iov[j].iov_len;
> +            offset = 0;
> +
> +            /* Split each iovec fragment over more netmap slots, if
> +               necessary (without performing data copy). */

I don't understand this comment; we always perform a data copy.

> +            while (iov_frag_size) {
> +                nm_frag_size = MIN(iov_frag_size, ring->nr_buf_size);
> +
> +                if (unlikely(avail == 0)) {
> +                    /* We run out of netmap slots while splitting the
> +                       iovec fragments. */
> +                    return 0;

netmap_write_poll(s, true) missing?
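
I.e. something like this (sketch, mirroring the avail < iovcnt case
above):

                if (unlikely(avail == 0)) {
                    /* We ran out of netmap slots while splitting the
                       iovec fragments: ask to be notified when slots
                       become available again, then retry. */
                    netmap_write_poll(s, true);
                    return 0;
                }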


