From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH V2] net: Fix dealing with packets when runstate changes
Date: Fri, 22 Aug 2014 15:40:15 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.0
On 08/21/2014 08:39 PM, zhanghailiang wrote:
> For all NICs (except virtio-net) emulated by qemu,
> such as e1000, rtl8139, pcnet and ne2k_pci,
> qemu can still receive packets when the VM is not running.
> If this happens in *migration's* final PAUSE-VM stage,
> the new dirty RAM related to those packets will be missed,
> and this will lead to a serious network fault in the VM.
>
> To avoid this, we forbid receiving packets in the generic net code when
> the VM is not running. Also, when the runstate changes back to running,
> we definitely need to flush the queues to get packets flowing again.
Hi:

Can you describe what will happen if you don't flush the queues after the VM
is started? Btw, the notifier dependency and the impact on vhost should be
mentioned here as well.
>
> Here we implement this in the net layer:
> (1) Check the VM runstate in qemu_can_send_packet
> (2) Add a member 'VMChangeStateEntry *vmstate' to struct NICState,
> which will listen for VM runstate changes.
> (3) Register a handler function for VM runstate changes.
> When the VM changes back to running, we flush all queues in the callback function.
> (4) Remove the VM state check in virtio_net_can_receive
>
> Signed-off-by: zhanghailiang <address@hidden>
> ---
> v2:
> - remove the superfluous check of nc->received_disabled
> ---
> hw/net/virtio-net.c | 4 ----
> include/net/net.h | 2 ++
> net/net.c | 31 +++++++++++++++++++++++++++++++
> 3 files changed, 33 insertions(+), 4 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 268eff9..287d762 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -839,10 +839,6 @@ static int virtio_net_can_receive(NetClientState *nc)
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
>      VirtIONetQueue *q = virtio_net_get_subqueue(nc);
>
> -    if (!vdev->vm_running) {
> -        return 0;
> -    }
> -
>      if (nc->queue_index >= n->curr_queues) {
>          return 0;
>      }
> diff --git a/include/net/net.h b/include/net/net.h
> index ed594f9..a294277 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -8,6 +8,7 @@
> #include "net/queue.h"
> #include "migration/vmstate.h"
> #include "qapi-types.h"
> +#include "sysemu/sysemu.h"
>
> #define MAX_QUEUE_NUM 1024
>
> @@ -96,6 +97,7 @@ typedef struct NICState {
>      NICConf *conf;
>      void *opaque;
>      bool peer_deleted;
> +    VMChangeStateEntry *vmstate;
> } NICState;
>
> NetClientState *qemu_find_netdev(const char *id);
> diff --git a/net/net.c b/net/net.c
> index 6d930ea..113a37b 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -242,6 +242,28 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
>      return nc;
>  }
>
> +static void nic_vmstate_change_handler(void *opaque,
> +                                       int running,
> +                                       RunState state)
> +{
> +    NICState *nic = opaque;
> +    NetClientState *nc;
> +    int i, queues;
> +
> +    if (!running) {
> +        return;
> +    }
> +
> +    queues = MAX(1, nic->conf->peers.queues);
> +    for (i = 0; i < queues; i++) {
> +        nc = &nic->ncs[i];
> +        if (nc->info->can_receive && !nc->info->can_receive(nc)) {
> +            continue;
> +        }
Why not just use qemu_can_send_packet() here?
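Something like the following, as an untested sketch (whether nc->peer is the
right NetClientState to pass is my assumption, since nc->incoming_queue holds
the packets waiting to be delivered to nc):

    queues = MAX(1, nic->conf->peers.queues);
    for (i = 0; i < queues; i++) {
        nc = &nic->ncs[i];
        /* qemu_can_send_packet(nc->peer) asks whether nc can take a packet
         * right now (its can_receive callback plus receive_disabled). */
        if (nc->peer && !qemu_can_send_packet(nc->peer)) {
            continue;
        }
        qemu_flush_queued_packets(nc);
    }

That way the loop also picks up any extra checks qemu_can_send_packet() may
grow later, instead of open-coding only the can_receive part.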
> +        qemu_flush_queued_packets(nc);
> +    }
> +}
> +
>  NICState *qemu_new_nic(NetClientInfo *info,
>                         NICConf *conf,
>                         const char *model,
> @@ -259,6 +281,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
>      nic->ncs = (void *)nic + info->size;
>      nic->conf = conf;
>      nic->opaque = opaque;
> +    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
> +                                                     nic);
>
>      for (i = 0; i < queues; i++) {
>          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
> @@ -379,6 +403,7 @@ void qemu_del_nic(NICState *nic)
>          qemu_free_net_client(nc);
>      }
>
> +    qemu_del_vm_change_state_handler(nic->vmstate);
>      g_free(nic);
>  }
>
> @@ -452,6 +477,12 @@ void qemu_set_vnet_hdr_len(NetClientState *nc, int len)
>
>  int qemu_can_send_packet(NetClientState *sender)
>  {
> +    int vmstat = runstate_is_running();
> +
> +    if (!vmstat) {
> +        return 0;
> +    }
I think you want "vmstart" here?
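FWIW, the temporary could also be dropped and the check folded in directly,
e.g. (untested sketch, keeping the rest of the function as it is):

    int qemu_can_send_packet(NetClientState *sender)
    {
        /* Drop everything while the VM is not running; the queues are
         * flushed again by the vmstate change handler on resume. */
        if (!runstate_is_running()) {
            return 0;
        }
        ...
    }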
> +
>      if (!sender->peer) {
>          return 1;
>      }