
Re: [Qemu-devel] [PATCH] mark nic as trusted


From: Dor Laor
Subject: Re: [Qemu-devel] [PATCH] mark nic as trusted
Date: Mon, 12 Jan 2009 14:26:03 +0200
User-agent: Thunderbird 2.0.0.18 (X11/20081119)

Gleb Natapov wrote:
On Mon, Jan 12, 2009 at 02:20:33AM +0000, Jamie Lokier wrote:
  
Dor Laor wrote:
    
The installer of the guest agent is responsible for punching a hole in the
firewall.
      
That's asking a lot from a generic installer.  Guests differ
enormously in how you do that - including different Linux guests.

    
Using the network for vmchannel has its downsides. This is one of them.
Every networking daemon needs some kind of configuration, and this is
well understood by admins. BTW vmchannel will use only outgoing
connections, and they are usually allowed by firewalls.

  
Something else you have to do is disable forwarding between the
vmchannel NIC and other NICs - even if the other NICs are forwarding
enabled to each other.  How do you do that on Linux?
/proc/sys/net/ipv4/ip_forward is global, not per NIC...  How do you do
it on other guests?
    
No need to do that. Slirp will drop any packet forwarded to it while
running in restricted mode.

  
It's easy to imagine a few simple guest agents written in C that
compile easily on any guest unix you might want to run on... except
this vmchannel setup would be the only non-portable part, and highly
non-portable at that.

    
Actually the only nonportable part I see is finding the vmchannel network
device. After the vmchannel device is determined, getting an IP from a
network device is portable between Unixes.


  
          - Link local addresses for ipv4 are problematic when used on
          other nics in parallel
     Likewise, the guest could check the address situation beforehand.
It does check (meaning we need to fully implement the link local RFC).
The problem is that even if we check that no one is using this guest's
link local address, another nic can use link local addresses. So a
remote host on the LAN of the other nic might choose the same address
we are using.
      
No, that's not enough.  Even when you have globally unique link-local
addresses, you have the problem that NICs configured for link-local IP
always have the same subnet, so routing doesn't work.

    
Most Unixes have SO_BINDTODEVICE to solve this problem. Windows and
others will probably have to add a host route. But I prefer to use a
private subnet outside of the link local range. One less RFC to worry
about.

  
You could work around this by using a non-standard link-local IP on the
vmchannel NIC.  Now you're playing more games...

    
          - We should either 1. not use link local on other links,
          2. use standard dhcp addresses, or 3. not use tcp/ip for
          vmchannel communication.

          So an additional nic can do the job and we have several
          flavours to choose from.
     The solution should be generic enough so that any nic can be
     connected to vmchannel.
      
It sounds "generic" in the sense that you need a custom configuration
which depends on the rest of the guest's configuration.  Not really
"drop in guest vmchannel app and it just works", is it?

    
We all wanted to use something else for vmchannel, but unfortunately we
were pushed to a networking solution. I still have the PF_VMCHANNEL
socket family code, so if you can convince David Miller that the network
is not a good fit for vmchannel, go for it :) Certain restrictions apply
when you talk to him, though: you can't mention virtualization as
justification for vmchannel!

  
If the guest vmchannel app installer looks at other NICs, and picks an
IP subnet that the others aren't using, or uses link-local when that's
not used on the others...  That will work most of the time.  But
sometimes it will break a working guest some hours after it's
installed.  What happens if the guest's LAN NIC is using DHCP, so the
vmchannel app picks link-local - and then the guest's LAN NIC changes
to link-local itself after some hours running?  That's not uncommon
behaviour nowadays on some networks.
    
In my opinion we shouldn't be too smart about choosing a subnet and
should leave that to the admin (using some reasonable default, of
course).

  
Handling all the cases _reliably_, adapting reactively to network
config _changes_ on the other NICs while running, and doing so across
many guest types (even just Linux distros and Windows) without having
to have custom code for each guest type, is harder than it looks.

On the other hand, using packet sockets and not IP over the vmchannel
NIC... (just pick another ethernet type) that would work reliably, but
without the convenience of TCP/IP.  It would need more support in the
guest vmchannel app, and guest root access, but both sound plausible
to implement.

    
Right. We have 3 options with their pros and cons:
1. Use link local addresses for the vmchannel link ONLY.
    Do not allow other nics to use them. The upside is there is no
    new subnet to manage.
    btw: anybody know how a physical host with multiple nics using link
    local behaves?
2. Use a standard ip range using slirp dhcp for the vmchannel.
    There are no link local addresses for this nic. The downside
    is that the admin needs to provide/manage another subnet.
    Also slirp has to be changed in order to allow dynamic replacement
    of the 'host' IP (although shutdown+boot works around it).
3. Use a packet socket.
    The upside - no IP addressing.
    The downside - no IP addressing (kidding, mainly tcp reliability).
    The guest agent/host need to get synchronous acks for every message.

  
Firewalls can still filter your packets though.

--
			Gleb.


  

