[#vmnictyp]:

Types of (virtual) NIC “hardware” used by virtual machines

Overview of common NIC options

The functionality of implementing a NIC tends to be built into the virtual machine software, so the exact options available may vary based on which virtual machine software is being used. However, many of the implementations found in different virtual machine software packages end up offering much of the same functionality as one another.

This section provides an overview of some of the different approaches for offering network functionality. (The precise naming/descriptions used for any of this functionality may vary, based on what virtual machine software is being used.) This overview has been provided so that details about one specific implementation (from one piece of virtual machine software) won't need to repeat details that are very similar to the options provided by another piece of virtual machine software. Instead, references to these generic methods are provided, which cuts down on the amount of reading required to cover the options rather fully.

Recommendations

There are often multiple options. Of the options, having the host machine treat the virtual machine like a standard networking application is typically the most limited, but is often the easiest to set up. In theory, the only difficult part of that would be setting up automatic addressing used to provide addresses to the networking software on the virtual machine. (That networking stack is generally just built into modern operating systems.) However, even that is often relatively painless for IPv4, since such automatic addressing is often handled by the virtual machine software (using an internal DHCP server). For doing some basic testing, or other early steps (such as installing an operating system over a network), this sort of network configuration can be a sensible way to get things done quickly.
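
For a quick test of this sort of connectivity, a short Python sketch such as the following can be run inside the virtual machine. It simply makes an ordinary outbound TCP connection (the destination name and port here are just placeholder examples) and reports the local address that the guest's networking stack chose, which will typically be whatever the automatic addressing (such as the internal DHCP server) handed out.

    import socket

    def check_outbound(host="example.com", port=80, timeout=5):
        # Make an ordinary outbound TCP connection: the one kind of traffic
        # that this style of networking is most clearly designed to handle.
        with socket.create_connection((host, port), timeout=timeout) as s:
            # getsockname() reports the local (e.g. DHCP-assigned) address/port.
            return s.getsockname()

    if __name__ == "__main__":
        print("connected; local address is", check_outbound())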

However, eventually it may be best to take the time to fulfill all the requirements of having the host machine process the traffic (without bridging).

Other options may commonly offer no significant advantages over working implementations of the approaches just mentioned. They are covered here for comparison and for the sake of completeness. (Studying these differences may not really be needed if the sole goal is just to complete enough of a guide to get something working. In that sort of case, getting by suitably might not require anything more than reviewing the sections on having the host machine treat the virtual machine like a standard networking application and having the host machine process the traffic (without bridging).)

[#vnetaplc]: Have the host machine treat the virtual machine like a standard networking application

In this mode, the host machine simply allows the virtual machine to make outgoing TCP and UDP connections the same way that most software can make outgoing TCP and UDP connections. This method may often seem to be the simplest to set up. (With Qemu, this may be the default.) However, it is recommended to become familiar with the limitations before deciding on this as the preferred implementation.

The virtual machine software reviews traffic on the virtual NIC that is part of the virtual machine. If the traffic is a recognized type, possibly an Ethernet frame with a source address that matches the MAC of the virtual NIC, then the virtual machine software may try to reproduce such traffic on the host machine. The virtual machine software, just like other network-capable software on the host server, may be able to use the operating system's network stack to effectively communicate with network interfaces (including physical hardware). Therefore, the virtual machine software can send network traffic. Since this can be handled identically to how the operating system handles network traffic from any other type of software, very little (and possibly nothing at all) is needed regarding special handling or configuration of the virtual machine software.
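
Conceptually, this is similar to the following Python sketch. It is only an illustration (not a description of how any particular virtual machine software is written): guest_sock here merely stands in for the guest's end of a connection whose destination has already been worked out by parsing the guest's traffic. The host side then behaves like any ordinary program: it opens a normal socket and copies bytes in both directions.

    import socket
    import selectors

    def relay(guest_sock, dest_host, dest_port):
        # Open an ordinary host-side socket, just as any other
        # network-capable program on the host would.
        host_sock = socket.create_connection((dest_host, dest_port))
        sel = selectors.DefaultSelector()
        sel.register(guest_sock, selectors.EVENT_READ, host_sock)
        sel.register(host_sock, selectors.EVENT_READ, guest_sock)
        try:
            while True:
                for key, _ in sel.select():
                    data = key.fileobj.recv(4096)
                    if not data:            # one side closed: stop relaying
                        return
                    key.data.sendall(data)  # copy the bytes to the other side
        finally:
            host_sock.close()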

Limitations

This method, though perhaps easiest to set up, may face some limitations that aren't experienced with (at least some of the) other networking methods.

One may be that the virtual machine software may only recognize certain types of traffic. For example, Qemu may only support Ethertype 0x0800 (IPv4), and not support other Ethertypes listed on IANA's page for Ethernet Numbers. This may rule out some types of traffic such as 0x86DD (IPv6), 0x0806 (ARP), 0x8035 (RARP), 0x809b (AppleTalk (Ethertalk)), or IPX (0x8137 and/or 0x8138, and according to a guide by Cisco for blocking IPX, 0x00ff and 0x00e0). Furthermore, even if the Ethertype is 0x0800, only some IPv4 may be recognized, such as only supporting Protocol 6 (TCP)/IPv4 packets and Protocol 17 (UDP)/IPv4 packets. (These octet-long protocol numbers are documented at IANA's list of assigned protocol numbers. These protocol numbers involve the 73rd-80th bits of IPv4 packets and the 49th-56th bits of IPv6 packets.) The virtual machine software may not process other types of traffic, so other sorts of IPv4 packets simply get ignored. This may cause certain programs to be unable to effectively communicate over the network, such as any program that relies on using Protocol 1 (ICMP)/IPv4 packets, Protocol 41 (IPv6 (Encapsulated))/IPv4 packets, or Protocol 47 (GRE)/IPv4 packets. (Note: This example is not meant to suggest the cause of this limitation, which has been known to exist in Qemu. The cause might not be the Ethertype field in the frame, although the end result may be effectively the same.)
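
To make those field positions concrete, the following Python sketch shows where the Ethertype and the protocol/next-header octets sit in a raw Ethernet frame. (The classification logic is purely illustrative; it is not taken from Qemu or any other virtual machine software.)

    import struct

    ETHERTYPE_IPV4 = 0x0800
    ETHERTYPE_IPV6 = 0x86DD

    def classify_frame(frame: bytes):
        # The Ethertype occupies bytes 12-13 of an Ethernet frame,
        # right after the 6-byte destination and 6-byte source MAC addresses.
        ethertype = struct.unpack("!H", frame[12:14])[0]
        if ethertype == ETHERTYPE_IPV4:
            # The Protocol field is byte 9 of the IPv4 header (the 73rd-80th
            # bits of the packet): 6=TCP, 17=UDP, 1=ICMP, 47=GRE, ...
            return ("IPv4", frame[14 + 9])
        if ethertype == ETHERTYPE_IPV6:
            # The Next Header field is byte 6 of the IPv6 header
            # (the 49th-56th bits of the packet).
            return ("IPv6", frame[14 + 6])
        return ("other Ethertype", hex(ethertype))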

Another possible source of limitations may be the operating system. Operating system limitations may limit/prevent outgoing connections on certain specified TCP or UDP ports, and may limit outgoing connections with ICMP. These limitations may only apply to some software, such as software run by unprivileged users. (Speculation: The main reason for such a limitation may be that the networking stack does not support arbitrary unprivileged applications sending out ICMP messages? Some details about some support for ICMP being added are at: LWN.Net Article 420799: ICMP sockets, Linux Kernel 3.0 updates: Unprivileged ICMP_ECHO messages, and for Qemu specifically: Qemu Development list about unprivileged ICMP.)
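
On Linux specifically, whether an unprivileged program is allowed to create an ICMP socket can be checked with a short test like the following. (This assumes a Linux host; success depends on the kernel version and on the net.ipv4.ping_group_range sysctl, and is not something the virtual machine software controls.)

    import socket

    # On Linux, SOCK_DGRAM with IPPROTO_ICMP requests an "unprivileged ping"
    # socket; whether this succeeds depends on the kernel version and on the
    # net.ipv4.ping_group_range sysctl. On other platforms it may simply fail.
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)
    except PermissionError:
        print("unprivileged ICMP sockets are not permitted for this user")
    else:
        print("unprivileged ICMP sockets are available")
        s.close()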

Without ICMP, the host system may not be able to send IPv4 ping packets, and implementations of traceroute that rely on ICMP also won't work. (Implementations of traceroute that use UDP ports 33434 through 33534 may still work.) Therefore, attempts to test the network using ICMP may fail, even if TCP and UDP work just fine. (If TCP and UDP work just fine, then web browsing, DNS, and many other protocols may work just fine. However, programs like ping may not be able to use this type of network connectivity.)

Incoming packets generally won't reach the virtual machine, unless the host machine recognizes that the packet is meant for the program. There are two basic ways this can happen. One way that the host may know that the packet is meant for the virtual machine is if the incoming packet is recognized as part of a connection that was already initiated with the virtual machine. This may be quite likely with TCP. Some logic may be able to mimic such connection recognition when using the connectionless UDP protocol, and such logic is generally used so that DNS can work. Note that this only works with communications that are in response to outgoing packets that were made earlier by the virtual machine. So, this doesn't really work for new incoming connections.
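
The bookkeeping that makes this possible can be pictured as a small table of recently seen outgoing flows, as in the following illustrative Python sketch. (This is not code from any particular virtual machine software, and the 30-second lifetime is an arbitrary example.)

    import time

    # Remember recently seen outgoing UDP flows, so that only replies which
    # match an earlier outgoing packet get passed back to the virtual machine.
    flows = {}   # (guest_port, remote_ip, remote_port) -> expiry timestamp

    def note_outgoing(guest_port, remote_ip, remote_port, lifetime=30.0):
        flows[(guest_port, remote_ip, remote_port)] = time.monotonic() + lifetime

    def allow_incoming(guest_port, remote_ip, remote_port):
        expiry = flows.get((guest_port, remote_ip, remote_port))
        return expiry is not None and expiry > time.monotonic()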

The other way that the host machine may recognize that the packet is meant for the virtual machine is if the virtual machine software has, like any other program, told the host machine that it is listening for such traffic. This may often be allowed for incoming TCP or UDP ports. However, if the virtual machine is being run as an unprivileged user, then some operating systems may not allow such a program (specifically referring to a program run as that unprivileged user) to be listening on (TCP or UDP) ports below 1024. The virtual machine software will typically not request that the host machine's operating system send traffic to the virtual machine, unless the virtual machine software has been specifically configured to perform that task. Even in cases where this is possible, the necessary configuration of the virtual machine software, to be listening to various possible types of incoming connections, may be undesired complexity: it may be preferable to spend the time to set up another method of networking, such as using a network interface device visible to the host machine, which may allow more types of incoming connections.
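
The restriction on low-numbered ports is easy to observe directly. The following Python sketch (the port numbers are just examples) attempts to listen on a TCP port; run as an unprivileged user on many operating systems, port 80 will typically fail while port 8080 will typically succeed (assuming that port is not already in use).

    import socket

    def can_listen(port):
        # Try to listen on a TCP port; many operating systems refuse
        # ports below 1024 to unprivileged users.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(("0.0.0.0", port))
            s.listen(1)
            return True
        except OSError:      # includes PermissionError for low ports
            return False
        finally:
            s.close()

    print("port 80:", can_listen(80))      # often False for an unprivileged user
    print("port 8080:", can_listen(8080))  # usually True, if the port is free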

In addition to all those limitations mentioned so far, there may be some other reason(s) why this method of networking is not preferable. For instance, there may be some higher overhead, which may result in heavier usage of the host machine's resources, and/or lower network speeds.

Non-unique IP address

In this sort of setup, the virtual machine may not need its own IP address that is any different from the host machine's. (There is nothing special about virtual machine software here: this would be just like any other server software that runs on the host machine and does not require a unique IP address.)

Software that runs in the virtual machine may try to respond to all traffic on an IP address. However, even if that software in the virtual machine will respond to any traffic that the virtual machine receives, the only traffic that reaches the virtual machine is the traffic that the host machine forwards.

This may be referred to as having the virtual machine “sharing” the IP address of a host machine. If the host machine's configuration refers to “sharing” a NIC, that term may simply mean sharing one or more IP address(es). (Note, however, that the term “sharing” may also be used in some other contexts, and there may still be some important differences. Sharing an IP address like this is different from dedicating a NIC on the host machine, which might sometimes be called “sharing” because the virtual machine can use it and the host machine uses it. However, in the case of the “Take-over” method of dedicating a NIC on the host machine, the practice doesn't really involve much actual “sharing”, as the host machine doesn't really use the network adapter for any purpose other than to provide it for the virtual machine.)

Really sharing an IP address may have its downsides: If the virtual machine and the host machine both believe they have the same IP address, that may hinder some communication between those machines. For example, if a web browser is running on the host machine, that web browser may not be able to communicate with a web server running on the virtual machine, because the host machine will consider the IP address to be local, and so will not send the traffic to the virtual machine. However, if the host machine is listening to TCP port 80 traffic, and passes/forwards/routes any such received traffic to the virtual machine, it is possible that such traffic may end up working (at least in some cases).

[#vnethsnc]: Using a network interface device visible to the host machine
Implementation details: Details about what NIC is being used
Using a virtual NIC on the host machine
[#vusrmdnc]: Using a standard-looking virtual NIC on the host machine to communicate with the virtual machine's NIC

This may involve having the virtual machine's NICs (which will be non-physical, since they are part of a virtual machine) communicate with a virtual NIC on the host machine. The virtual NIC on the host machine may use TUN/TAP technology.

[#mktuntap]: Having/Creating a TUN/TAP device

(This has been, and may still be, the best available option on some platforms. If this functions well, this solution should work out very nicely. It may just take some time, perhaps only a few minutes, to set up.)

Virtual machine software may use an available TAP-compatible tunneling device.

Some of this specific information may have come from Qemu User Documentation: section on using TAP, which has sections about using TAP with operating systems that are officially supported by Qemu.

Note that there may be a requirement (by the operating system? Or by the virtual machine software?) to be a privileged user in order to use this method of networking. This may mean that the entire virtual machine software program will need to run as a superuser (“root”). For example, when using the Qemu software, at the time of this writing, Qemu Wiki: Networking Documentation says that, “Generally speaking,” using this method of networking “requires that you invoke QEMU as root.”

If this method of networking is going to be used, make sure that a TUN/TAP device is available. (If it isn't clear that such a device is available yet, then see the section on having/creating a TUN/TAP device.)
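
As one concrete illustration of what “a TUN/TAP device is available” means, the following Linux-specific Python sketch opens /dev/net/tun and requests a TAP interface. (The interface name “tap0” is just an example, the ioctl constants are the usual Linux values, and the operation normally requires privileges or a pre-created, user-owned persistent device; other operating systems handle this differently.)

    import fcntl
    import os
    import struct

    # Linux-specific constants (from <linux/if_tun.h>).
    TUNSETIFF = 0x400454ca
    IFF_TAP = 0x0002     # request a TAP (Ethernet-level) device
    IFF_NO_PI = 0x1000   # do not prepend a packet-information header

    def open_tap(name="tap0"):
        # Opening /dev/net/tun and issuing TUNSETIFF normally requires
        # privileges, or a persistent TAP device owned by the current user.
        fd = os.open("/dev/net/tun", os.O_RDWR)
        ifreq = struct.pack("16sH", name.encode(), IFF_TAP | IFF_NO_PI)
        fcntl.ioctl(fd, TUNSETIFF, ifreq)
        return fd   # reading/writing this descriptor exchanges Ethernet frames

    if __name__ == "__main__":
        fd = open_tap()
        print("TAP device is available (file descriptor", fd, ")")
        os.close(fd)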

Other options?

At the time of this writing, the author of this text has not heavily reviewed the immediately following material. However, it is provided for reference anyway, in case somebody finds it useful.

Virtual Distributed Ethernet (VDE)

There might be a supported alternative to TAP networking?

In practice, this solution might be rather focused on using the virtual machine software called Qemu or its derivative, kvm. Also, this might only be readily available in Linux-based environments? (Not in similar environments, like BSD platforms?)

Ubuntu.com guide to using Windows XP under Qemu has notes about using VDE. Implementing this successfully may involve using Dnsmasq, which is some additional software. This guide on Ubuntu.com does refer to this solution as “an alternative to TAP networking.”

(The guide at Ubuntu.com goes on to state that most of the information comes from a KVM (Advanced Networking section?), and Dan Walrond's guide: “QEMU - Debian - Linux - TUN/TAP - network bridge”. Some other documentation that might be related may be VirtualSquare's wiki on Virtual Distributed Ethernet.)

It appears that using VDE with Qemu once involved using an external program called vdeqemu (or vdekvm). However, VirtualSquare's wiki about vdeqemu and vdekvm notes that “These tools are obsolete” because now “qemu and kvm have already builtin vde support.” Later, the page notes, “The usage of vdekvm is the same of vdeqemu: they are both simple links to vdeq.”

[#nicvrtsw]: Using a virtual NIC that is created by the virtual machine software

The virtual machine software may make a virtual NIC. The good news is that this may be a very, very simple way of creating the virtual NIC. (It might even occur automatically. Again, such details may be implementation-dependent.) Additionally, the virtual machine software may implement one or more methods of handling traffic on the (virtual) NIC on the host machine. This virtual NIC might, perhaps, only exist when the virtual machine software is running. (The author of this text suspects, although this may need further confirmation, that Hyper-V may often have a service running. That might be sufficient to keep these virtual NICs existing?)

This sort of NIC may look just like a standard network card: the operating system and other software may be able to work with this virtual network card much like a real network card. For example, with Hyper-V the network cards will show up where network cards are listed, such as when using netstat -nr (in Microsoft Windows) or IPConfig, or by selecting (the hyperlink called) “Manage Network Adapters” which can be found in the “Network and Sharing Center”.
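
One quick, largely cross-platform way to confirm that such a virtual NIC shows up alongside ordinary NICs is to list the interfaces that the host operating system reports, for example with the following Python snippet. (socket.if_nameindex() requires Python 3.8 or newer on Microsoft Windows.)

    import socket

    # List the network interfaces that the host operating system reports;
    # a virtual NIC of this kind should appear alongside the physical ones.
    for index, name in socket.if_nameindex():
        print(index, name)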

The following paragraphs may be speculation: confirming this may require spending some time with software that implements this as an actual option; perhaps such software might include Virtual PC for Windows, and/or Hyper-V for Microsoft Windows, and/or Parallels for Mac?

This sort of NIC might not look like a standard NIC that is generally visible to the host operating system. (It might be implemented by interacting with network traffic. Perhaps this is implemented by modifying the behavior of the “network stack”.) By not looking like a standard NIC, this virtual NIC will not appear to software on the host machine as a standard NIC. That may have a benefit, such as decreasing the likelihood of software on the physical machine being misconfigured to try to use a NIC that won't really work. However, this approach might prevent some standard software, such as traffic-handling software that implements “bridging”, from being able to work with the virtual NIC. This often-unnecessary lack of modularity could impose more limits on flexibility. For instance, the virtual machine software might provide bridging functionality, but have limits such as only supporting bridging to a physical hardware NIC that is on the same IP subnet. This means the software on the virtual machine would need to use the same subnet as the subnet used on the hardware NIC on the host machine.

It is possible that some networking software (perhaps tunneling software) that is designed to work with a NIC may be unable to work with this type of virtual NIC on the host machine. (However, it seems likely that most network traffic will, somehow, be able to be sent over this virtual NIC, so that the virtual machine can receive the traffic.)

[#niconlvm]: Dedicating a NIC on the host machine
Take-over

If this is an option that is implemented, the details are likely similar to using a virtual NIC on the host. The NIC on the host machine may become dedicated to the virtual machine, so the virtual machine can use that hardware NIC just like any other NIC that is part of the virtual machine. When a hardware NIC is taken over in this fashion, such a network interface card may not be available for any other software on the host operating system (at least while the virtual machine is actively taking over the NIC).

As an example of this: TechNet guide to Hyper-V (in Windows Server 2008): Configuring Virtual Networks states, “Hyper-V then binds the standard services and protocols to the virtual network adapter instead of the physical network adapter, and binds only the Virtual Network Service Protocol to the physical network adapter.” So, be aware of this!

Connecting the hardware NIC to a virtual NIC

This may be speculation: confirming this may require spending some time with software that implements this as an actual option; perhaps such software might include Virtual PC for Windows, and/or Hyper-V for Microsoft Windows, and/or Parallels for Mac?

The virtual machine software may support bridging with a hardware NIC. In practice, this is likely to be very similar, or even identical, in concept (and perhaps in implementation as well) to the following approach: using a NIC provided by the virtual machine software, and then supporting bridging a virtual machine to a NIC on the host machine. The virtual machine software may implement these approaches, connecting things together so seamlessly that any complexity from multiple pieces is invisible to the end user.

In addition to various methods of implementing virtual hardware, the virtual machine software may often provide some similar approaches to how the traffic (involving the NIC on the host machine) gets processed.