SR-IOV, or “Single Root I/O Virtualization”, is a very interesting feature that can provide virtual machines shared access to physical network cards installed in the hypervisor. This may sound a lot like what a virtual NIC and a vSwitch do, but the feature works very similarly to PCI passthrough, granting a VM direct access to the NIC hardware. To understand SR-IOV, it helps to first understand how PCI passthrough works. Here is a quote from a post I did a few years ago:
“PCI Passthrough – or VMDirectPath I/O as VMware calls it – is not at all a new feature. It was originally introduced back in vSphere 4.0 after Intel and AMD introduced the necessary IOMMU processor extensions to make this possible. For passthrough to work, you’ll need an Intel processor supporting VT-d or an AMD processor supporting AMD-Vi as well as a motherboard that can support this feature.
In a nutshell, PCI passthrough allows you to give a virtual machine direct access to a PCI device on the host. And when I say direct, I mean direct – the guest OS communicates with the PCI device via IOMMU and the hypervisor completely ignores the card.”
SR-IOV takes PCI passthrough to the next level. Rather than granting exclusive use of the device to a single virtual machine, the device is shared or ‘partitioned’. It can be shared between multiple virtual machines, or even shared between virtual machines and the hypervisor itself. For example, a single 10Gbps NIC could be ‘passed through’ to a couple of virtual machines for direct access, and at the same time it could be attached to a vSwitch being used by other VMs with virtual NICs and vmkernel ports too. Think shared PCI passthrough.
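On a Linux host, the PF/VF split described above is visible directly in sysfs, which makes for a concrete illustration of the partitioning. Below is a minimal sketch, assuming a Linux machine with an SR-IOV-capable NIC; the sysfs paths (`sriov_numvfs`, `virtfn<N>` symlinks) are the standard kernel ABI, while the `list_vfs` helper itself is just my own illustration (ESXi manages VFs differently, through driver module parameters):

```python
import os

def list_vfs(pf_device_path):
    """Enumerate SR-IOV virtual functions (VFs) for a physical function (PF).

    pf_device_path is the PF's sysfs device directory, e.g.
    /sys/class/net/eth0/device on a Linux host. Returns the number of
    VFs currently enabled and the PCI address of each VF.
    """
    with open(os.path.join(pf_device_path, "sriov_numvfs")) as f:
        numvfs = int(f.read().strip())
    vfs = []
    for i in range(numvfs):
        # Each enabled VF appears as a 'virtfn<i>' symlink pointing at
        # the VF's own PCI device directory (e.g. ../0000:01:10.0).
        link = os.path.join(pf_device_path, "virtfn%d" % i)
        vfs.append(os.path.basename(os.readlink(link)))
    return numvfs, vfs
```

Enabling the VFs in the first place is typically a one-line write to that same `sriov_numvfs` file; each VF then shows up as its own PCI function that can be passed through to a VM while the PF remains usable by the host.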
Continue reading “An In-depth Look at SR-IOV NIC Passthrough”
New bundled VMXNET3 driver corrects PSOD crash issue.
As mentioned in a recent post, a problem in the tools 10.3.0 bundled VMXNET3 driver could cause host PSODs and connectivity issues. As of September 12th, VMware Tools 10.3.2 is now available, which corrects this issue.
The problem lies in the VMXNET3 driver version bundled with tools 10.3.0. According to the release notes, it has been replaced with an updated driver version in 10.3.2. In addition to this fix, there are four other resolved issues listed as well.
VMware mentions the following in the 10.3.2 release notes:
“Note: VMware Tools 10.3.0 is deprecated due to a VMXNET3 driver related issue. For more information, see KB 57796. Install VMware Tools 10.3.2, or VMware Tools 10.2.5 or an earlier version of VMware Tools.”
Kudos to the VMware engineering teams for getting 10.3.2 released so quickly after the discovery of the problem!
Downgrading to Tools 10.2.5 is an effective workaround.
If you have installed the new VMware Tools 10.3.0 release in VMs running recent versions of Windows, you may be susceptible to host PSODs and other general connectivity problems. VMware has just published KB 57796 regarding this problem, and has recalled 10.3.0 so that it’s no longer available for download.
Tools 10.3.0 includes a new version of the VMXNET3 vNIC driver for Windows, which seems to be the primary culprit. Thankfully, not every environment with Tools 10.3.0 will run into this. It appears that the following conditions must be met:
- You are running a build of ESXi 6.5.
- You have Windows 2012, Windows 8 or later VMs with VMXNET3 adapters.
- The VM hardware is version 13 (the version released along with vSphere 6.5).
- Tools 10.3.0 with its bundled VMXNET3 driver is installed in the Windows guests.
VMware is planning to have this issue fixed in the next release of Tools 10.3.x.
If you fall into the above category and are at risk, it would be a good idea to address this even if you haven’t run into any problems. Since this issue is specific to the VMXNET3 driver bundled with Tools 10.3.0, downgrading to Tools 10.2.5 is an effective workaround. Simply uninstall tools and re-install version 10.2.5, which is available here.
Another option would be to replace VMXNET3 adapters with E1000E adapters in susceptible VMs. Since either action causes VM impact, and the VMXNET3 adapter is far superior, I would personally rather downgrade to Tools 10.2.5.
Again, you’d only need to do this for VMs that fall into the specific categories listed above. Other VMs can be left as-is running 10.3.0 without concern.
On a positive note, Tools 10.3.0 hasn’t been bundled with any builds of ESXi 6.5, so unless you’ve gone out and obtained tools directly from the VMware download page recently, you shouldn’t have it in your environment.
10Gbps from a 10Mbps NIC? Why not? Debunking the VM link speed myth once and for all!
** Edit on 11/6/2017: I hadn’t noticed before I wrote this post, but Raphael Schitz (@hypervisor_fr) beat me to the debunking! Please check out his great post on the subject as well here. **
I have been working with vSphere and VI for a long time now, and have spent the last six and a half years at VMware in the support organization. As you can imagine, I’ve encountered a great number of misconceptions from our customers, but one that continually comes up is around VM virtual NIC link speed.
Every so often, I’ll hear statements like “I need 10Gbps networking from this VM, so I have no choice but to use the VMXNET3 adapter”, “I reduced the NIC link speed to throttle network traffic” and even “No wonder my VM is acting up, it’s got a 10Mbps vNIC!”
I think that VMware did a pretty good job of documenting the role that the various vNIC types and link speeds played back in the VI 3.x and vSphere 4.0 era – back when virtualization was still a new concept to many. Today, I don’t think it’s discussed very much. People generally use the VMXNET3 adapter, see that it connects at 10Gbps and never look back. Not that the simplicity is a bad thing, but I think it’s valuable to understand how virtual networking functions in the background.
Today, I hope to debunk the VM link speed myth once and for all. Not with quoted statements from documentation, but through actual performance testing.
Continue reading “Debunking the VM Link Speed Myth!”
Over the years, I’ve been on quite a few network performance cases and have seen many reasons for performance trouble. One that is often overlooked is the impact of CPU contention and a VM’s inability to schedule CPU time effectively.
Today, I’ll be taking a quick look at the actual impact CPU scheduling can have on network throughput.
To demonstrate, I’ll be using my dual-socket management host. As I did in my recent VMXNET3 ring buffer exhaustion post, I’ll be testing with VMs on the same host and port group to eliminate bottlenecks created by physical networking components. The VMs should be able to communicate as quickly as their compute resources will allow them.
Physical host:
- 2x Intel Xeon E5 2670 Processors (16 cores at 2.6GHz, 3.3GHz Turbo)
- 96GB PC3-12800R Memory
- ESXi 6.0 U3 Build 5224934
Test VMs:
- 1x vCPU
- 1024MB RAM
- VMXNET3 Adapter (1.1.29 driver with default ring sizes)
- Debian Linux 7.4 x86 PAE
- iperf 2.0.5
The VMs I used for this test are quite small with only a single vCPU and 1GB of RAM. This was done intentionally so that CPU contention could be more easily simulated. Much higher throughput would be possible with multiple vCPUs and additional RX queues.
The CPUs in my physical host are Xeon E5 2670 processors clocked at 2.6GHz per core. Because this processor supports Intel Turbo Boost, the maximum frequency of each core will vary depending on several factors and can be as high as 3.3GHz at times. To take this into consideration, I will test with a CPU limit of 2600MHz, as well as with no limit at all to show the benefit this provides.
To measure throughput, I’ll be using a pair of Debian Linux VMs running iperf 2.0.5. One will be the sending side and the other the receiving side. I’ll be running four simultaneous threads to maximize throughput and load.
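You don’t need iperf to see the basic sender/receiver pattern at work. The sketch below is a crude localhost stand-in using nothing but Python’s standard library – the `run_pair` helper and its parameters are my own illustration, not part of any VMware or iperf tooling, and the numbers it reports are nowhere near iperf’s accuracy:

```python
import socket
import threading
import time

def run_pair(duration=1.0, bufsize=1 << 16):
    """Crude localhost throughput probe: one sender thread blasts bytes
    at one receiver thread over TCP, then we report megabits per second."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))     # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    received = 0

    def receiver():
        nonlocal received
        conn, _ = srv.accept()
        while True:
            chunk = conn.recv(bufsize)
            if not chunk:           # sender closed the connection
                break
            received += len(chunk)
        conn.close()

    t = threading.Thread(target=receiver)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    payload = b"x" * bufsize
    deadline = time.time() + duration
    while time.time() < deadline:
        cli.sendall(payload)
    cli.close()
    t.join()
    srv.close()
    return received * 8 / duration / 1e6   # Mbps
```

Even over loopback, the rate this reports is bounded by how much CPU time the two threads can get – which is exactly the effect the iperf runs below are designed to expose.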
I should note that my testing is far from precise and is not being done with the usual controls and safeguards to ensure accurate results. This said, my aim isn’t to be accurate, but rather to illustrate some higher-level patterns and trends.
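The expectation behind these tests can be written down as a toy model: if the receiving VM’s vCPU is only scheduled for a fraction of each interval, it can only drain a proportional share of the offered load. The `achievable_throughput` helper and all of the rates below are invented for illustration, not measured values:

```python
def achievable_throughput(offered_gbps, drain_rate_gbps, cpu_share):
    """Toy model: a vCPU scheduled only `cpu_share` of the time can
    drain at most that fraction of its full-speed processing rate.

    offered_gbps    - traffic the sender is pushing
    drain_rate_gbps - what the guest could process with 100% CPU
    cpu_share       - fraction of wall-clock time the vCPU actually runs
    """
    return min(offered_gbps, drain_rate_gbps * cpu_share)

# Uncontended: the guest keeps up with the sender.
print(achievable_throughput(10.0, 12.0, 1.0))   # 10.0 (Gbps)
# Heavy contention: the vCPU runs only half the time,
# so throughput collapses to half the drain rate.
print(achievable_throughput(10.0, 12.0, 0.5))   # 6.0 (Gbps)
```

Reality is messier than `min()` – scheduling latency also causes bursts and drops rather than a smooth slowdown – but the proportional trend is what the results should show.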
Continue reading “VM Network Performance and CPU Scheduling”
ESXi is generally very efficient when it comes to basic network I/O processing. Guests are able to make good use of the physical networking resources of the hypervisor and it isn’t unreasonable to expect close to 10Gbps of throughput from a VM on modern hardware. Dealing with very network heavy guests, however, does sometimes require some tweaking.
I’ll quite often get questions from customers who observe TCP re-transmissions and other signs of packet loss when doing VM packet captures. The loss may not be significant enough to cause a real application problem, but may have some performance impact during peak times and during heavy load.
After doing some searching online, customers will quite often land on VMware KB 2039495 and KB 1010071, but there isn’t a lot of context and background to go with these directions. Today I hope to take an in-depth look at VMXNET3 RX buffer exhaustion and not only show how to increase buffers, but also to determine whether it’s even necessary.
Not unlike physical network cards and switches, virtual NICs must have buffers to temporarily store incoming network frames for processing. During periods of very heavy load, the guest may not have the cycles to handle all the incoming frames and the buffer is used to temporarily queue up these frames. If that buffer fills more quickly than it is emptied, the vNIC driver has no choice but to drop additional incoming frames. This is what is known as buffer or ring exhaustion.
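The mechanics of ring exhaustion can be illustrated with a toy simulation: frames arrive at a fixed rate, the guest drains a fixed number per tick, and anything arriving while the ring is full is dropped. The ring size and rates below are arbitrary values chosen for illustration, not real VMXNET3 defaults:

```python
from collections import deque

def simulate(ring_size, arrivals_per_tick, drain_per_tick, ticks):
    """Toy model of vNIC RX ring exhaustion.

    Each tick, `arrivals_per_tick` frames arrive and the guest drains up
    to `drain_per_tick` frames. Frames arriving on a full ring are dropped.
    Returns (delivered, dropped).
    """
    ring = deque()
    delivered = dropped = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(ring) < ring_size:
                ring.append("frame")
            else:
                dropped += 1          # ring full -> frame is lost
        for _ in range(min(drain_per_tick, len(ring))):
            ring.popleft()
            delivered += 1
    return delivered, dropped

# Guest keeps up with arrivals: the ring absorbs bursts, nothing is lost.
print(simulate(512, 100, 120, 100))   # (10000, 0)
# Guest falls behind: the ring fills up, then every excess frame is dropped.
print(simulate(512, 150, 120, 100))
```

Note that a larger ring only buys time to ride out bursts; if the guest is persistently slower than the arrival rate, drops are inevitable no matter the ring size – which is why the KBs above aren’t always the right fix.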
Continue reading “VMXNET3 RX Ring Buffer Exhaustion and Packet Loss”