Synology’s Active Backup for Business is a powerful, license-free backup tool included with many of their higher-end “plus” and rackmount NAS units. Here is my latest video taking a look at its integration with VMware vSphere and ESXi. I walk through installation, setup, backup and restore.
In short, it is an excellent tool that provides features you’d expect to see from enterprise “paid” backup solutions. For those with home labs or smaller environments, it makes the value proposition of buying a Synology NAS much more enticing!
The Synology Storage Console for VMware allows storage admins to perform numerous multi-step workflows without ever having to leave the vSphere Client.
I’ve been playing around with Synology’s DS1621+ in my home lab for the last couple of weeks and have been thoroughly impressed so far. In my last couple of posts, I got the storage pools created and networking and iSCSI configured. As I was going through the manual setup process, I came across Synology’s Storage Console for VMware plugin. This plugin allows some degree of control of the NAS unit directly from within the vSphere Client and automates some simple workflows. Here are some high-level features:
Datastore Creation – Creates a backing LUN, target and the VMFS or NFS datastore in a single workflow.
Datastore Resizing – Resizes the backing LUN and expands the VMFS datastore in a single workflow.
Log Bundles – Allows collection of support log bundles of the Synology NAS from the vSphere Client.
Status Dashboard – Provides NAS status and LUN status information.
Application-Consistent Snapshots – Enables application-consistent LUN snapshots.
You can find some more information on the storage console plugin here. As long as you have a supported Synology NAS running DSM 6.2.3 or later and vCenter Server 6.5 U2 or later, you can give it a try.
Installation is very straightforward. The plugin is installed by deploying a small CentOS-based OVA appliance into the environment. During deployment, you’ll be prompted for your vCenter credentials and connection information.
Once the VM powers on, it’ll automatically connect to vCenter Server and install the plugin for you. Opening the VM’s console will greet you with a simple GUI that allows you to change passwords and basic TCP/IP settings.
The default username and password for this appliance are root/synology, so be sure to change them.
After logging out and back into the vSphere Client, the Synology Storage Console plugin should be available via the main menu. You’ll first need to add your NAS unit(s) in the Storage System page. Once done, you’ll be able to use the features the plugin offers. Some simple stats are available from the dashboard section. You can see your storage pools, volumes and available free space. This is useful if you are over-allocating space via thin provisioning.
The Synology DS1621+ is an excellent performer with a 10Gbps NIC. Here’s how to configure networking and iSCSI targets and LUNs for use in a VMware vSphere environment.
In my previous post on the Synology DS1621+, I configured storage pools and volumes. Now that our storage is ready for use, I’ll be configuring iSCSI in my VMware vSphere lab environment.
A proper network setup is the foundation for a successful iSCSI deployment. I won’t go into too much detail about vSphere network configuration in this post, but here are some general recommendations when it comes to iSCSI:
Use a dedicated VLAN for iSCSI. Do not use it for any other purpose.
Use a dedicated subnet and ensure it is non-routable.
Use a dedicated VMkernel port for iSCSI in this subnet.
If possible, ensure you have redundant NICs configured on your hosts and storage box.
Use a 9000 MTU if possible (more on this later).
To begin, I’ll be configuring the interface settings on the DS1621+. There are many different ways this can be done, but I only have one spare 10Gbps port currently, so I’ll forgo a proper multipathing configuration and keep things simple. That said, I’ll still configure two interfaces – a single 1Gbps interface for management, and a single 10Gbps interface in a different VLAN for exclusive iSCSI use.
LAN 1 is a 1Gbps interface that I have in my management network (172.16.10.0/24). This interface will be used to access the Synology DSM interface and anything else that is non-iSCSI related. In my case, LAN 5 and LAN 6 are the 10Gbps ports on the DS1621+. The 172.16.99.0/24 network is my iSCSI network and is non-routable. Both interfaces are connected to “access” ports on my physical switch. Since there is no 802.1q VLAN tagging, VLANs are not specified on the DS1621+. I’ll show you how to restrict iSCSI to a specific interface when we configure iSCSI targets later on. Next, we’ll move on to the vSphere networking configuration.
Because I already have my TrueNAS box up and running in VLAN 99, my vSwitch and VMkernel ports are already configured correctly for iSCSI in this network. I won’t get too much into vSphere networking configuration today, but I’ll at least show you how I have things configured.
My distributed switch, called “dvs-lab”, has a dvPortgroup configured with a VLAN ID of 99. Unlike the DS1621+, each of the 10Gbps NICs here is connected to an 802.1q VLAN tagged port on my switch. I have the default “route based on originating virtual port ID” teaming configured on all the dvPortgroups.
I also have a dedicated VMkernel port configured for iSCSI in the 172.16.99.0/24 network. In the example above, host esx1 uses 172.16.99.21. Note that I have not configured a gateway for this VMkernel interface because I want this network to remain non-routable. The same is true for my physical layer-3 switch – there are no VIFs attached to the VLAN.
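For reference, here is roughly what that VMkernel configuration looks like from the ESXi CLI. This is just a minimal sketch – it assumes a standard vSwitch portgroup with the hypothetical name “iSCSI-99” (on a distributed switch like mine, the VMkernel adapter is usually created in the vSphere Client instead):

# Create a VMkernel interface on the iSCSI portgroup with a 9000 MTU.
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-99 --mtu=9000

# Assign a static IP. Note that no gateway is configured, which keeps the network non-routable.
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=172.16.99.21 --netmask=255.255.255.0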
Quick Note on Jumbo Frames
Although there are a lot of varying opinions on jumbo frames out there, I would encourage you to consider using a 9000 MTU for iSCSI. In a tightly controlled, non-routed VLAN like that used for iSCSI in a datacenter, most of the reasons to avoid large frames simply do not apply. Storage traffic tends to be very heavy, and the number of frames your ESXi host will need to process with a 1500 MTU will be very high. There is overhead associated with this high packet rate. For example, with 10Gbps networking, a 1GB/s sustained transfer rate is not unreasonable. To put that into perspective, your host would have to process almost 700,000 frames and headers every second with a 1500 MTU. With a 9000 MTU, that number would be a little over 100,000 – much easier to handle.
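The back-of-the-napkin math (ignoring per-frame header overhead) works out as follows:

1,000,000,000 B/s ÷ 1,500 B/frame ≈ 666,667 frames per second
1,000,000,000 B/s ÷ 9,000 B/frame ≈ 111,111 frames per second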
Configuring jumbo frames is outside of the scope of this post, but in my environment, I have jumbo frames configured in the following locations:
Distributed Switch – set to 9000 MTU. This configuration applies to the attached physical vmnics as well.
VMkernel port for iSCSI – set to 9000 MTU.
Physical switch – Jumbo frames enabled globally.
Synology DS1621+ – Jumbo frames enabled on the LAN 5 interface.
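A quick way to confirm that jumbo frames are working end-to-end is to send a large ping with the “don’t fragment” bit set from the ESXi shell. A small sketch – vmk2 and the target address are hypothetical stand-ins for your iSCSI VMkernel port and the Synology’s iSCSI interface:

# 8972 bytes of ICMP payload + 20 byte IP header + 8 byte ICMP header = 9000 bytes on the wire.
# The -d flag sets "don't fragment", so this only succeeds if jumbo frames work along the entire path.
vmkping -I vmk2 -d -s 8972 172.16.99.10

If a standard-sized vmkping works but this one fails, something in the path is still at a 1500 MTU.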
In my previous post, I got the hardware all set up on the DS1621+ test unit that Synology was gracious enough to loan me. Today, I’ll be doing some of the initial storage configuration in preparation for getting iSCSI configured for use in my vSphere lab.
But first, I’ll need to get some flash storage added to the NAS.
I’ve got three Crucial MX500 1TB drives that I’ll be removing from my TrueNAS box for testing in the DS1621+. Although they are getting a bit dated, they are still solid-performing SATA SSDs. Don’t forget to properly remove your LUNs/Datastores!
Installing the 2.5-inch drives is an easy job. Four small screws attach through the back of each sled and hold the drive in the correct position.
With all six drive bays populated, we’re ready to begin getting the storage prepared for use on the DS1621+.
Creating Storage Pools
The first thing that needs to be done before you can use the drives is to create one or more storage pools. This is where you group disks and apply a RAID level to them. When clicking the Create button, you’ll be greeted by the storage pool creation wizard.
You’ll first be prompted to use classic RAID levels or Synology Hybrid RAID (SHR). SHR makes the creation of a pool easier, but its best feature is the ability to make use of all disk space when disks of different sizes are in the pool. Traditional RAID levels can waste a lot of space in this type of situation. SHR is great for beginners, but we’ll stick with the classic RAID levels.
A new 6-bay AMD Ryzen powered NAS unit from Synology with lots of potential!
My very first commercial NAS box that I bought over 13 years ago was the dual-bay Synology DS207+. At the time, it was the cream of the crop. The hardware was great, but Synology’s very rich software suite was what really set it apart from many of its competitors at the time. The unit served me very well for years in my home network.
Once I got my first VMware home lab set up, I moved away from consumer-grade NAS units and toward more powerful custom-built servers running FreeNAS/TrueNAS. Although awesome for home use, the ARM-based SoC (system on a chip) processors in those older units simply couldn’t handle the I/O requirements for VMs on iSCSI or NFS datastores. Unless you were willing to shell out a lot of dough for an enterprise-grade NAS/SAN, you were stuck building your own. A lot has changed in this market over the last few years. NAS units have gotten much quicker, and a reasonably priced unit can now be a very feasible solution for a wide variety of applications – including virtualization. Today, Synology makes a number of multi-bay NAS units with powerful processor options. They have everything from high-performance ARM-based units to Xeon-D and even AMD Ryzen Embedded options, as in the DS1621+. Although they still command a premium price, you get way more for your dollar today than you did even just a few years back. When Synology asked if I would be interested in trying out one of their business-class “plus” NAS units, I jumped on the opportunity.
Synology was kind enough to send me a review sample including a DS1621+ NAS unit, three of their Synology-branded 8TB hard drives and their new E10G21-F2 10Gbps SFP+ NIC. Over the next few weeks, I hope to take a look at this latest generation of multi-bay NAS systems and see how feasible they are for a small to mid-sized business network. I’m also very interested in trying out some of Synology’s included software that caters to VMware vSphere. For now, I just wanted to share a quick unboxing and hardware setup post.
Power Consumption: 51W (Access), 25W (HDD Hibernation)
Warranty: 3 Years
The specifications for this NAS unit are quite impressive. The one feature that gets most people excited is the embedded AMD Ryzen processor. With AMD’s hugely successful Zen architecture, this is not surprising. AMD has managed some very impressive performance numbers – especially in their 3rd and 4th generation CPUs. Being an embedded part, the Ryzen Embedded V1500B processor is a little different than their desktop processors. From what I can see, it is based on AMD’s first-generation Zen architecture, so it won’t be quite as potent clock-for-clock as some of AMD’s recent Ryzen CPUs. Nonetheless, with four cores, eight threads and a 2.2GHz clock speed, this is a very capable CPU for a NAS. Best of all, being an embedded part, the total TDP for this processor is only 16W. Having a potent x86-64 CPU under the hood opens up the possibilities for a number of different use cases. Not only should iSCSI storage performance be up to the task, but you could even run virtual machines and many of the more demanding software packages on the NAS unit.
Another great feature is Synology’s inclusion of NVMe. 3.5-inch mechanical drives do still have their place for affordable raw storage capacity, but flash storage is really necessary for good performance. All six drive bays support 2.5-inch SATA SSDs, which is great, but there are also two NVMe slots intended for drive caching. Being able to use multiple storage tiers and caching really gives this NAS a lot of performance potential.
Without further ado, let’s check out the DS1621+ and the other goodies Synology sent over.
Synology moved away from flashy packaging years back. I like the subtle cardboard packaging because it lets the quality of the product speak for itself.
The size of the box makes the NAS unit feel larger than it actually is. There is ample protection from shipping damage with foam protecting the unit from all sides. The NAS itself is wrapped in plastic to keep dust out.
A small cardboard box includes a pair of high quality Ethernet cables and a standard power cable. A small bag of screws and the drive bay keys are also contained within. From what I can see, the screws are only needed for mounting 2.5-inch drives.
The unit itself has a heavy, high quality feel to it. The outer shell and back panel are metal; only the front panel and drive bays are constructed of plastic. Six hot-swap SATA drive bays are accessible from the front of the unit. Two 92mm fans dominate the back and line up perfectly behind all six drive bays, which should provide good, directed airflow.
There are two USB 3.0 ports at the rear (and one at the front), as well as four 1GbE NICs and a pair of eSATA connectors. The eSATA ports can be used for Synology’s expansion units. With two DX517s, you could have up to 16 drives in total.
Synology was kind enough to include three of their self-branded 8TB HAT5300 mechanical drives with the NAS unit. From what I can see, these are manufactured by Toshiba and are 7200RPM models. Synology supports a large number of mechanical drives from a variety of manufacturers, but supplying their own removes the guesswork for customers and guarantees 100% compatibility.
Since I plan on using this NAS in my VMware home lab, 10GbE networking will be essential. Synology provided me with their brand new E10G21-F2 SFP+ card. Synology supports a pretty long list of 10Gbps NICs on some of their older NAS units, but the list is short for the DS1621+ at this time. I suspect they are still testing cards for compatibility, as this NAS is still quite new. Similar to their branded HDDs, going with a Synology-branded NIC ensures 100% compatibility. Synology sells 10GBASE-T models as well if you aren’t using SFP+ DACs or optics.
I recently deployed an all-NVMe based vSAN configuration in my home lab. I’ll be posting more information on my setup soon, but I decided to use OEM Samsung based SSDs. I’ve got 256GB SM961 MLC based drives for my cache tier, and larger 1TB enterprise-grade PM953s for capacity. These drives are plenty quick for vSAN and can be had for great prices on eBay if you know where to look.
Being OEM drives, they don’t have any heatsinks and are pretty bare. As I started running some performance tests using synthetic tools like CrystalDiskMark and ATTO, I began to see instability. My guest running the test would completely hang after a few minutes of testing, and I’d be forced to reboot the ESXi host to recover.
Looking through the logs, it became clear what had happened:
2019-08-16T15:43:26.083Z cpu0:2341677)nvme:AsyncEventReportComplete:3050:Smart health event: Temperature above threshold
2019-08-16T15:43:26.087Z cpu9:2097671)nvme:NvmeExc_ExceptionHandlerTask:317:Critical warnings detected in smart log , failing controller
2019-08-16T15:43:26.087Z cpu9:2097671)nvme:NvmeExc_RegisterForEvents:370:Async event registration requested while controller is in Health Degraded state.
One of my NVMe drives had overheated! The second time I tried the test, I watched more closely.
Sure enough, it wasn’t the older PM953s overheating, but the newer Polaris-based SM961 cache drives. As soon as the heavy writes started, the drive’s temperature steadily increased until it approached 70°C. The moment it hit 70, the guest hung. Looking more closely in ESXi, I could see that the drive completely disappeared – it was no longer listed as an NVMe device or HBA in the system. It appears that this is a safety measure to stop the controller from cooking itself to the point of permanent damage. Since I had no idea it was running so hot, I’d say I’m thankful for this feature – but nonetheless, I’d have to figure out some way to keep these drives cooler.
ESXi has a limited implementation of SMART monitoring and can pull a few specific metrics. Thankfully, drive temperature is one of them. First, I needed to get the t10 identifier for my NVMe drives:
[root@esx-e1:~] esxcli storage core device list |grep SAMSUNG
Display Name: Local NVMe Disk (t10.NVMe____SAMSUNG_MZVPW256HEGL2D000H1______________6628B171C9382499)
Devfs Path: /vmfs/devices/disks/t10.NVMe____SAMSUNG_MZVPW256HEGL2D000H1______________6628B171C9382499
Model: SAMSUNG MZVPW256
Display Name: Local NVMe Disk (t10.NVMe____SAMSUNG_MZ1LV960HCJH2D000MU______________1505216B24382888)
Devfs Path: /vmfs/devices/disks/t10.NVMe____SAMSUNG_MZ1LV960HCJH2D000MU______________1505216B24382888
Model: SAMSUNG MZ1LV960
Running a four second refresh interval using ‘watch’ is a useful way to monitor the drive under stress.
[root@esx-e1:~] watch -n 4 "esxcli storage core device smart get -d t10.NVMe____SAMSUNG_MZVPW256HEGL2D000H1______________6628B171C9382499"
Parameter                     Value  Threshold  Worst
----------------------------  -----  ---------  -----
Health Status                 OK     N/A        N/A
Media Wearout Indicator       N/A    N/A        N/A
Write Error Count             N/A    N/A        N/A
Read Error Count              N/A    N/A        N/A
Power-on Hours                974    N/A        N/A
Power Cycle Count             62     N/A        N/A
Reallocated Sector Count      0      95         N/A
Raw Read Error Rate           N/A    N/A        N/A
Drive Temperature             35     70         N/A
Driver Rated Max Temperature  N/A    N/A        N/A
Write Sectors TOT Count       N/A    N/A        N/A
Read Sectors TOT Count        N/A    N/A        N/A
Initial Bad Block Count       N/A    N/A        N/A
As you can see, the maximum temperature is listed as 70°C. This isn’t a suggestion, as I’ve come to learn the hard way.
To get things cooler, I decided to move my fans around in my Antec VSK4000 cases. My lab is geared toward silence more than cooling, so the airflow near the PCIe slots is pretty poor. I’ve now got a 120mm fan on the side panel cooling the slots directly. This benefits my Solarflare 10Gbps NICs as well, which can get quite toasty. This helped significantly, but if I leave a synthetic test running long enough, the drive will eventually hit 70°C again. Clearly, I’ll need to add passive heatsinks to the SM961s if I want to keep them cool in these systems.
Realistically, it’s only synthetic and very heavy write tests that seem to get the temperature climbing to those levels. It’s unlikely that day-to-day use would cause a problem. Nonetheless, I’m going to look into heatsinks for the drives. They can be had for $5-10 on Amazon, so it seems like a small investment for some extra peace of mind.
The moral of the story – keep an eye on your NVMe controller temps!
I’ve been using FreeNAS for several years now for both block and NFS storage in my home lab with great success. For more information on my most recent FreeNAS build, you can check out the series here.
Although I’ve been quite pleased with this setup, I had to repurpose the SSDs in the box and had yet another USB boot device failure. This meant I had to reinstall FreeNAS and left me with just a single ZFS pool on a pair of 2TB mechanical drives. It didn’t feel right to have a full system up and running for a pair of 2TB drives when I could run them just fine in my management ESXi host. Not to mention the fact that I’ve got 224GB of RAM available there to provide a much larger L1 ARC cache.
In part 2 of my FreeNAS build series, I looked at using VT-d to passthrough a proper LSI SAS HBA to a VM. This is really the best possible virtual FreeNAS configuration as it bypasses all of the hypervisor’s storage stack and grants direct access to the HBA and drives. I considered using this setup, but I didn’t think it was worth the extra power consumption and cooling needed for the toasty PERC H200 card I’ve been using. Since I wanted to preserve all data on the drives, RDMs seemed to be the next logical solution. This isn’t as ‘pure’ as the VT-d solution, but it still gives the VM full block access to the drives in the system. At any rate, it was worth a try!
Disclaimer: If you are using ZFS and FreeNAS for production purposes or for any critical data that you care about, using a proper physical setup is important. I wouldn’t recommend virtualizing FreeNAS or any other ZFS based storage system for anything but testing or lab purposes.
What I hoped to do was the following:
Take the 2x2TB Western Digital hard drives out of the Dell T110.
Re-install the 2x2TB drives in my Intel S2600 management host on the integrated SATA controller.
Create a new FreeNAS virtual machine.
Add the two drives to the VM as virtual mode RDMs (see the sketch after this list).
Import the existing ZFS volume that is striped across these two drives in FreeNAS.
Re-create the iSCSI target and NFS shares and have access to all existing data in the pool! (assuming all goes well).
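For the RDM step, the mapping files can be created from the ESXi shell with vmkfstools. A minimal sketch – the device identifier and datastore path are placeholders you’d substitute with your own:

# List the local disks to find the identifiers of the two 2TB drives.
ls /vmfs/devices/disks/

# Create a virtual compatibility mode (-r) RDM pointer file on an existing datastore.
vmkfstools -r /vmfs/devices/disks/<wd-2tb-device-id> /vmfs/volumes/<datastore>/freenas/wd2tb-1-rdm.vmdk

The resulting .vmdk pointer file can then be attached to the VM as an existing hard disk.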
Creating a new FreeNAS VM
Once I got the two drives installed in my Intel S2600 management host, I created a new VM and got the FreeNAS OS installed. Below is the virtual hardware configuration I used:
Guest OS type: Other, FreeBSD 64-bit
CPUs: 2x vCPUs
Memory: 16GB (a minimum of 8GB is required)
Hard Disk: 16GB (for the FreeNAS OS boot device, a minimum of 8GB is required)
New SCSI controller: LSI Logic SAS
Network adapter type: VMXNET3
CD/DVD Drive: Mount the FreeNAS 11.2 ISO from a datastore
You’ll notice that some of the options I selected are not defaults for FreeBSD based VMs. This includes the LSI SAS adapter, and the VMware VMXNET3 NIC. LSI Parallel is the default for FreeBSD VMs, but the SAS adapter works well with all recent BSD builds. The same holds true for the VMXNET3 adapter, which has many benefits over the emulated E1000 adapter type.
Taking the time to remove LUNs correctly is worth the effort and prevents all sorts of complications.
This is admittedly a well-covered topic in both the VMware public documentation and in blogs, but I thought I’d provide my perspective on this as well in case it may help others. Unfortunately, improper LUN removal was something I encountered all too often when I worked in GSS years back.
Having done a short stint on the VMware storage support team, I knew all too well the chaos that would ensue after improper LUN decommissioning. ESX 4.x was particularly bad when it came to handling unexpected storage loss. Often hosts would become unmanageable and reboots were the only way to recover. Today, things are quite different. VMware has made many strides in these areas, including better host resiliency in the face of APD (all paths down) events, as well as introducing PDL (permanent device loss) several years back. Despite these improvements, you still don’t want to yank storage out from under your hypervisors.
Today, I’ll be decommissioning an SSD drive from my FreeNAS server, which will require me to go through these steps.
Update: Below is a recent (2021) video I did on the process in vSphere 7.0:
Step 1 – Evacuate!
Before you even consider nuking a LUN from your SAN, you’ll want to ensure all VMs, templates and files have been migrated off. The easiest way to do this is to navigate to the ‘Storage’ view in the Web Client, and then select the datastore in question. From there, you can click the VMs tab. If you are running 5.5 or 6.0, you may need to go to ‘Related Objects’ first, and then Virtual Machines.
In my case, you can see that the datastore shared-ssd still has a VM on it that will need to be migrated. I was able to use Storage vMotion without interrupting the guest.
Templates do not show up in the normal view, so be sure to check specifically for these as well. Remember, you can’t migrate templates. You’ll need to convert them to VMs first, then migrate them and convert them back to templates. I didn’t care about this one, so I just deleted it from disk.
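For reference, once the datastore is completely empty, the unmount and detach steps that follow can also be done from the ESXi shell. A rough sketch – the datastore label matches this example, but the device identifier is a placeholder:

# Unmount the empty VMFS datastore from the host.
esxcli storage filesystem unmount --volume-label=shared-ssd

# Detach the backing device before unpresenting the LUN on the array.
esxcli storage core device set --state=off --device=<naa-id-of-lun>

# Once the LUN has been removed from the SAN, rescan to clean things up.
esxcli storage core adapter rescan --all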
I have recently rebuilt my home lab – an all too common occurrence due to the number of times I intentionally try to break things. In the process of rebuilding, I had some ISO files I wanted to copy over to a datastore. The process failed and the Web Client greeted me with an uncharacteristically long error message.
The exact text reads:
“The operation failed for an undetermined reason. Typically, this problem occurs due to certificates that the browser does not trust. If you are using self-signed or custom certificates, open the URL below in a new browser tab and accept the certificate, then retry the operation.”
In my case, the URL that it listed was to one of my ESXi hosts in the compute-a cluster called esx-a2. The error then goes on to reference VMware KB 2147256.
It may seem odd that the vSphere Client would be telling you to visit a random ESXi host’s UI address when you are trying to upload a file via vCenter. But if you stop to think about it for a second, vCenter has no access whatsoever to your datastores. Whether you are trying to create a new VMFS datastore, upload a file or even just browse, vCenter must rely on an ESXi host with the necessary access to do the actual legwork. That ESXi host then relays the information back through the Web Client.
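If you’d like to inspect the certificate a host is presenting before accepting it, you can check it from any machine with openssl. A quick sketch – the FQDN is a hypothetical stand-in for esx-a2:

# Print the subject, issuer and validity dates of the certificate presented by the host.
echo | openssl s_client -connect esx-a2.lab.local:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates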
I’ve recently been working on a scripting project to orchestrate the power up and power down of my entire lab environment. As part of this, I’ve been using IPMI commands to power on physical servers in the correct order and at the correct time.
As I discussed in my recent FreeNAS Build Series, I’ve been using a Dell T110 tower server for storage purposes in my lab. Being an entry-level server, the T110 has a very trimmed-down iDRAC BMC (Baseboard Management Controller) that doesn’t have a dedicated NIC or a web-based management page. Despite its limitations, I can still use the IPMI protocol to gather information and to run simple tasks, like powering it on and off.
pi@raspberrypi:~ $ ipmitool -I lanplus -H 172.16.10.67 -U root -P "vmware" lan print 1
Set in Progress         : Set Complete
Auth Type Support       : NONE MD2 MD5 PASSWORD
Auth Type Enable        : Callback : MD2 MD5
                        : User     : MD2 MD5
                        : Operator : MD2 MD5
                        : Admin    : MD2 MD5
                        : OEM      :
IP Address Source       : Static Address
IP Address              : 172.16.10.67
Subnet Mask             : 255.255.255.0
MAC Address             : b8:ac:6f:92:0b:e9
SNMP Community String   : public
IP Header               : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
Default Gateway IP      : 172.16.10.1
Default Gateway MAC     : 00:00:00:00:00:00
Backup Gateway IP       : 0.0.0.0
Backup Gateway MAC      : 00:00:00:00:00:00
802.1q VLAN ID          : Disabled
802.1q VLAN Priority    : 0
Above you can see a simple IPMI query using the ipmitool application available for Linux and other operating systems. In this example, I’m pulling the network configuration of the BMC.
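Since the goal of the project is to orchestrate power up and power down, the chassis power commands are the ones doing the real work. A minimal sketch using the same connection parameters as above:

# Check the current power state of the server.
ipmitool -I lanplus -H 172.16.10.67 -U root -P "vmware" chassis power status

# Power the server on. 'soft' requests a graceful ACPI shutdown, 'off' forces power off.
ipmitool -I lanplus -H 172.16.10.67 -U root -P "vmware" chassis power on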
I hadn’t had a need to use IPMI with the Dell T110 until recently, but was surprised to see that the BMC was not responding to ping. Thinking the BMC was just hung up, I did a cold power cycle and double checked the configuration. After several frustrating reboots, it became clear to me that FreeNAS/FreeBSD was not playing nicely with the Dell BMC. It appeared that it would work just fine until the FreeBSD kernel used by FreeNAS started loading. As soon as the bge driver claimed the BCM5722 card, the BMC couldn’t be accessed over the network.
To make things even more frustrating, it would not recover when the machine was shut down or rebooted. I could only get the BMC on the network again after doing a cold power-cycle of the server, or after going into the BMC configuration, changing something, and rebooting.
After doing some digging, I came across a thread on the FreeBSD forum that described my symptoms exactly. I’m not the only one who has run into this issue with Dell BMCs and shared Broadcom adapters in FreeBSD. This thread then led me to FreeBSD bug 196944 regarding a regression in the Broadcom bge driver. It looks like this has actually been broken for some time – all the way back to FreeBSD 9.2 – and is still a problem in 11.1 as well.
A few people were able to work around this issue by recompiling the kernel with the Broadcom driver from back in FreeBSD 9.1. I really didn’t feel comfortable doing this level of tinkering with FreeNAS – especially since any subsequent FreeNAS patches would likely just break it again.
Thankfully, someone in comment 6 of the bug describes a potential workaround that involves nothing more than enabling the PXE bootrom of the onboard Broadcom adapter in the BIOS. This was reported as having mixed results on varying models of Dell servers, but I was willing to give it a try. After changing my onboard NIC from ‘Enabled’ to ‘Enabled with PXE’ in the BIOS, the problem disappeared!
If you have this problem, give it a shot. It’s a simple workaround and the only downside is the extra 2-3 seconds at boot up.