Preventing MikroTik RouterOS Bridge MAC Changes

I noticed that my Windows 10 VM kept becoming inaccessible after shutting down and powering up my lab. As part of the power-down process, I was cutting power to the MikroTik CRS309 switch, which is the default gateway for the VMs in VLAN 1. After opening a console to the Windows VM, I discovered that the network discovery feature was detecting a new network and prompting me to choose whether it was trusted. This discovery is based on the MAC address of the default gateway – and sure enough, it was changing after each power-up of the CRS309.

After doing some research, it seems that this is expected behaviour. The bridge MAC is auto-selected from one of the bridge ports at boot-up, so there is a good chance it will change after each boot. If I left my switch up 24/7, this wouldn’t be a problem, but since I don’t, I need to keep things consistent.

The MikroTik wiki mentions two relevant options – admin-mac and auto-mac. Used together (auto-mac=no along with a specified admin-mac), they force the bridge to use a static MAC address. I just reused the currently auto-generated MAC for this purpose.

You can get the current bridge configuration using the following command:

[admin@MikroTik] > /interface bridge print
Flags: X - disabled, R - running
0 R name="bridge1" mtu=auto actual-mtu=9000 l2mtu=10218 arp=enabled arp-timeout=auto mac-address=4E:A4:50:F4:E7:1C protocol-mode=rstp fast-forward=yes igmp-snooping=no auto-mac=yes ageing-time=5m priority=0x8000 max-message-age=20s
forward-delay=15s transmit-hold-count=6 vlan-filtering=yes ether-type=0x8100 pvid=1 frame-types=admit-all ingress-filtering=no dhcp-snooping=no

I then forced a specific MAC address using the following:

[admin@MikroTik] > /interface bridge set bridge1 admin-mac=4E:A4:50:F4:E7:1C auto-mac=no

And you can validate that the change stuck using the print command again:

[admin@MikroTik] > /interface bridge print
Flags: X - disabled, R - running
0 R name="bridge1" mtu=auto actual-mtu=9000 l2mtu=10218 arp=enabled arp-timeout=auto mac-address=4E:A4:50:F4:E7:1C protocol-mode=rstp fast-forward=yes igmp-snooping=no auto-mac=no admin-mac=4E:A4:50:F4:E7:1C ageing-time=5m
priority=0x8000 max-message-age=20s forward-delay=15s transmit-hold-count=6 vlan-filtering=yes ether-type=0x8100 pvid=1 frame-types=admit-all ingress-filtering=no dhcp-snooping=no
[admin@MikroTik] >

This had the desired effect. The MAC address now stays consistent at every boot-up!
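
As a quick sanity check from the Windows guest, you can confirm the gateway MAC no longer changes after a switch reboot by looking at the ARP cache. A minimal example – the gateway IP below is just a placeholder for your own:

C:\> arp -a 192.168.1.1 | findstr /i "4e-a4-50-f4-e7-1c"

If the entry keeps showing the admin-mac you configured, network discovery should stop flagging a ‘new network’ after each power cycle.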

Overheating NVMe Flash Drives

I recently deployed an all-NVMe vSAN configuration in my home lab. I’ll be posting more information on my setup soon, but I decided to use OEM Samsung-based SSDs. I’ve got 256GB MLC-based SM961 drives for my cache tier, and larger 1TB enterprise-grade PM953s for capacity. These drives are plenty quick for vSAN and can be had for great prices on eBay if you know where to look.

[Image nvmeheat-1: The Samsung Polaris-based SM961 is similar to the 960 Pro and well suited for vSAN caching.]

Being OEM drives, they don’t have any heatsinks and are pretty bare. As I started running performance tests using synthetic tools like CrystalDiskMark and ATTO, I began to see instability. The guest running the test would completely hang after a few minutes of testing and I’d be forced to reboot the ESXi host to recover.

Looking through the logs, it became clear what had happened:

2019-08-16T15:43:26.083Z cpu0:2341677)nvme:AsyncEventReportComplete:3050:Smart health event: Temperature above threshold
2019-08-16T15:43:26.087Z cpu9:2097671)nvme:NvmeExc_ExceptionHandlerTask:317:Critical warnings detected in smart log [2], failing controller
2019-08-16T15:43:26.087Z cpu9:2097671)nvme:NvmeExc_RegisterForEvents:370:Async event registration requested while controller is in Health Degraded state.

One of my NVMe drives had overheated! The second time I tried the test, I watched more closely.

Sure enough, it wasn’t the older PM953s overheating, but the newer Polaris-based SM961 cache drives. As soon as the heavy writes started, the drive’s temperature climbed steadily until it approached 70°C. The moment it hit 70, the guest hung. Looking more closely in ESXi, I could see that the drive had completely disappeared – it was no longer listed as an NVMe device or HBA in the system. This appears to be a safety measure to stop the controller from cooking itself to the point of permanent damage. Since I had no idea it was running so hot, I’m thankful for this feature – but nonetheless, I’d have to figure out some way to keep these drives cooler.

ESXi has a limited implementation of SMART monitoring and can pull a few specific metrics. Thankfully, drive temperature is one of them. First, I needed to get the t10 identifier for my NVMe drives:

[root@esx-e1:~] esxcli storage core device list |grep SAMSUNG
t10.NVMe____SAMSUNG_MZVPW256HEGL2D000H1______________6628B171C9382499
Display Name: Local NVMe Disk (t10.NVMe____SAMSUNG_MZVPW256HEGL2D000H1______________6628B171C9382499)
Devfs Path: /vmfs/devices/disks/t10.NVMe____SAMSUNG_MZVPW256HEGL2D000H1______________6628B171C9382499
Model: SAMSUNG MZVPW256
t10.NVMe____SAMSUNG_MZ1LV960HCJH2D000MU______________1505216B24382888
Display Name: Local NVMe Disk (t10.NVMe____SAMSUNG_MZ1LV960HCJH2D000MU______________1505216B24382888)
Devfs Path: /vmfs/devices/disks/t10.NVMe____SAMSUNG_MZ1LV960HCJH2D000MU______________1505216B24382888
Model: SAMSUNG MZ1LV960

Running a four-second refresh interval using ‘watch’ is a useful way to monitor the drive under stress.

[root@esx-e1:~] watch -n 4 "esxcli storage core device smart get -d t10.NVMe____SAMSUNG_MZVPW256HEGL2D000H1______________6628B171C9382499"
Parameter Value Threshold Worst
---------------------------- ----- --------- -----
Health Status OK N/A N/A
Media Wearout Indicator N/A N/A N/A
Write Error Count N/A N/A N/A
Read Error Count N/A N/A N/A
Power-on Hours 974 N/A N/A
Power Cycle Count 62 N/A N/A
Reallocated Sector Count 0 95 N/A
Raw Read Error Rate N/A N/A N/A
Drive Temperature 35 70 N/A
Driver Rated Max Temperature N/A N/A N/A
Write Sectors TOT Count N/A N/A N/A
Read Sectors TOT Count N/A N/A N/A
Initial Bad Block Count N/A N/A N/A

As you can see, the temperature threshold is listed as 70°C. This isn’t a suggestion, as I’ve come to learn the hard way.
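
If you’d rather capture a log during a long benchmark run than stare at ‘watch’, a simple shell loop on the host works too. This is just a rough sketch using the same device ID as above – the log path and ten-second interval are arbitrary:

[root@esx-e1:~] while true; do
>   # Grab the Drive Temperature row from the SMART output and timestamp it
>   temp=$(esxcli storage core device smart get -d t10.NVMe____SAMSUNG_MZVPW256HEGL2D000H1______________6628B171C9382499 | grep "Drive Temperature")
>   echo "$(date +%H:%M:%S) $temp" >> /tmp/nvme-temp.log
>   sleep 10
> done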

To get things cooler, I decided to move the fans around in my Antec VSK4000 cases. My lab is geared toward silence more than cooling, so the airflow near the PCIe slots is pretty poor. I’ve now got a 120mm fan on the side panel cooling the slots directly. This benefits my Solarflare 10Gbps NICs as well, which can get quite toasty. It helped significantly, but if I leave a synthetic test running long enough, the drive will eventually hit 70°C again. Clearly, I’ll need to add passive heatsinks to the SM961s if I want to keep them cool in these systems.

Realistically, it’s only synthetic and very heavy write tests that seem to push the temperature to those levels. It’s unlikely that day-to-day use would cause a problem. Nonetheless, I’m going to look into heatsinks for the drives. They can be had for $5-10 on Amazon, so it seems like a small investment for some extra peace of mind.

The moral of the story – keep an eye on your NVMe controller temps!

ipmitool 1.8.11 vib for ESXi

Run ipmitool directly from the ESXi command line instead of having to boot to Linux.

I just created a packaged vib that includes ipmitool 1.8.11, which can be run directly from the ESXi CLI. I needed to modify fan thresholds to keep my slow-spinning fans from triggering critical alarms on my hosts. These fan thresholds aren’t exposed in the web UI, so I have to modify them using ipmitool. Normally, to do this I’d have to shut down the host and boot it from a Debian install on a USB stick – a bit of a pain. Why not just run ipmitool directly within ESXi instead?
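
As a taste of what this enables, here’s roughly what adjusting the lower fan thresholds looks like once the vib is installed and the binary is on your path. The sensor name ‘FAN1’ and the threshold values are examples only – list your own board’s sensors first:

# List fan sensors along with their current readings and thresholds
[root@esx-e1:~] ipmitool sensor list | grep -i fan

# Raise or lower the LNR/LCR/LNC values so slow fans don't trip critical alarms
[root@esx-e1:~] ipmitool sensor thresh FAN1 lower 100 200 300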

You can find the vib download, some background, installation instructions and example uses on the static page here.

NSX-T Troubleshooting Scenario 3 – Solution

Welcome to the third instalment of a new series of NSX-T troubleshooting scenarios. Thanks to everyone who took the time to comment on the first half of the scenario. Today I’ll be performing some troubleshooting and will show how I came to the solution.

Please see the first half for more detail on the problem symptoms and some scoping.

Getting Started

As we saw in the first half, the customer’s management cluster was in a degraded state. This was due to one manager – 172.16.1.41 – being in a wonky, half-broken state. Although we could ping it, we could not log in, and all of the services it contributes to the NSX management cluster were down.

[Screenshot: nsxt-tshoot3a-1]

What was most telling, however, was the screenshot of the VM’s console window.

[Screenshot: nsxt-tshoot3a-4]

The most important keyword there was “Read-only file system”. As many readers correctly guessed, this is a very common response to an underlying storage problem. Like most Linux distributions, the Linux-based OS used in the NSX appliances will remount its ext4 partitions read-only in the event of a storage failure. This is a protective mechanism to prevent data corruption and further data loss.

When this happens, the guest may be partially functional, but anything that requires write access to the read-only partitions will obviously be in trouble. This is why we could ping the manager appliance, but all other functionality was broken. The manager cluster uses ZooKeeper for clustering services. ZooKeeper requires consistent and low-latency write access to disk. Because this wasn’t available to 172.16.1.41, it was marked as down in the cluster.
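
If you can get to a root shell on an appliance in this state – via the console, for example – a couple of generic Linux checks will usually confirm the read-only condition. A quick sketch, nothing NSX-specific (the prompt is just illustrative):

root@nsxmanager:~# dmesg | grep -iE "ext4|read-only"
root@nsxmanager:~# mount | grep "(ro"

The first command surfaces any ext4 errors and forced read-only remounts in the kernel log, and the second shows which filesystems are currently mounted read-only.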

After discussing this with our fictional customer, we were able to confirm that ESXi host esx-e3 had experienced a total storage outage for a few minutes and that it had since been fixed. They had assumed it was unrelated because the appliance was running on esx-e1, not esx-e3.

NSX-T Troubleshooting Scenario 3

It’s been a while since I’ve posted anything, so what better way to get back into the swing of things than a troubleshooting scenario! These last few months I’ve been busy learning the ropes in my new role as an SRE supporting NSX and VMware Cloud on AWS. Hopefully I’ll be able to start releasing regular content again soon.

Welcome to the third NSX-T troubleshooting scenario! What I hope to do in these posts is share some of the common issues I run across from day to day. Each scenario will be a two-part post. The first will be an outline of the symptoms and problem statement along with bits of information from the environment. The second will be the solution, including the troubleshooting and investigation I did to get there.

The Scenario

As always, we’ll start with a fictional customer problem statement:

“I’m not experiencing any problems, but I noticed that my NSX-T 2.4.1 manager cluster is in a degraded state. One of the unified appliances appears to be down. I can ping it just fine, but I can’t seem to log in to the appliance via SSH. I’m sure I’m putting in the right password, but it won’t let me in. I’m not sure what’s going on. Please help!”

From the NSX-T Overview page, we can see that one appliance is red.

[Screenshot: nsxt-tshoot3a-2]

Let’s have a look at the management cluster in the UI:

[Screenshot: nsxt-tshoot3a-1]

The problematic manager is 172.16.1.41. It’s reporting its cluster connectivity as ‘Down’ despite being reachable via ping. All of the services, including the controller-related services, appear to be down for this appliance as well.

[Screenshot: nsxt-tshoot3a-3]

Strangely, it doesn’t appear to be accepting the admin or root passwords via SSH – we always get an ‘Access Denied’ response. We can log in to the other two appliances without issue using the same credentials.
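
Since the other two managers do accept logins, their NSX CLI can corroborate what the UI is showing. For example, from one of the healthy nodes (the prompt is just illustrative):

nsxmanager-2> get cluster status
nsxmanager-2> get cluster config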

Opening a console window to 172.16.1.41 greets us with the following:

[Screenshot: nsxt-tshoot3a-4]

Error messages from systemd-journald mentioning “Failed to write entry” continually scroll by. Hitting enter gives us the login prompt, but we immediately get the same error messages and can’t log in.

What’s Next

It seems pretty clear that there is something wrong with 172.16.1.41, but what could have caused this problem? How would you fix it, and most importantly, how would you root-cause it?

I’ll post the solution in the next day or two, but how would you handle this scenario? Let me know! Please feel free to leave a comment below or via Twitter (@vswitchzero).

An In-depth Look at SR-IOV NIC Passthrough

SR-IOV, or “Single Root I/O Virtualization”, is a very interesting feature that can provide virtual machines shared access to physical network cards installed in the hypervisor. This may sound a lot like what a virtual NIC and a vSwitch do, but the feature works very similarly to PCI passthrough, granting a VM direct access to the NIC hardware. In order to understand SR-IOV, it helps to understand how PCI passthrough works. Here is a quote from a post I did a few years ago:

“PCI Passthrough – or VMDirectPath I/O as VMware calls it – is not at all a new feature. It was originally introduced back in vSphere 4.0 after Intel and AMD introduced the necessary IOMMU processor extensions to make this possible. For passthrough to work, you’ll need an Intel processor supporting VT-d or an AMD processor supporting AMD-Vi as well as a motherboard that can support this feature.

In a nutshell, PCI passthrough allows you to give a virtual machine direct access to a PCI device on the host. And when I say direct, I mean direct – the guest OS communicates with the PCI device via IOMMU and the hypervisor completely ignores the card.”

SR-IOV takes PCI passthrough to the next level. Rather than granting a single virtual machine exclusive use of the device, SR-IOV shares or ‘partitions’ it. The device can be shared between multiple virtual machines, or even between virtual machines and the hypervisor itself. For example, a single 10Gbps NIC could be ‘passed through’ to a couple of virtual machines for direct access while also being attached to a vSwitch used by other VMs with virtual NICs and vmkernel ports. Think shared PCI passthrough.
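
On the ESXi side, a couple of esxcli commands will show which uplinks are SR-IOV capable and how many virtual functions (VFs) they expose. A quick sketch – the vmnic number and the driver-specific max_vfs parameter are examples only:

# List SR-IOV capable/enabled physical NICs
[root@esx-e1:~] esxcli network sriovnic list

# Show the virtual functions created on a specific uplink
[root@esx-e1:~] esxcli network sriovnic vf list -n vmnic4

# Enabling VFs is driver specific; for ixgbe it's a module parameter (reboot required)
[root@esx-e1:~] esxcfg-module -s max_vfs=8 ixgbe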

Continue reading “An In-depth Look at SR-IOV NIC Passthrough”

MikroTik CRS309 Configuration – Part 2

Configuring inter-VLAN routing on the MikroTik CRS309-1G-8S+IN and checking the routing table.

Welcome to part two of a series on configuring the MikroTik CRS309-1G-8S+IN for use in a VMware home lab. In part one, I discussed RouterOS and how to get VLAN trunking configured for all ports from the CLI. Today I’ll be configuring inter-VLAN routing. Keep in mind that I’m using the CRS309, but this should be applicable to the popular CRS317 and other CRS3xx switches running RouterOS as well.

As I mentioned in my unboxing and overview post, the CRS309 is a fully featured L3 switch. It uses the Marvell Prestera DX switching chip with a forwarding ASIC for line-rate L2 performance at very low power consumption levels. This is great for a home lab, because your non-routed vSAN, iSCSI and vMotion kernel traffic can take advantage of full 10Gbps line-rate throughput. To use the L3 features, however, the CRS3xx switches rely on the integrated 32-bit ARM CPU cores in the Prestera DX. This means that things like inter-VLAN routing, firewalling and other L3/L4 features are done in software. MikroTik rates the switch at around 2.5Gbps for anything that needs to be processed by the CPU cores. Nonetheless, this is still plenty of routing throughput for a home lab, and I’m really happy that they include such a huge number of these features even if they are somewhat performance limited.
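
If you’re curious how hard routed traffic is hitting those ARM cores, RouterOS can show live CPU and memory utilization, as well as a per-process breakdown, right from the CLI. Run these while pushing inter-VLAN traffic:

[admin@MikroTik] > /system resource monitor
[admin@MikroTik] > /tool profile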

VLAN Interface Creation

In part one, I relied on the CLI to get started. Now that we have a management IP configured, we can continue in the UI. To get inter-VLAN routing working on the CRS3xx series switches, there are two steps: first, we need to create VLAN interfaces with the correct VLAN ID specified; second, we need to assign an IP address to each of these interfaces.

If you recall from part one, I created a VLAN interface in VLAN 1 to get management connectivity to the switch again. That interface is visible in the UI from the Interfaces/VLAN tab:

[Screenshot: crs309-intervlan-1]

As a next step, I’ll create VLAN interfaces for the other five VLANs (11, 76, 98, 2005 and 3156). Click the ‘Add New’ button to create a new one. I’ll start with VLAN 11 as an example.

[Screenshot: crs309-intervlan-2]

In my case there were only three options I needed to change – name, VLAN ID and Interface. I like to give my VLAN interfaces the names INT_VLANX for consistency. Be sure to assign the VLAN interface to the bridge that you created previously so that it is accessible from all ports on the switch. I repeated this for all of my VLANs.
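
The same thing can be done from the CLI if you prefer. A rough sketch for VLAN 11, assuming the bridge is named bridge1 and using an arbitrary example subnet for the second step (assigning the IP address to the interface):

[admin@MikroTik] > /interface vlan add name=INT_VLAN11 vlan-id=11 interface=bridge1
[admin@MikroTik] > /ip address add address=172.16.11.1/24 interface=INT_VLAN11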

MikroTik CRS309 Configuration – Part 1

The first in a multi-part series on configuring the CRS309 for a VMware home lab. Part 1 covers 802.1q VLAN trunking.

This will be the first of several posts on configuring the new MikroTik CRS309-1G-8S+IN for use in a VMware home lab. Today’s first post will introduce MikroTik’s RouterOS and will focus on VLAN trunking. Please note that although this was written for the CRS309, this should be applicable to all CRS3xx series switches, including the popular CRS317.

RouterOS vs SwOS

MikroTik’s RouterOS firmware, which comes pre-loaded on the switch, is very feature-rich with loads of L3 functionality. It has everything from BGP and OSPF to firewalling, NAT and DHCP services – you name it. It’s a multi-purpose firmware that was originally created for MikroTik’s routers. Today, it works with many of their other products, including switches and wireless devices. Because of this, it feels more like router firmware adapted for use with switches as opposed to something designed especially for them. Configuring RouterOS on the CRS309/CRS317 can be challenging and there is a learning curve to overcome – even for those with a lot of networking experience. Things get easier once you learn the terminology and understand what does and does not apply to a switch within RouterOS.

The CRS309 also includes ‘SwOS’ – a straightforward L2 firmware that is designed especially for their switches. I found SwOS’s UI to be very intuitive and a lot more like what I’d expect to see on a switch. If you are not looking to use any of the advanced L3 features that RouterOS offers, I’d highly recommend using SwOS instead. There is no need to re-flash to move from RouterOS to SwOS and vice versa – both firmware packages coexist in the flash memory and you can easily switch back and forth as required.

Although SwOS could suffice for my needs in the home lab, I’m not one to shy away from a challenge and really want to be able to take advantage of some of the L3 features. If I could get them to work, I could potentially retire one or two virtual machines that I currently use for routing and firewalling.

802.1q VLAN Trunking

One of the first things I tried to do with the CRS309 was get a bunch of VLANs created and all the interfaces configured as 802.1q VLAN trunk ports. This is probably the most common VMware home lab requirement with 10Gbps networking. Each host will have a limited number of interfaces, so VLAN trunks are critical to separate out the various services.

When you get into the RouterOS WebFig interface, you’ll find several places where VLANs can be configured. My first attempt was under ‘Switch’ and ‘VLANs’ – sounds logical enough:

[Screenshot: crs309-vlans1]

But clearly this doesn’t work. There is also a section for VLANs under ‘Interfaces’, but this is for VLAN interface creation, which I’ll get into later. One of the first things to keep in mind with RouterOS is that a lot of the configuration relating to the switch – including VLAN configuration – is done in the ‘Bridge’ section – not the Switch section.

From the CRS3xx series manual:

“Since RouterOS v6.41 bridges provide VLAN aware Layer2 forwarding and VLAN tag modifications within the bridge. This set of features makes bridge operation more like a traditional Ethernet switch and allows to overcome Spanning Tree compatibility issues compared to configuration when tunnel-like VLAN interfaces are bridged.”

Another important thing to remember is that any VLAN-related configuration you do on the bridge does not take effect until an option called vlan-filtering is set to ‘yes’ on the bridge itself.
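
To make that concrete, here’s roughly what the relevant CLI looks like. This is a sketch only, with example port names and VLAN IDs – the full walkthrough follows in the rest of the post:

[admin@MikroTik] > /interface bridge vlan add bridge=bridge1 tagged=bridge1,sfp-sfpplus1,sfp-sfpplus2 vlan-ids=11,76,98
[admin@MikroTik] > /interface bridge set bridge1 vlan-filtering=yes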

MikroTik CRS309-1G-8S+IN

Initial impressions on a very energy efficient 10Gbps SFP+ switch and an excellent choice for a VMware home lab!

A little over a year ago I decided to get my lab 10Gbps capable. At the time, the Quanta LB6M seemed like the obvious choice with 24x SFP+ interfaces, several 1Gbps copper interfaces and a price tag of only $300 USD. It worked well in my lab, but as you can imagine, older 10Gbps technology is far from energy efficient. The switch idled at close to 130W, which wasn’t far off the draw of all three of my compute hosts put together. It was also very loud: even at their lowest setting, its seven high-RPM 40mm chassis and PSU fans were painful to be around. There is no doubt that this switch is more at home in a datacenter than a home lab. It wasn’t long before I started keeping an eye out for newer offerings based on more efficient and cost-effective technology. That’s when I came across MikroTik’s CRS3xx series “Cloud Router Switches”.

The MikroTik CRS309-1G-8S+IN

MikroTik is a technology firm based in Latvia and is well known for their unique networking products. They sell a range of switches, routers, wireless products and even caseless RouterBOARD systems for those interested in building custom routers.

I was originally looking to purchase the CRS309’s bigger brother – the CRS317. Feature-wise they are very similar, but the CRS317 doubles the number of SFP+ interfaces to sixteen. With two SFP+ ports on each of my four ESXi hosts, the CRS309’s eight SFP+ ports seemed to be sufficient. The feature that sold me on the CRS309 over the CRS317, however, was its completely passive, fanless design. The CRS317’s fans only spin up if the unit gets hot, but I liked the simplicity of a completely fanless solution.

Feature-wise, the CRS309 is a pretty impressive switch. Its state-of-the-art Marvell Prestera 98DX8208 switching chip (98DX8216 in the CRS317) is what makes this such an efficient unit. The 98DX8208 is highly integrated and is good for line-rate forwarding on all the SFP+ ports. It also includes an integrated dual-core 32-bit ARM processor running at 800MHz. You can find more information on the Marvell Prestera here. The flash storage is a pretty spartan 16MB, but this seems to be plenty for the RouterOS and SwOS firmware images to coexist on the switch. I’ll get more into the CRS309’s software in a future post.

From an L2 perspective, it’s a perfectly capable unit. It can do line-rate L1 and L2 forwarding at about 81,000Mbps aggregate, or 162,000Mbps full duplex. This is because the 98DX8208 has an ASIC switching component and all L2 forwarding is done in hardware. The L3 features of the switch, however, are all done in software and must be processed by the integrated ARM CPU cores. Based on MikroTik’s test results, about 2.5Gbps can be expected when traffic has to go through the CPU. Your mileage may vary depending on the features you use. This may not sound great, but in a home lab environment, you don’t really need high throughput for inter-VLAN routing or other L3 features. I’m most interested in having 10Gbps line-rate throughput for host-to-host vSAN, iSCSI and vMotion traffic. If you really do need faster routing, you could always spin up a VyOS VM and use your hypervisor’s horsepower for that purpose. Given the $269 USD price of the unit, I think it’s awesome that you get a full suite of L3 features, even if throughput is somewhat limited.

You can find more information and the specifications of the CRS309 at MikroTik’s site.

Manually Patching an ESXi Host from the CLI

Manually patching standalone ESXi hosts without access to vCenter or Update Manager using offline bundles and the CLI.

There are many different reasons you may want to patch your ESXi hosts. VMware regularly releases bug fixes and security patches, or perhaps you need a newer build for compatibility with another application or third-party tool. In my situation, my ESXi 6.7 U1 hosts (build 10302608) are not compatible with NSX-T 2.4.0, so I need to get them patched to at least 6.7 EP06 (build 11675023).

[Screenshot: hostupgcli-1]

Before you get started, you’ll want to figure out which patch release you want to update to. There is quite often some confusion surrounding the naming of VMware patch releases. In some cases, a build number is referenced – for example, 10302608. In other cases, a friendly name is referenced – something like 6.7 EP06 or 6.5 P03. The ‘EP’ in the name denotes an Express Patch with a limited number of fixes released outside of the regular patch cadence, whereas a ‘P’ release is a standard patch. In addition to this, major update releases are referred to as ‘U’ – for example, 6.7 U1. And to make things more confusing, a special ‘Release Name’ is quite often referenced in security bulletins and other documents. Release names generally contain the release date; for example, ESXi670-201903001 for ESXi 6.7 EP07.

The best place to start is VMware KB 1014508, which provides links to numerous KB articles that can be used to cross-reference build numbers with friendly version names. The KB we’re interested in for ESXi is KB 2143832.
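
The rest of this post walks through the process in detail, but at its core, applying a patch from the CLI boils down to a handful of esxcli commands once the offline bundle has been copied to a datastore. A sketch only – the bundle filename and image profile name below are generic examples, not the exact ones for EP06:

[root@esx-e1:~] esxcli system maintenanceMode set -e true
[root@esx-e1:~] esxcli software profile update -d /vmfs/volumes/datastore1/ESXi670-201903001.zip -p ESXi-6.7.0-20190304001-standard
[root@esx-e1:~] esxcli system shutdown reboot -r "Applying ESXi patch"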
