The 486 Restoration – Part 1

As you may have noticed in my recent Building a Retro Gaming Rig series, I’m quite passionate about 1990s-era PC hardware. Machines from this time are very nostalgic to me, as this is when I really started getting interested in PCs and technology in general. Granted, PC gaming is what initially drove my interest in hardware, but down the line I really started enjoying the hardware just for the sake of it.

I only started acquiring and collecting vintage hardware in the last year or so, but I’ve always been drawn to 486 systems. Although we had an old monochrome 8-bit machine growing up – possibly an 80286 – the first PC I was really interested in was a 486 system bought in 1994.

486-2

This won’t be the first 486 system in my collection. I got a well-maintained DEC low-profile system from my brother-in-law last summer. It’s a nice system that I hope to take a look at in another post, but it’s very integrated – everything is on-board and proprietary, so it leaves very little room for tweaking. This time, I really wanted something I could customize.

Today’s project started with an ad on Kijiji that I stumbled upon a few weeks back – a 486 tower system in “working condition”. Inspecting the posted images carefully, I could see that the system was far from complete – it was missing a video card and didn’t have a hard drive.


NSX Troubleshooting Scenario 4 – Solution

Welcome to the fourth installment of a new series of NSX troubleshooting scenarios. Thanks to everyone who took the time to comment on the first half of scenario four. Today I’ll be performing some troubleshooting and will resolve the issue.

Please see the first half for more detail on the problem symptoms and some scoping.

Getting Started

During the scoping in the first half of the scenario, we saw that the problem was squarely in the customer’s new secondary NSX deployment. Two test virtual machines – linux-r1 and linux-r2 – could not be added to any of the universal logical switches.

From the ‘Logical Switches’ view in the NSX Web Client UI, we could see that these universal logical switches were synchronized across both NSX Managers – they appeared in both the primary and secondary manager views:

tshoot4a-3

Perhaps the most telling observation, however, was the absence of distributed port groups associated with the universal logical switches on the dvs-rem switch:

tshoot4a-7

As we can see above, the port groups do exist for logical switches in the VNI 900x range. These are non-universal logical switches available only to the secondary NSX deployment.

In the host preparation section, we can see that dvs-rem is indeed the configured distributed switch for the compute-r cluster and that both hosts look good from a VTEP/VXLAN perspective:

tshoot4b-3

So why are these port groups missing? Without them, VMs simply can’t be added to the associated logical switches.

The Solution

Although you’ve probably noticed that I like to dig deep in some of these scenarios, this one is actually pretty straightforward. It’s a straightforward but all too common problem – the cluster has not been added to the universal transport zone.

tshoot4b-4

You’d be surprised how often I see this, but to be fair, it’s very easily overlooked. I sometimes need to remind myself to check all the basics first, especially when dealing with new deployments. The key symptom that raised red flags for me was the lack of auto-generated port groups on the distributed switch. Adding a cluster to a transport zone is what triggers the creation of these port groups, so if they don’t exist, transport zone membership should be the first thing checked.
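If you’d rather check for this symptom from a script than click through the Web Client, a quick inventory of the distributed switch works just as well. Below is a minimal pyVmomi sketch that lists the port groups on dvs-rem and picks out the ones that look like NSX auto-generated logical switch backings – their names normally contain ‘virtualwire’ and the segment ID. The vCenter address and credentials are just placeholders for my lab.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter address and credentials - adjust for your environment.
si = SmartConnect(host="vcenter-remote.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ssl._create_unverified_context())

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)

for dvs in view.view:
    if dvs.name != "dvs-rem":
        continue
    # NSX-V auto-generated port groups include 'virtualwire' and the
    # VNI ('sid-<vni>') in their names.
    backing = [pg.name for pg in dvs.portgroup if "virtualwire" in pg.name]
    print(f"{dvs.name}: {len(backing)} logical switch port groups")
    for name in backing:
        print(f"  {name}")

view.Destroy()
Disconnect(si)
```

If none of the universal logical switches show up in that list, transport zone membership is the first thing to verify.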

As soon as I added the compute-r cluster to the Universal TZ transport zone, an immediate slew of port group creation tasks appeared:

tshoot4b-5

I’ve now essentially told NSX that I want all the logical switches in that transport zone to span to the compute-r cluster. In NSX-V, we can think of a transport zone as a boundary spanning one or more clusters. Only clusters in that transport zone will have the logical switches available to them for use.

The concept of a ‘Universal Transport Zone’ just takes this a step further and allows clusters in different vCenter instances to connect to the same universal logical switches. The fact that we saw port groups for the 900x range of VNIs tells us that the compute-r cluster was a member of the non-universal transport zone called ‘Remote TZ’, but was missing from ‘Universal TZ’.
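For those who like to confirm this sort of thing programmatically, transport zone membership can also be pulled from the NSX API. Below is a rough Python sketch against the vdn/scopes endpoint – the manager address and credentials are placeholders, and the XML parsing is deliberately loose since the exact schema can vary a little between NSX versions.

```python
import requests
import urllib3
import xml.etree.ElementTree as ET

urllib3.disable_warnings()  # lab only - certificate verification is skipped below

# Placeholder NSX Manager address and credentials - adjust for your environment.
NSX_MGR = "https://nsxmgr-remote.lab.local"
AUTH = ("admin", "VMware1!")

# GET /api/2.0/vdn/scopes returns the configured transport zones (vdnScopes),
# including their cluster membership.
resp = requests.get(f"{NSX_MGR}/api/2.0/vdn/scopes", auth=AUTH, verify=False)
resp.raise_for_status()

root = ET.fromstring(resp.text)
for scope in root.findall("vdnScope"):
    tz_name = scope.findtext("name")
    clusters_el = scope.find("clusters")
    # Member clusters appear under <clusters>; we simply collect any <name>
    # elements beneath it, since the nesting varies slightly by version.
    members = [el.text for el in clusters_el.iter("name")] if clusters_el is not None else []
    print(f"Transport zone '{tz_name}': members {members}")
```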

Conclusion

Thanks again to everyone for posting their testing suggestions and theories! I hope you enjoyed this scenario. If you have other suggestions for troubleshooting scenarios you’d like to see, please leave a comment, or reach out to me on Twitter (@vswitchzero).

NSX Troubleshooting Scenario 4

Time for another NSX troubleshooting scenario! Welcome to the fourth installment of my new NSX troubleshooting series. What I hope to do in these posts is share some of the common issues I run across from day to day. Each scenario will be a two-part post. The first will be an outline of the symptoms and problem statement along with bits of information from the environment. The second will be the solution, including the troubleshooting and investigation I did to get there.

NSX Troubleshooting Scenario 4

As always, we’ll start with a customer problem statement:

“We recently deployed Cross-vCenter NSX for a remote datacenter location. When we try to add VMs to the universal logical switches, there are no VM vNICs in the list to add. This works fine at the primary datacenter.”

This customer’s environment is the same one outlined in scenario 3, but keep in mind that this scenario should be treated separately – forget everything from the previous one.

tshoot3a-9

The main location is depicted on the left. A three-host cluster called compute-a exists there. All of the VLAN-backed networks route through a router called vyos. The Universal Control Cluster exists at this location, as does the primary NSX Manager.

The ‘remote datacenter’ is to the right of the dashed line. The single ‘compute-r’ cluster there is associated with the secondary NSX Manager at that location. According to the customer, this secondary deployment was only recently added.


NSX 6.4.0 Upgrade Compatibility

Thinking about upgrading to NSX 6.4.0? As I discussed in my recent Ten Tips for a Successful NSX Upgrade post, it’s always a good idea to do your research before upgrading. Along with reading the release notes, checking the VMware compatibility matrices is essential.

VMware just updated some of the compatibility matrices to include information about 6.4.0. Here are the relevant links:

From an NSX upgrade path perspective, you’ll be happy to learn that any current build of NSX 6.2.x or 6.3.x should be fine. At the time of writing, this would be 6.2.9 and earlier as well as 6.3.5 and earlier.

640upg-0

NSX upgrade compatibility – screenshot from 1/17/2018.

On a positive note, VMware required a workaround for some older 6.2.x builds upgrading to 6.3.5, but this is no longer required for 6.4.0. The underlying issue that made it necessary has been resolved.
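If you want to confirm exactly which build you’re starting from before mapping out an upgrade path, the running version can be pulled from the NSX Manager appliance management API. This is just a sketch – the address, credentials and field names are assumptions from my lab, so inspect the raw response on your own manager before relying on the parsing.

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only - certificate verification is skipped

# Placeholder NSX Manager address and credentials - adjust for your environment.
NSX_MGR = "https://nsxmgr.lab.local"
AUTH = ("admin", "VMware1!")

# The appliance management API reports the running NSX Manager version.
resp = requests.get(f"{NSX_MGR}/api/1.0/appliance-management/global/info",
                    auth=AUTH, verify=False)
resp.raise_for_status()

# Field names below are what I've seen returned in the lab; check the raw
# response if your build differs.
info = resp.json().get("versionInfo", {})
print("NSX Manager version: {}.{}.{} build {}".format(
    info.get("majorVersion"), info.get("minorVersion"),
    info.get("patchVersion"), info.get("buildNumber")))
```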

From a vCenter and ESXi 6.0 and 6.5 perspective, the requirements for NSX 6.4.0 remain largely unchanged from late 6.3.x releases. What you’ll immediately notice is that NSX 6.4.0 is not supported with vSphere 5.5. If you are running vSphere 5.5, you’ll need to get to at least 6.0 U2 before considering NSX 6.4.0.

From the NSX 6.4.0 release notes:

vSphere 6.0
Supported: 6.0 Update 2, 6.0 Update 3
Recommended: 6.0 Update 3. vSphere 6.0 Update 3 resolves the issue of duplicate VTEPs in ESXi hosts after rebooting vCenter Server. See VMware Knowledge Base article 2144605 for more information.

vSphere 6.5
Supported: 6.5a, 6.5 Update 1
Recommended: 6.5 Update 1. vSphere 6.5 Update 1 resolves the issue of EAM failing with OutOfMemory. See VMware Knowledge Base article 2135378 for more information.

Note: vSphere 5.5 is not supported with NSX 6.4.0.

It doesn’t appear that the matrix has been updated yet for other VMware products that interact with NSX, such as vCloud Director.

Before rushing out to upgrade to NSX 6.4.0, be sure to check for compatibility – especially if you are using any third-party products. It may be some time before other vendors certify their products for 6.4.0.

Stay tuned for a closer look at some of the new NSX 6.4.0 features!

NSX 6.4 Now Available!

The long-anticipated NSX 6.4.0 (build 7564187) has finally been released. There is a long list of new features that I hope to cover in more depth in a future post. For now, here are a few highlights:

  • Layer-7 Distributed Firewall – The DFW now supports some layer-7 application context.
  • IDFW – The IDFW now supports user sessions via RDP.
  • HTML5 – Some of the new functionality is done in HTML5. You can see the supported functions here.
  • New Upgrade Coordinator – A single portal that simplifies the upgrade process! More on this later.
  • New HTML5 dashboard – The new NSX default home page.
  • System Scale – A new dashboard that helps you to monitor maximums and supported scale parameters.
  • Single Click Support Bundles – An awesome feature that GSS will appreciate!
  • API Improvements – The API now supports JSON in addition to XML formatting (see the sketch after this list).
  • NSX Edge Enhancements – Many improvements to BGP and NAT, plus faster HA failover.
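As a quick illustration of the JSON support mentioned above, here’s a rough sketch of requesting JSON from an existing endpoint by setting the Accept header. The manager address and credentials are placeholders, and not every endpoint necessarily honours the header, so check the response’s Content-Type.

```python
import requests
import urllib3

urllib3.disable_warnings()  # lab only - certificate verification is skipped

# Placeholder NSX Manager address and credentials - adjust for your environment.
NSX_MGR = "https://nsxmgr.lab.local"
AUTH = ("admin", "VMware1!")

# Prior to 6.4, most NSX API responses were XML only. With 6.4, an Accept
# header can request JSON instead. Some endpoints may still return XML,
# so inspect the Content-Type before parsing.
resp = requests.get(f"{NSX_MGR}/api/2.0/vdn/scopes", auth=AUTH, verify=False,
                    headers={"Accept": "application/json"})
resp.raise_for_status()
print(resp.headers.get("Content-Type"))
print(resp.text[:500])
```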

I think most people will agree that the new L7 firewall is the most exciting new feature. It really takes micro-segmentation to the next level and provides capabilities that were previously only possible with third-party add-on products.

It’s also important to note that NSX 6.4.0 isn’t just about new features – it also includes over 33 fixes for bugs found in earlier releases!

Relevant links:

Stay tuned for more! Martijn Smit at vmguru.com (@smitmartijn) did a great post for those looking for more detail on some of the new features!

NSX Troubleshooting Scenario 3 – Solution

Thanks to everyone who took the time to comment on the first half of scenario 3, both here and on Twitter. There were many great suggestions, and some were spot-on!

For more detail on the problem, some diagrams and other scoping information, be sure to check out the first half of scenario 3.

Getting Started

During the initial scoping in the first half, we didn’t really see too much out of the ordinary in the UI aside from some odd ‘red alarm’ exclamation marks on the compute-r hosts in the Host Preparation section.

More than one commenter pointed out that this needs to be investigated. I wholeheartedly agree. Despite seeing a green status for host VIB installation, firewall status and VXLAN, there is clearly still a problem. That problem is related to ‘Communication Channel Health’.

tshoot3b-1

The communication channel health check, added in NSX 6.2, makes it easy to see which hosts are having problems communicating with NSX Manager and the Control Cluster. In our case, both esx-r1 and esx-r2 are reporting problems with their control plane agent (netcpa) connections to all three controllers.
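If you’d like to confirm this from the hosts themselves rather than the UI, one quick check is whether netcpa actually has established connections to the controllers, which listen on TCP port 1234. Here’s a rough paramiko sketch that runs the check over SSH on both hosts – the host names and credentials are placeholders, and it assumes SSH is enabled on the hosts.

```python
import paramiko

# Placeholder host names and credentials for this lab - adjust as needed.
HOSTS = ["esx-r1.lab.local", "esx-r2.lab.local"]
USERNAME = "root"
PASSWORD = "VMware1!"

# netcpa maintains TCP connections to the NSX controllers on port 1234, so an
# ESTABLISHED entry per controller is the healthy state.
CHECK_CMD = "esxcli network ip connection list | grep ':1234'"

for host in HOSTS:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=USERNAME, password=PASSWORD)
    _, stdout, _ = client.exec_command(CHECK_CMD)
    output = stdout.read().decode().strip()
    print(f"--- {host} ---")
    print(output if output else "No controller connections on port 1234!")
    client.close()
```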


NSX Troubleshooting Scenario 3

Welcome to the third installment of my new NSX troubleshooting series. What I hope to do in these posts is share some of the common issues I run across from day to day. Each scenario will be a two-part post. The first will be an outline of the symptoms and problem statement along with bits of information from the environment. The second will be the solution, including the troubleshooting and investigation I did to get there.

NSX Troubleshooting Scenario 3

I’ll start off again with a brief customer problem description:

“We’ve recently deployed Cross-vCenter NSX for a remote datacenter location. All of the VMs at that location don’t have connectivity. They can’t ping their gateway, nor can they ping VMs on other hosts. Only VMs on the same host can ping each other.”

This is a pretty vague description of the problem, so let’s have a closer look at this environment. To begin, let’s look at the high-level physical interconnection between datacenters in the following diagram:

tshoot3a-9

There isn’t a lot of detail above, but it helps to give us some talking points. The main location is depicted on the left. A three-host cluster called compute-a exists there. All of the VLAN-backed networks route through a router called vyos. The Universal Control Cluster exists at this location, as does the primary NSX Manager.
