NSX 6.4.3 Now Available!

Express maintenance release fixes two discovered issues.

If it feels like 6.4.2 was just released, you’d be correct – it came out only three weeks ago. The new 6.4.3 release (build 9927516) is what’s referred to as an express maintenance release. These releases aim to correct specific customer-identified problems as quickly as possible rather than waiting many months for the next full patch release.

In this release, only two identified bugs have been fixed. The first is an SSO issue that can occur in environments with multiple PSCs:

“Fixed Issue 2186945: NSX Data Center for vSphere 6.4.2 will result in loss of SSO functionality under specific conditions. NSX Data Center for vSphere cannot connect to SSO in an environment with multiple PSCs or STS certificates after installing or upgrading to NSX Data Center for vSphere 6.4.2.”

The second is an issue with IP sets that can impact third-party security products that use NetX service insertion – Palo Alto Networks and Check Point, for example:

“Issue 2186968: Static IPset not reported to containerset API call. If you have service appliances, NSX might omit IP sets in communicating with Partner Service Managers. This can lead to partner firewalls allowing or denying connections incorrectly. Fixed in 6.4.3.”

You can find more information on these problems in VMware KB 57770 and KB 57834.

So knowing that these are the only two fixes included, the question obviously becomes – do I really need to upgrade?

If you are running 6.4.2 today, you might not need to. If you have more than one PSC associated with the vCenter Server that NSX Manager connects to, or if you use third-party firewall products that work in conjunction with NSX, the answer is yes. If you don’t, there is really no benefit to upgrading to 6.4.3, and it would be best to save your efforts for the next major release.

That said, if you were already planning an upgrade to 6.4.2, it only makes sense to go to 6.4.3 instead. You’d get all the benefits of 6.4.2 plus these two additional fixes.
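If you’re not certain which build you’re running today, the NSX Manager appliance summary page will tell you, or you can query the appliance management API. Here’s a minimal sketch using curl – the hostname and credentials below are placeholders for your own environment:

# Returns the NSX Manager version and build number
# (hostname and credentials are placeholders)
curl -k -u 'admin:VMware1!' \
  https://nsxmanager.lab.local/api/1.0/appliance-management/global/info

The response includes the major, minor and patch version along with the build number, which should read 9927516 once you’re on 6.4.3.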

Kudos to the VMware NSBU engineering team for their quick work in getting these issues fixed and 6.4.3 released.


Manual Upgrade of NSX Host VIBs

Complete manual control of the NSX host VIB upgrade process without the use of vSphere DRS.

NSX host upgrades are well automated these days. By taking advantage of ‘fully automated’ DRS, hosts in a cluster can be evacuated, put in maintenance mode, upgraded, and even rebooted without any user intervention. By relying on DRS for resource scheduling, NSX avoids upgrading too many hosts simultaneously, and the process can generally be completed without end-users even noticing.

But what if you don’t want this level of automation? Maybe you’ve got very sensitive VMs that can’t be migrated, or VMs pinned to hosts for some reason. Or maybe you just want maximum control of the upgrade process and which hosts are upgraded – and when.

There is no reason why you can’t have full control of the host upgrade process and leave DRS in manual mode. This is indeed supported.

Most of the documentation and guides out there assume that people will want to take advantage of DRS-driven upgrades, but that isn’t the only supported method. Today I’ll be walking through the manual process in my lab as I upgrade to NSX 6.4.1.

Step 1 – Clicking the Upgrade Link

Once you’ve upgraded your NSX manager and control cluster, you should be ready to begin tackling your ESXi host clusters. Before you proceed, you’ll need to ensure your host clusters have DRS set to ‘Manual’ mode. Don’t disable DRS – that will get rid of your resource pools. Manual mode is sufficient.
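If you’d rather script this step than click through the Web Client, something like the govc CLI can flip the automation level – a rough sketch, assuming govc is already configured against your vCenter and substituting your own inventory path:

# Set DRS to manual mode; DRS stays enabled, so resource pools are preserved
# (the cluster inventory path below is a placeholder)
govc cluster.change -drs-mode manual /Lab-Datacenter/host/Compute-Cluster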

Next, you’ll need to browse to the usual ‘Installation’ section in the UI and click on the ‘Host Preparation’ tab. From here, it’s now safe to click the ‘Upgrade Available’ link on the cluster to begin the upgrade process. Because DRS is in manual mode, nothing will actually happen. Hosts can’t be evacuated, and as a result, VIBs can’t be upgraded. In essence, the upgrade has started, but it immediately stalls and awaits manual intervention.

 

upgnodrs-3
This upgrade is essentially hung up waiting for hosts to enter maintenance mode.

 

In 6.4.1 as shown above, a clear banner message is displayed reminding you that DRS is in manual mode and that hosts must be manually put in maintenance mode.
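From this point the flow is per host: evacuate the VMs yourself, then drop the host into maintenance mode so the stalled upgrade can continue. A rough sketch from the ESXi shell is below – note that the NSX VIB names vary by release (esx-vxlan and esx-vsip on older versions, esx-nsxv on newer ones), so adjust the grep for your build:

# Check which NSX VIBs are installed (run before and after the upgrade)
esxcli software vib list | grep -E 'esx-vxlan|esx-vsip|esx-nsxv'

# Once the VMs have been migrated off, enter maintenance mode so the
# VIB upgrade can proceed on this host
esxcli system maintenanceMode set --enable true

# After the VIB upgrade (and reboot, if required) completes, exit
esxcli system maintenanceMode set --enable false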

Continue reading “Manual Upgrade of NSX Host VIBs”

NSX 6.4.2 Now Available!

It’s always an exciting day when a new build of NSX is released. As of August 21st, NSX 6.4.2 (Build 9643711) is now available for download. Despite being just a ‘dot’ release, VMware has included numerous functional enhancements in addition to the usual bug fixes.

One of the first things you’ll probably notice is that VMware is now referring to ‘NSX for vSphere’ as ‘NSX Data Center for vSphere’. I’m not sure the name has a good ring to it, but we’ll go with that.

A few notable new features:

More HTML5 Enabled Features: It seems VMware is adding additional HTML5 functionality with each release now. In 6.4.2, we can now access the TraceFlow, User Domains, Audit Logs and Tasks and Events sections from the HTML5 client.

Multicast Support: This is a big one. NSX 6.4.2 now supports both IGMPv2 and PIM sparse mode on DLRs and ESGs. I hope to take a closer look at these changes in a future post.

MAC Limit Increase: Traditionally, we’ve always recommended limiting each logical switch to a /22 or smaller network to avoid exceeding the 2048 MAC entry limit. NSX 6.4.2 now doubles this to 4096 entries per logical switch.

L7 Firewall Improvements: Additional L7 contexts added including EPIC, Microsoft SQL and BLAST AppIDs.

Firewall Rule Hit Count: Easily see which firewall rules are being hit, and which are not, with these counters.

Firewall Section Locking: Great for allowing multiple people to work on the DFW simultaneously without conflicting with each other.

Additional Scale Dashboard Metrics: There are 25 new metrics added to ensure you stay within supported limits.

Controller NTP, DNS and Syslog: This is now finally exposed in the UI and fully supported. As someone who is frequently looking at log bundles, it’ll be nice to finally have accurate timekeeping on controller nodes.

On the resolved issues front, I’m happy to report that 6.4.2 includes 21 documented bug fixes. You can find the full list in the release notes, but a couple of very welcome ones include issues 2132361 and 2147002. Those who are using Guest Introspection on NSX 6.4.1 should consider upgrading as soon as possible due to the service and performance problems outlined in KB 56734. NSX 6.4.2 corrects this problem.

Another issue fixed in 6.4.2 – though not listed in the release notes (it may have been missed) – is the full ESG tmpfs partition issue in 6.4.1. You can find more information on this issue in KB 57003 as well as in a recent post I did on it here.


I’m looking forward to getting my lab upgraded and will be trying out a few of these new features. Remember, if you are planning to upgrade, be sure to do the necessary planning and preparation.

Using the Upgrade Coordinator in NSX 6.4

If you’ve ever gone through an NSX upgrade, you know how many components there are to upgrade. You’ve got your NSX manager appliances, control cluster, ESXi host VIBs, edges, DLR and even guest introspection appliances. In the past, every one of these needed to be upgraded independently and in the correct order.

VMware hopes to make this process a lot more straightforward with the new ‘Upgrade Coordinator’ feature, which is included as of 6.4.0 in the HTML5 client.

The aim of the upgrade coordinator is to create an upgrade plan or checklist and then execute it in the correct order. There are many aspects of the upgrade plan that can be customized, but for those looking for maximum automation, a single-click upgrade option exists as well.

It is important to note that although the upgrade coordinator helps to take some of the guess work out of upgrading, there are still tasks and planning you’ll want to do ahead of time. If you haven’t already, please read my Ten Tips for a Successful NSX Upgrade post.

Today I’ll be using the upgrade coordinator to go from 6.3.3 to 6.4.0 and walk you through the process.

Upgrading NSX Manager

Although the upgrade coordinator plan covers numerous NSX components, NSX manager is not one of them. You’ll still need to use the good old manager UI upgrade process as described on page 36 of the NSX 6.4 upgrade guide. Thankfully, this is the easiest part of the upgrade.

You’ll also notice that I can use the upgrade coordinator for my lab upgrade even though I’m currently at a 6.3.x release. This is because NSX Manager is upgraded first, which adds the management-plane functionality used for the rest of the upgrade.

Note: If you are using a Cross-vCenter deployment of NSX, be sure to upgrade your primary, followed by all secondary managers before proceeding with the rest of the upgrade.

upgco-1

Upgrading NSX Manager to 6.4.x should look very familiar as the process really hasn’t changed. Be sure to heed the warning banner about taking a backup before proceeding. For more info on this, please see my Ten Tips for a Successful NSX Upgrade post.

Continue reading “Using the Upgrade Coordinator in NSX 6.4”

Missing Labels in the HTML5 Plugin with NSX 6.4.

If you recently upgraded to NSX 6.4, you are probably anxious to check out the new HTML5 plugin. VMware added some limited HTML5 functionality, including the new dashboard and upgrade coordinator, as well as packet capture and support bundle collection tools. After upgrading NSX Manager, you may notice that the plugin does not look the way it should. Many labels are missing. Rather than seeing tab titles like ‘Overview’ and ‘System Scale’, you see ‘dashboard.button.label.overview’ and ‘dashboard.button.label.systemScale’:

html5labels-1

Obviously, things aren’t displaying as they should be, and some views – like the upgrade coordinator – are practically unusable:

html5labels-2

Continue reading “Missing Labels in the HTML5 Plugin with NSX 6.4.”

NSX 6.3.6 Now Available!

As of March 29th, the long anticipated NSX 6.3.6 release is now available to download from VMware. NSX 6.3.6 with build number 8085122 is a maintenance release and includes a total of 20 documented bug fixes. You can find details on these in the Resolved Issues section of the NSX 6.3.6 release notes.

Aside from bug fixes, there are a couple of interesting changes to note. From the release notes:

“If you have more than one vSphere Distributed Switch, and if VXLAN is configured on one of them, you must connect any Distributed Logical Router interfaces to port groups on that vSphere Distributed Switch. Starting in NSX 6.3.6, this configuration is enforced in the UI and API. In earlier releases, you were not prevented from creating an invalid configuration.”

Since confusion with multiple DVS switches is something I’ve run into with customers in the past, I’m happy to see that this is now being enforced.

Another great addition is an automatic backup function included in 6.3.6. From the public documentation:

“When you upgrade NSX Manager to NSX 6.3.6, a backup is taken and saved locally as part of the upgrade process. You must contact VMware customer support to restore this backup. This automatic backup is intended as a failsafe in case the regular backup fails.”

As part of the upgrade process, a backup file is saved to the local filesystem of the NSX Manager as an extra bit of insurance. It’s important to note, however, that this does not remove the need to back up prior to upgrading. Consider this the backup of last resort in case something goes horribly wrong.

Another point to note is that NSX 6.3.6 continues to be incompatible with upgrades from 6.2.2, 6.2.1 or 6.2.0. You can see VMware KB 51624 for more information, but don’t try it – it won’t work and you’ll be forced to restore from backup. Upgrading to 6.2.9 before going to 6.3.6 is the correct workaround. I covered more about this issue here in a recent post.

There are a number of great bug fixes included in 6.3.6 – far too many for me to cover here, but a couple that I’m really happy to see include:

“Fixed Issue 2035026: Network outage of ~40-50 seconds seen on Edge Upgrade. During Edge upgrade, there is an outage of approximately 40-50 seconds. Fixed in 6.3.6.”

This one is self-explanatory – 40-50 seconds is far more downtime than you’d expect during an edge upgrade, so I’m glad to see it’s been resolved.

“Fixed Issue 2058636: After upgrading to 6.3.5, the routing loop between DLR and ESG’s causes connectivity issues in certain BGP configurations. A routing loop is causing a connectivity issue. Fixed in 6.3.6”

I hope to write a separate post on this one, but in short, some loop-prevention code was removed in 6.3.5, and because the AS path is stripped when private BGP autonomous system numbers are used, this can lead to loops. If you are running iBGP between your DLR and ESGs, this isn’t a problem, but if your AS numbers differ between DLR and ESG, you could run into this. In 6.4.0, a toggle was included to avoid stripping the AS path, so this is primarily an issue in 6.3.5.

As always, if you are planning to upgrade, be sure to thoroughly go through the release notes. I’d also recommend taking a look through my recent post ‘Ten Tips for a Successful NSX Upgrade’.


NSX 6.3.4 Now Available!

After only two months since the release of NSX 6.3.3, VMware has released the 6.3.4 maintenance release. See what’s fixed and if you really need to upgrade.

On Friday October 13th, VMware released NSX for vSphere 6.3.4. You may be surprised to see another 6.3.x version only two months after the release of 6.3.3. Unlike the usual build updates, 6.3.4 is a maintenance release containing only a small number of fixes for problems identified in 6.3.3. This is very similar to the 6.2.6 maintenance release that came out shortly after 6.2.5.

As always, the relevant detail can be found in the 6.3.4 Release Notes. You can also find the 6.3.4 upgrade bundle at the VMware NSX Download Page.

In the Resolved Issues section of the release notes, VMware outlines only three separate fixes that 6.3.4 addresses.

Resolved Issues

I’ll provide a bit of additional commentary around each of the resolved issues in 6.3.4:

Fixed Issue 1970527: ARP fails to resolve for VMs when Logical Distributed Router ARP table crosses 5K limit

This first problem was actually a regression in 6.3.3. In a previous release, the ARP table limit was increased to 20K, but in 6.3.3 the limit regressed back to the previous limit of 5K. To be honest, not many customers have deployments at a scale where this would be a problem, but a small number of very large deployments may see issues in 6.3.3.

Fixed Issue 1961105: Hardware VTEP connection goes down upon controller reboot. A BufferOverFlow exception is seen when certain hardware VTEP configurations are pushed from the NSX Manager to the NSX Controller. This overflow issue prevents the NSX Controller from getting a complete hardware gateway configuration. Fixed in 6.3.4.

This buffer overflow issue could potentially cause datapath issues. Thankfully, not very many NSX designs include the use of Hardware VTEPs, but if yours does and you are running 6.3.3, it would be a good idea to consider upgrading to 6.3.4.

The final issue – and the one most likely to impact customers – is listed third in the release notes:

Fixed Issue 1955855: Controller API could fail due to cleanup of API server reference files. Upon cleanup of required files, workflows such as traceflow and central CLI will fail. If external events disrupt the persistent TCP connections between NSX Manager and controller, NSX Manager will lose the ability to make API connections to controllers, and the UI will display the controllers as disconnected. There is no datapath impact. Fixed in 6.3.4.

I discussed this issue in more detail in a recent blog post. You can also find more information on this issue in VMware KB 2151719. In a nutshell, the communication channel between NSX Manager and the NSX Control cluster can become disrupted due to files being periodically purged by a cleanup maintenance script. Usually, you wouldn’t notice until the connection needed to be re-established after a network outage or an NSX manager reboot. Thankfully, as VMware mentions, there is no datapath impact and a simple workaround exists. Despite being more of an annoyance than a serious problem, the vast majority of NSX users running 6.3.3 are likely to hit this at one time or another.

My Opinion and Upgrade Recommendations

The third issue in the release notes described in VMware KB 2151719 is likely the most disruptive to the majority of NSX users. That said, I really don’t think it’s critical enough to have to drop everything and upgrade immediately. The workaround of restarting the controller API service is relatively simple and there should be no resulting datapath impact.

The other two issues described are not likely to be encountered in the vast majority of NSX deployments, but are potentially more serious. Unless you are really pushing the scale limits or are using Hardware VTEPs, there is likely little reason to be concerned.

I certainly think that VMware did the right thing to patch these identified problems as quickly as possible. For new greenfield deployments, I think there is no question that 6.3.4 is the way to go. For those already running 6.3.3, it’s certainly not a bad idea to upgrade, but you may want to consider holding out for 6.3.5, which should include a much larger number of fixes.

On a positive note, if you do decide to upgrade, there are likely some components that will not need to be upgraded. Because there are only a small number of fixes relating to the control plane and logical switching, ESGs, DLRs and Guest Introspection will likely not have any code changes. You’ll also benefit from not having to reboot ESXi hosts for VIB patches thanks to changes in the 6.3.x upgrade process. Once I have a chance to go through the upgrade in my lab, I’ll report back on this.

Running 6.3.3 today? Let me know what your plans are!

Re-deploying NSX Controllers During Upgrades

I’ve had this question come up a lot lately and there seems to be some confusion around whether or not NSX controllers need to be redeployed after upgrading them. The short answer to this question is really “it depends”. There are actually three different scenarios where you may want or need to delete and re-deploy NSX controllers as part of the upgrade process. Today, I’ll walk through these situations and the proper process to delete and re-deploy your controller nodes.

The Normal Upgrade Process

Upgrading the NSX control cluster is a very straightforward process. After clicking the upgrade link, an automated process begins that upgrades the controller code and reboots each cluster member sequentially.

controller-redeploy-3

Once the ‘Upgrade Available’ link is clicked, you’ll see each of the three controllers download the upgrade bundle, upgrade and then reboot before NSX moves on to the next one.

controller-redeploy-7

Once NSX goes through its paces, it’s usually a good idea to ensure that the control-cluster join status is ‘Join complete’ and that all three controllers agree on the Cluster UUID.

nsx-controller # show control-cluster status
Type                Status                                         Since
--------------------------------------------------------------------------------
Join status:        Join complete                                  07/24 13:38:32
Majority status:    Connected to cluster majority                  07/24 13:38:19
Restart status:     This controller can be safely restarted        07/24 13:38:48
Cluster ID:         f2849ee2-7fb6-4aca-abf4-2ca176337956
Node UUID:          f2849ee2-7fb6-4aca-abf4-2ca176337956

Role                Configured status   Active status
--------------------------------------------------------------------------------
api_provider        enabled             activated
persistence_server  enabled             activated
switch_manager      enabled             activated
logical_manager     enabled             activated
directory_server    enabled             activated

Because the underlying structure of the VM itself doesn’t change, this sort of in-place code upgrade and reboot is sufficient and has minimal impact.

Scenario 1 – E1000 vNIC Replacement

The first scenario where you may want to redeploy the controllers involves a virtual hardware change that was introduced in NSX 6.1.5. NSX controllers deployed in 6.1.5 and later use the VMXNET3 vNIC adapter, whereas older versions had legacy Intel E1000 emulated vNICs. This change wasn’t well publicized, and surprisingly it isn’t even found in the NSX 6.1.5 release notes.

I’ve seen quite a few customers go through upgrade cycles from 6.0 or 6.1 all the way to more recent 6.2.x or 6.3.x releases while retaining E1000 vNICs on their controllers. Although the E1000 vNIC adapter is generally pretty stable, there is at least one documented issue where the adapter driver suffers a hang and the controller is no longer able to transmit or receive. This problem is described in VMware KB 2150747.

That said, I personally would not wait for a problem to occur. I’d recommend checking to ensure your controllers are using VMXNET3 and, if not, going through the redeployment procedure I’ll outline later in this post. Aside from preventing the E1000 hang problem, you’ll also benefit from the other improvements VMXNET3 has to offer, like better offloading and lower CPU utilization.

Unfortunately, finding out if your controllers have E1000 or VMXNET3 adapters can be a tad tricky. You’ll find that your controllers are locked down and can’t be edited in the vSphere Web Client or the legacy vSphere Client.

controller-redeploy-1

As seen above, the ‘Edit Settings’ option is greyed out.

controller-redeploy-2

The summary page also doesn’t tell us much, so the easiest way to get the adapter type is to check from the ESXi command line.

First, let’s SSH into a host where one of the controllers live and then find the full path to the VMX file:

[root@esx0:~] cd /vmfs/volumes
[root@esx0:/vmfs/volumes] find ./ -name NSX_Controller*.vmx
./58f77a6f-30961726-ac7e-002655e1b06c/NSX_Controller_078fcf78-9a0c-491d-95a0-02e8b5175935/NSX_Controller_078fcf78-9a0c-491d-95a0-02e8b5175935.vmx

Next, I will look for the relevant vNIC adapter settings in the VMX file using the full path obtained in the previous command output:

[root@esx0:/vmfs/volumes] cat ./58f77a6f-30961726-ac7e-002655e1b06c/NSX_Controller_c97459f1-3845-436f-8e03-60ad3cbed9e4/NSX_Controller_c97459f1-3845-436f-8e03-60ad3cbed9e4.vmx |grep -i ethernet0.virtualDev
ethernet0.virtualDev = "vmxnet3"

The key setting in the VMX that we are interested in is ethernet0.virtualDev. As seen above, the type is vmxnet3 on my controllers as they were created from a freshly deployed 6.2.5 environment. If you see e1000 here, your controllers were deployed from a 6.1.4 or older setup and have never been re-deployed.
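If you have several controllers and don’t want to hunt down each VMX individually, a quick loop from an ESXi host with visibility to the controller datastores does the trick – a minimal sketch:

[root@esx0:~] cd /vmfs/volumes
# Report the ethernet0 adapter type for every controller VMX found
[root@esx0:/vmfs/volumes] for vmx in $(find ./ -name 'NSX_Controller*.vmx'); do echo "$vmx"; grep -i ethernet0.virtualDev "$vmx"; done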

Scenario 2 – Updating the Disk Partitioning Layout

The second scenario applies if your controllers were initially deployed in a version of NSX prior to 6.2.3. Since 6.2.3 was pulled shortly after release, 6.2.4 would be a more relevant starting point.

A statement you’ll find in the NSX 6.2.4 release notes summarizes this change well:

“…New installations of NSX 6.2.3 or later will deploy NSX Controller appliances with updated disk partitions to provide extra cluster resiliency. In previous releases, log overflow on the controller disk might impact controller stability. In addition to adding log management enhancements to prevent overflows, the NSX Controller appliance has separate disk partitions for data and logs to safeguard against these events. If you upgrade to NSX 6.2.3 or later, the NSX Controller appliances will retain their original disk layout.”

Again, it’s possible you may never run into a problem due to the old partitioning layout, but it’s always wise to take advantage of ‘optional’ resiliency enhancements like this. This is especially true for such a critical component of the NSX control-plane.

Although there isn’t a supported way to enter the root shell on a controller appliance, the ‘show status’ command will provide you with the partitioning layout. Here is the layout on a newer 6.2.5 controller with the newer partitioning:

nsx-controller # show status
Version: 4.0.6 (Build 48886)

Current Time: Fri, 25 Aug 2017 15:01:17 +0000
Uptime: 32 days 1 hour 23 minutes 16 seconds

Load Average: 0.10, 0.10, 0.13
Memory Usage: 3926484 kB (Total), 267752 kB (Free)
Disk Usage:
Filesystem                      1K-blocks    Used Available Use% Mounted on
/dev/sda1                         6596784 1181520   5057120  19% /
udev                              1953420       4   1953416   1% /dev
/dev/mapper/nvp-var               6593712 2349576   3886148  38% /var
/dev/mapper/nvp-var+cloudnet+data 3776568  147908   3417104   5% /var/cloudnet/data

Essentially, there are now three separate partitions instead of just one. Previously, files for everything were lumped together with the Linux OS in a single partition. If some runaway log files filled that partition, key services would be impacted. By separating everything out, key controller services – like the Zookeeper clustering service – will still be able to write to disk.

I don’t have access to a pre-6.2.3 setup at the moment, but you can tell if your controller still uses the old partitioning layout by the absence of two partitions in the ‘show status’ output. Both /dev/mapper/nvp-var and /dev/mapper/nvp-var+cloudnet+data exist only on controllers using the new partitioning layout.

Because disk partitioning is a pretty low-level change, there was really no way to incorporate this into the automated upgrade process. To get the new layout, you’ll need to delete and re-deploy the controller appliances.

Scenario 3 – Upgrading to NSX 6.3.3

NSX 6.3.3 introduces a major change to the NSX controllers, replacing the underlying Linux OS with VMware’s new distribution dubbed Photon OS. The virtual hardware also changes slightly in 6.3.3 as the Photon OS based controllers require larger VMDK disks. Because this changes the entire foundation of the VM and is mandatory – unlike the vNIC and partitioning changes mentioned earlier – there is no way to perform in-place code upgrades. Each of the controllers needs to be deleted and re-deployed.

Thankfully, because of the mandatory nature of this change, VMware modified the upgrade process in 6.3.3 to automatically delete and re-deploy controllers for you.

From the NSX 6.3.3 release notes:

“In NSX 6.3.3, the underlying operating system of the NSX Controller changes. This means that when you upgrade to NSX 6.3.3, instead of an in-place software upgrade, the existing controllers are deleted one at a time, and new Photon OS based controllers are deployed using the same IP addresses.”

This means that no manual intervention is required when upgrading to 6.3.3. Controllers will be deleted and re-deployed automatically as part of the upgrade process. For more information, see the NSX public docs on the subject.

Some Warnings and Cautions

Before I go through the process of destroying and re-creating controller nodes, I really want to preface by saying that this process is potentially risky and should only be done during a maintenance window. It’s also very important that the process be done correctly to ensure you don’t run into any major problems. Below are some common pitfalls and other recommendations:

  1. Never just delete or remove the controller appliances from the vCenter inventory. NSX keeps track of the controllers in its database and doesn’t react well to having appliances yanked out from under it. They must be deleted properly.
  2. Never deploy more than three controllers thinking you can just do a ‘cut over’. I.e. don’t deploy six controllers and then delete the three old ones. A one-to-one replacement must be done; we never want fewer than two functional controllers in the cluster, and never more than three.
  3. If a controller fails to delete using the normal supported method, there is a reason. Don’t force the deletion without speaking to VMware technical support first. A common reason I’ve seen for this is a mismatched moref identifier for the appliance VM. If the NSX database thinks a controller is vm-73, but the actual VM is vm-75, the delete will fail. Removing controllers from the vCenter inventory and re-adding them will cause this type of mismatch.
  4. It’s very important to validate that the control cluster health is good before proceeding to the next controller for deletion/re-deployment. Do not skip these checks, and be patient with this process. Unless you have two fully functional controllers up and running in the cluster, you won’t have full control-plane functionality and you risk a data-plane outage.
  5. If something goes wrong, you’ll still be okay as long as you have two controllers working in the cluster. Don’t just proceed in the interest of ‘moving forward’, because there is a good chance the other two will behave similarly. Contact VMware support if there is ever any doubt.

A Quick Note on Force Deletion

While trying to delete a controller, you’ll be greeted by a ‘Forcefully Delete’ option. When selected, this option nukes everything related to the controller from the NSX database, and NSX doesn’t care whether the VM appliance is successfully removed or not. This option should never be used unless advised by VMware support for repairing specific cluster problems. As mentioned in the previous section, if a regular delete fails, there is always a specific reason. Using ‘Forcefully Delete’ to work around these problems can leave remnants behind and potentially cause problems with the cluster.

The warning presented by the NSX UI when you try to Force Delete a cluster node:

“Forcefully deleting a controller may result into NSX Controller cluster going down and the rest of the controllers may get disconnected, thereby resulting in problems like no majority and data inconsistency. Many operations like adding logical components will not be possible. If you still choose to delete the controller, it is recommended to also delete the rest of the controllers and recreate them.”

It’s also worth mentioning that the only time you’d need to forcefully delete a controller in a normal workflow is when deleting the last of three controllers. NSX will only delete the very last controller using the force option. Because we’re only removing one at a time, this should not apply here.

Controller Re-deployment Process

Again, you won’t need to use this process if you are upgrading to NSX 6.3.3 or later because the deletion and re-creation of appliances is handled in an automated manner. If you’d like to take advantage of a VMXNET3 adapter and/or the new partitioning layout in newer versions of NSX, please read on.

The overall goal here is to replace the NSX control cluster members one at a time, keeping in mind that as long as two controller nodes are online and healthy, the control-plane continues to function. In theory, you shouldn’t suffer any kind of control-plane or data-plane outage using this process.

**Edit 11/15/2017: As you may be aware, there have been a few new bugs discovered, including one that impacts the deployment/re-deployment of NSX controllers in 6.3.3 and 6.3.4. Please be sure to have a look at my post on the subject as well as the VMware KB before proceeding. If you are running 6.3.3, do not delete your controllers until you’ve implemented the workaround or patched. If you still have the old 6.3.3 upgrade bundle, you may not be able to upgrade.

Step 1 – Data collection and preparation

Before proceeding, we’ll need to collect some information about our current controller deployment. In order to deploy a controller, the following information is required:

  1. The vSphere Cluster that your controllers will live in.
  2. The datastore you want to use for your controllers.
  3. The network portgroup (standard or distributed) that your controllers are in.
  4. If you used a specific naming convention for your controllers, be sure to note it down.
  5. And finally, the IP address pool that’s used for the controllers. Note that when deleting controllers using this method, an IP will be freed up from the pool so even with just three IPs in a pool, this process should work.

Be sure to get the above information from the vSphere Web Client before proceeding so that you don’t have to go looking for it during the process.
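Most of this can be gathered from the vSphere Web Client and the NSX UI, but if you prefer the API, the IPAM endpoint will list the configured IP pools, including the controller pool – a sketch, with placeholder hostname and credentials:

# List configured IP pools; confirm the controller pool has a free address
curl -k -u 'admin:VMware1!' \
  https://nsxmanager.lab.local/api/2.0/services/ipam/pools/scope/globalroot-0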

Step 2 – Validate the control-cluster health

Before you begin the process, it’s very important to ensure you have a functional control cluster with all nodes connected to the cluster majority. As tempting as it may be, do not skip this check.

controller-redeploy-4

Checking in the UI is a good first place to look for obvious signs of trouble, but I would not rely on this method alone. If everything is green in the UI, log into each of the three controllers via SSH and run the show control-cluster status command:

nsx-controller # show control-cluster status
Type                Status                                         Since
--------------------------------------------------------------------------------
Join status:        Join complete                                  08/25 15:26:19
Majority status:    Connected to cluster majority                  08/25 15:30:45
Restart status:     This controller can be safely restarted        08/25 15:31:09
Cluster ID:         f2849ee2-7fb6-4aca-abf4-2ca176337956
Node UUID:          309611b3-2441-4a1a-a12b-a5537e999c23

Role                Configured status   Active status
--------------------------------------------------------------------------------
api_provider        enabled             activated
persistence_server  enabled             activated
switch_manager      enabled             activated
logical_manager     enabled             activated
directory_server    enabled             activated

There are several key things you’ll want to validate before proceeding.

  1. The Join status must read ‘Join complete’. No other status is acceptable.
  2. The Majority status must read ‘Connected to cluster majority’.
  3. The Restart status must read ‘This controller can be safely restarted’.
  4. Each controller node must have the same ‘Cluster ID’.

If all three controllers look good, you can proceed.
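To make these repeated checks less tedious, the same command can be run against all three nodes in one pass from a jump host – a sketch using my lab’s controller IPs. Depending on the build, the controller CLI may insist on an interactive login, so treat this as a convenience rather than a guarantee:

# Check cluster status on each controller (IPs are from my lab; use yours)
for ip in 172.16.10.43 172.16.10.44 172.16.10.45; do
  echo "=== $ip ==="
  ssh admin@$ip show control-cluster status
done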

Step 3 – Delete the first controller

Once we’ve confirmed the control cluster health is good, we can delete the first controller from the NSX UI. It doesn’t matter which one you do first, but in my example, I’ll start with controller-3 and work my way backwards.

To delete, simply select the ‘Management’ tab of the Installation section in the NSX UI and click the little red ‘X’ icon above.

controller-redeploy-9

As mentioned earlier, we want to use the normal ‘Delete’ option. Do NOT use ‘Forcefully Delete’.

controller-redeploy-10

NSX will execute several tasks related to the controller VM. First, it will power off the VM appliance; it will then delete it and remove all references to the controller in the database. It’s not unusual for this process to take 10 minutes or longer.

Once the controller has disappeared from the NSX ‘Management’ tab, it’s very important to check that the appliance itself was actually deleted from the vCenter inventory.

controller-redeploy-11

Check for both the successful power off and deletion tasks in the recent tasks pane and also confirm the VM is no longer present in the inventory.

Finally, we’ll want to check the cluster health from the other two surviving nodes using the same show control-cluster status command we used earlier. Ensure that both controllers look healthy.

I’d also recommend confirming that the cluster is now comprised of only two nodes from the controllers’ own perspective. Just because NSX Manager says there are two doesn’t necessarily guarantee the controllers agree. You can check this using the show control-cluster startup-nodes command:

nsx-controller # show control-cluster startup-nodes
172.16.10.43, 172.16.10.44

As seen above, my control cluster confirms only two members.
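As an extra sanity check, NSX Manager’s API can confirm its own view of the controller inventory – a sketch with placeholder hostname and credentials:

# Lists all controllers known to NSX Manager along with their status;
# after the deletion, only two should be returned
curl -k -u 'admin:VMware1!' \
  https://nsxmanager.lab.local/api/2.0/vdn/controller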

Step 4 – Replace the Deleted Controller.

Once the first controller has been deleted successfully and we’ve confirmed the health of the control cluster, we can go ahead and deploy a new one.

controller-redeploy-12

The process should be very straight forward and is the same as what was done during the initial deployment of NSX. Keep in mind that the name you specify is simply a label and that the moref identifier of the new controller will change.

controller-redeploy-13

NSX will report the new controller in the ‘Deploying’ status for some time, but you can monitor the tasks and events from the vSphere Web Client:

controller-redeploy-14

You can also watch the console of the new controller to confirm that it’s finished joining the cluster and is ready for logins. It will usually sit at ‘Fetching initial configuration data’ for some time before it’s ready:

controller-redeploy-15

Once it’s powered up and ready, you can log in via the CLI and ensure that the ‘show control-cluster status’ output looks healthy as described earlier, and that there are three startup-nodes again:

nsx-controller # show control-cluster status
Type                Status                                         Since
--------------------------------------------------------------------------------
Join status:        Join complete                                  08/25 17:47:16
Majority status:    Connected to cluster majority                  08/25 17:47:13
Restart status:     This controller can be safely restarted        08/25 17:47:14
Cluster ID:         f2849ee2-7fb6-4aca-abf4-2ca176337956
Node UUID:          f9a2d207-bf57-4f23-b075-1eefc58bfc8d

Role                Configured status   Active status
--------------------------------------------------------------------------------
api_provider        enabled             activated
persistence_server  enabled             activated
switch_manager      enabled             activated
logical_manager     enabled             activated
directory_server    enabled             activated

nsx-controller # show control-cluster startup-nodes
172.16.10.43, 172.16.10.44, 172.16.10.45

As seen above, my new controller is online and healthy. Most importantly, it agrees with the other two controllers on the cluster ID and the number of startup nodes.

You could also do a ‘show status’ on the controller to confirm that it has the new partitioning layout at this time as discussed earlier.

Step 5 – Rinse and Repeat.

It’s extremely important to verify the cluster health before proceeding with the deletion of the next cluster node. Aside from the checks in the previous section, this would also be a good time to do some basic connectivity tests. Make sure your distributed routers are functional and that your guests connected to logical switches are working normally.

If you delete the next controller while the cluster is in a bad state, there is a good chance you’ll be down to a single node and will be operating in a ‘read-only’ state. In this condition, any VTEP, ARP or MAC table changes in the environment – like those triggered by vMotions, etc – would fail to propagate. This is definitely not a situation you’d want to be in.
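For the logical switching side of these checks, the central CLI on NSX Manager is handy for confirming that table entries are still being learned and propagated. A sketch – the VNI of 5000 is a placeholder for one of your own logical switches:

# From the NSX Manager central CLI: verify that MAC, ARP and VTEP tables
# are being populated for a logical switch (VNI 5000 is a placeholder)
show logical-switch controller master vni 5000 mac
show logical-switch controller master vni 5000 arp
show logical-switch controller master vni 5000 vtep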

Once you are sure it’s safe to proceed, simply repeat steps 3 and 4 above for the remaining two controllers.

Conclusion

So there you have it. The process can be a bit of a nail-biting experience in a production environment, but if you take the appropriate precautions everything should work without a hitch. The reward for your patience will be a more resilient control cluster with virtual hardware configured as VMware intended.

Thanks for reading! If you have any questions, please feel free to post below.