During an upgrade of NSX-T I came across an issue in the UI.
When I clicked the upgrade button, the screen was blank and showed no data. Sometimes, after a wait of half an hour or more, the screen came through and I could proceed with the upgrade.

This is of course not the way it should be, so I wanted to get rid of the issue.

Check Manager Cluster Nodes

First I wanted to check that all the cluster nodes were stable and the services were running OK, so I ran the following command on all 3 cluster nodes:

get cluster status

All nodes seemed to be running fine and didn't show any anomalies. Next I checked whether an old upgrade plan was stuck, or something like that:

get node upgrade status
% Post reboot node upgrade is not in progress

But no luck with that either. Next I tested whether starting the upgrade from another manager node would help.

For that to be possible I needed to execute the following command on the manager node we want to become the orchestrator node:

set repository-ip

But after testing all nodes, no luck at all: the UI still gave me a blank screen on the Upgrade page.

Time to get support (Cause):

We raised an SR at VMware and within a few hours we got feedback.
The issue was probably caused by an inconsistent Corfu DB, possibly triggered by an action we did in the past: the re-deployment of a Manager node after a failure.

You can identify a possibly inconsistent Corfu DB by a high epoch number that keeps increasing in /var/log/corfu/corfu-compactor-audit.log:

2022-05-27T10:53:35.446Z INFO main CheckpointWriter - appendCheckpoint: completed checkpoint for fc2ada82-3ef8-335a-9fdb-c35991d3960c, entries(0), cpSize(1) bytes at snapshot Token(epoch=2888, sequence=1738972197) in 65 ms

2022-05-27T11:05:21.346Z INFO main CheckpointWriter - appendCheckpoint: completed checkpoint for fc2ada82-3ef8-335a-9fdb-c35991d3960c, entries(0), cpSize(1) bytes at snapshot Token(epoch=2921, sequence=173893455) in 34 ms
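As a quick check, the epoch values can be pulled straight out of the audit log. A minimal sketch; the sample lines below mimic the entries shown above so the snippet runs anywhere, and on a real manager node you would point LOG at the actual log file instead:

```shell
# On a manager node you would set LOG=/var/log/corfu/corfu-compactor-audit.log;
# here we write two sample lines (matching the ones above) so the example is self-contained.
LOG=./corfu-compactor-audit.log
cat > "$LOG" <<'EOF'
2022-05-27T10:53:35.446Z INFO main CheckpointWriter - appendCheckpoint: Token(epoch=2888, sequence=1738972197) in 65 ms
2022-05-27T11:05:21.346Z INFO main CheckpointWriter - appendCheckpoint: Token(epoch=2921, sequence=173893455) in 34 ms
EOF
# Print one epoch per line; a series that keeps climbing quickly is the symptom
grep -o 'epoch=[0-9]*' "$LOG" | cut -d= -f2
```

For the sample lines this prints 2888 and then 2921, an increase of 33 within about twelve minutes.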


The solution: redeploy the manager nodes one-by-one.

So here we go:

First we need to retrieve the UUID of the node we want to detach from the cluster.

get cluster status

Next, from another cluster node, run the command to detach the failed node from the cluster:

detach node failed_node_uuid

The detach process might take some time. When it finishes, get the status of the cluster and check that indeed only 2 nodes are present in the cluster.

get cluster status
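Instead of logging in to each node, the same check can be scripted against the NSX REST API. A hedged sketch: it assumes GET /api/v1/cluster/status reports the overall state under mgmt_cluster_status.status, the manager address and credentials are placeholders, and a saved sample response is used here so the snippet runs anywhere:

```shell
# Sample (abbreviated) response body; a live call would look something like:
#   curl -sk -u admin:'<password>' https://<manager-ip>/api/v1/cluster/status > status.json
cat > status.json <<'EOF'
{ "mgmt_cluster_status": { "status": "STABLE" } }
EOF
# Pull out the overall management-cluster state; anything other than STABLE
# means you should wait before detaching or adding the next node.
sed -n 's/.*"status": *"\([A-Z_]*\)".*/\1/p' status.json
```

For the sample response this prints STABLE.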

The Manager node is now detached, but the VM is still present in the vSphere inventory. Power it down and delete the VM. You can of course keep it, but we are going to deploy a new node with the exact same parameters, FQDN and IP, so in that case it is best to disconnect its network interfaces.

Now we can deploy a new Manager node. We can do this in 2 ways.

1. From the UI

We can use this method if a compute manager is configured on which the Manager node can be deployed.

Navigate to System > Configuration > Appliances and click Add NSX Appliance.

Fill in the hostname, IP/DNS and NTP settings and choose the deployment size of the appliance.
In our case this is Large. Click Next.

Next fill in the configuration details for the new appliance and hit Next.

Followed by the credentials and the enablement of SSH and root access; after that, hit Install Appliance.

Now be patient while the appliance is deployed in the environment.

When the new appliance has deployed successfully, wait till all services become stable and all lights are green. Check the cluster status on the CLI of the managers with:

get cluster status

If all services are stable and running on every node, you can detach the next node in line and start over, until all appliances are redeployed.

2. Deploy with OVA

When you can't deploy the new appliance from the UI, you can build it with the OVA file. Download the OVA file from the VMware website and start Deploy OVF Template in vCenter.

Select the compute resource:

Review the details and go on to the configuration part:

Select the appropriate deployment size:

Select the Storage where the appliance needs to land:

Next select the management network:

And customize the template by filling in the passwords for the accounts, IP details, etc.

Hit Next and review the configuration before you deploy the appliance!

When the OVA has deployed successfully, power on the VM and wait till it has booted completely; an extra reboot can be part of this.

Log in to a cluster node of the NSX Manager cluster and run the following command to get the cluster thumbprint. Save this thumbprint; we need it later on.

get certificate api thumbprint
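If you prefer to double-check the thumbprint from outside the NSX CLI, you can compute a SHA-256 fingerprint of a certificate with openssl. A sketch: it generates a throwaway self-signed certificate so it is self-contained; against a real manager you would use its API certificate instead (e.g. fetched with openssl s_client), and the colon-less lowercase format is an assumption about what the join command expects:

```shell
# Generate a throwaway certificate so the example runs anywhere; in practice
# you would point the second command at the manager's API certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=nsx-demo" -days 1 2>/dev/null
# SHA-256 fingerprint, stripped of colons and lowercased (64 hex characters)
openssl x509 -in /tmp/demo.crt -noout -fingerprint -sha256 | cut -d= -f2 | tr -d ':' | tr 'A-F' 'a-f'
```

Comparing this value with the output of `get certificate api thumbprint` is a quick sanity check before running the join.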

And run the get cluster config command to get the cluster ID:

get cluster config

Now open an SSH session to the new node and run the join command to join the new node to the existing cluster.

join <Manager-IP> cluster-id <cluster-id> username <Manager-username> password <Manager-password> thumbprint <Manager-thumbprint>
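Since the join line is long and easy to mistype, it can help to template it first and review the result before pasting it into the new node's CLI. A trivial sketch; every value below is a placeholder, not data from a real cluster:

```shell
# Placeholder values; substitute the ones collected in the previous steps.
MGR_IP="10.0.0.11"
CLUSTER_ID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
NSX_USER="admin"
THUMBPRINT="<thumbprint-from-get-certificate-api-thumbprint>"
# Print the assembled join command for review
printf 'join %s cluster-id %s username %s password <password> thumbprint %s\n' \
  "$MGR_IP" "$CLUSTER_ID" "$NSX_USER" "$THUMBPRINT"
```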

When the join operation is successful, wait for all services to restart.

You can check the cluster status in the UI: select System > Appliances and check if all services are up.

Check the cluster config on the manager nodes by running:

get cluster config


When you have an inconsistent Corfu DB, in some cases redeploying all manager nodes can be the solution. Be aware that you should only detach 1 node at a time, redeploy its replacement, and then move on to the next: always keep 2 or more nodes in the cluster to keep it healthy.
