vSAN HCL DB Out of Date – Offline Update

During the upgrade pre-check test of the Workload domain I got the following error: vSAN HCL DB Out of Date.

Because my SDDC Manager has no direct internet connection I needed to get the file onto my jump host. This can be done by browsing to https://partnerweb.vmware.com/service/vsan/all.json, copying the entire content, and creating a new file with the extension ".json".

This can also be done via the bundle-transfer util, and you should also be able to transfer the file to the SDDC Manager with this util. How to get and install the tool can be found at https://www.aaronrombaut.com/real-world-use-of-vmware-bundle-transfer-utility/

If you have the bundle-transfer util installed, you can get the vSAN HCL file with the following command:

lcm-bundle-transfer-util.bat --vsanHclDownload --outputDirectory F:\

After I got the all.json file onto my jump host, I needed to upload it to the SDDC Manager:

lcm-bundle-transfer-util.bat --vsanHclUpload --inputDirectory "F:\vsan\hcl\all.json" --sddcMgrFqdn sddc-l-01a.corp.local --sddcMgrUser vcf

But as you can see below, this command gave me an error:

It fails because it cannot generate a token, so let's see if we can upload the file via the API. Let's use Postman for this.


Get a Bearer token for authentication with the SDDC Manager:

  1. Select POST as the action
  2. Fill in the API URL: https://sddc-l-01a.corp.local/v1/tokens
  3. Under the Authorization tab select No Auth
  4. Under Headers create a new key Content-Type with value application/json
  5. Under Body select raw and JSON; in the field, put the SSO admin user and password for the SDDC Manager

{
"username" : "administrator@vsphere.local",
"password" : "XXXXXXX"
}

  6. Now hit the Send button and wait for the output:
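
If you prefer the command line over Postman, the same token request can be made with curl from the jump host. A minimal sketch, reusing the lab FQDN and credentials from above:

curl -k -X POST -H "Content-Type: application/json" -d '{"username":"administrator@vsphere.local","password":"XXXXXXX"}' https://sddc-l-01a.corp.local/v1/tokens

The accessToken field in the response is the Bearer token we need in the next step.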

The Bearer token is needed in the next call to update the HCL DB on the SDDC Manager:

Upload all.json to SDDC Manager via Postman:

  1. Select PUT as the action
  2. Use the following API URL: https://<sddc-fqdn>/v1/vsan-hcl/content
  3. Under Authorization select Bearer Token and paste the token from the previous step in the field
  4. Under Headers add a new key Content-Type with value text/plain
  5. Under Body choose raw and JSON
  6. In the field, paste the content of the all.json file
  7. Hit Send.

There is no output, only a Status 200 OK.
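
The same upload can also be done with curl from the jump host. A minimal sketch, assuming curl is available, all.json is in the current directory, and <accessToken> is the token from the previous call:

curl -k -X PUT -H "Authorization: Bearer <accessToken>" -H "Content-Type: text/plain" --data-binary @all.json https://sddc-l-01a.corp.local/v1/vsan-hcl/content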

Let's check if this particular pre-check error is now solved.

VICTORY!!

Quick Fix – Found zero ssd devices for SSD cache tier

When I added a host to an existing vSAN cluster via the SDDC Manager, the task failed with the following error: “Found zero ssd devices for SSD cache tier”

To quickly fix this we need to mark the cache disk on the ESXi host as SSD. You can check the current value with the vdq -q command. As you can see in the picture below, the disk I want to use for the cache is marked with a value of “0”, so it is not recognized as an SSD drive.

In the past you had to mark the disk as SSD with SATP claim rules, but in 7.x and 8.x there is a new and simpler command to do this. Run the following ESXCLI command, using the storage device ID and the -M option with a value of true (or false to revert the change), to mark the device as an SSD.

esxcli storage hpp device set -d naa.6000c299027de72c68de829e23455e88 -M true
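
After marking the device you can verify the change. A quick check, reusing the same device ID; the IsSSD field should now report 1:

vdq -q -d naa.6000c299027de72c68de829e23455e88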

Aria Ops for Networks deployment fails from Aria Suite LifeCycle Manager.

In my lab I tried to deploy Aria Operations for Networks 6.12.1 (AON/vRNI) from Aria Suite LifeCycle 8.16 (ASLC/vRLCM). Before the deployment of AON I successfully deployed other products:

vIDM 3.3.7
Aria Operations for Logs 8.16 (vRLI)
Aria Operations 8.17.1 (vRops)

I attempted to deploy Aria Ops for Networks 6.12.1, but it failed with the LCMVSPHERECONFIG1000016 error.

-----------------------------------------------------------------------------------------------------------
java.io.IOException: com.vmware.vim.binding.vmodl.fault.SystemError
	at com.vmware.vrealize.lcm.drivers.vsphere65.vlsi.utils.ExceptionMappingUtils.mapAndThrowImportVAppExceptions(ExceptionMappingUtils.java:78)
	at com.vmware.vrealize.lcm.drivers.vsphere65.deploy.impl.BaseOvfDeploy.importOvf(BaseOvfDeploy.java:713)
	at com.vmware.vrealize.lcm.plugin.core.vsphere.tasks.DeployOvfTask.execute(DeployOvfTask.java:251)
	at com.vmware.vrealize.lcm.automata.core.TaskThread.run(TaskThread.java:62)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.base/java.lang.Thread.run(Unknown Source)
-----------------------------------------------------------------------------------------------------------

Error Code: LCMVSPHERECONFIG1000016
IO Exception occurred while performing the operation. Check the logs for more information.
Unexpected ioexception occurred.

-----------------------------------------------------------------------------------------------------------

After some digging around, I ran into the known issues in the release notes for version 8.14.

So let's check which account is being used between ASLC and vCenter:

Now log in to vCenter and check whether the privileges assigned to this account match the two from the release notes:

As you can see above, the privileges are not configured on this role, so let's configure the two privileges:

Now we need to resync the account between vCenter and ASLC.

After this I hit the Retry button on the request in ASLC, and the OVF was deployed in vCenter.

VCF 5.0 – Remove vSAN datastore after VCF deployment failed

After a failed deployment of VCF 5.0, I was left with a vSAN datastore on the first host in the cluster, and this was blocking a retry of the deployment.

In this state the vsanDatastore cannot be deleted; if I try to delete it, the option is greyed out.

To delete the datastore and the partitions of the disks, we first need to SSH into the host and get the vSAN cluster information.
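
The cluster information, including the Sub-Cluster Master UUID we need in the next step, can be retrieved with:

esxcli vsan cluster get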

We need the Sub-Cluster Master UUID; copy it to the clipboard. To leave the cluster, the command is:

esxcli vsan cluster leave -u <Sub-Cluster Master UUID>

The ESXi UI now shows that the datastore is gone.

Let's check on the CLI whether there is still a vSAN cluster or storage configured:

esxcli vsan cluster list
esxcli vsan storage list

No vSAN cluster or storage is configured on the host. After this the retry was successful and the VCF management cluster was deployed.

VCF 5.0 – Error connecting to ESXi Host: SSL certificate common name doesn’t match ESXi FQDN.

During a new lab deployment of VCF 5.0 I ran into a small issue running the validation.

I deployed the hosts up front and made them available and unique before the validation, and ran the following commands to regenerate the certificates and restart the services:

/sbin/generate-certificates
/etc/init.d/hostd restart && /etc/init.d/vpxa restart

Next I wanted to see which Common Name (CN) was on the certificate:
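
One quick way to check the CN from any machine with openssl installed (the FQDN below is just an example host name for my lab):

echo | openssl s_client -connect esx01.lab.local:443 2>/dev/null | openssl x509 -noout -subject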

As you can see, the Common Name (CN) contains only the short hostname.

Next I changed the hostname on the ESXi hosts to contain the complete FQDN.

I checked the hostname on the CLI and regenerated the certificates again:
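
A minimal sketch of those steps, assuming an example FQDN of esx01.lab.local:

# set the hostname to the full FQDN, verify it, then regenerate the certificates
esxcli system hostname set --fqdn=esx01.lab.local
esxcli system hostname get
/sbin/generate-certificates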

After regenerating the certificate on the host, you have to restart the services:

/etc/init.d/hostd restart && /etc/init.d/vpxa restart

I checked the certificate in the browser: the Common Name now contained the FQDN, and the validation finished successfully.


VCF 5.0 – Retry Bring up fails on Generate input for SSH Keys

After a failed VCF bring-up, I wanted to retry the bring-up. Luckily the error I encountered before was resolved, but I ran into another issue during the retry.

Now the issue was with the import of the SSH keys.

Going through some internal resources I stumbled upon the solution: since this is a nested lab environment on top of VCD, you have to reset the MAC address of the ESXi host.

VCF 5.0 LAB Deployment Issue – Failed to migrate vmnics of ESXi host to Distributed vSwitch

During my latest deployment of VCF in my lab environment I ran into the following issue.

Failed to migrate vmnics of host 192.168.11.12 to DVS sfo-m01-cl01-vds01. Reason: Failed to migrate vmknic vmk0 to DvSwitch 50 22 42 8c d5 a1 d4 8f-6d 9e 8a 1e 93 ac 5b 9d

The error is pretty clear: the migration of vmk0 from the standard vSwitch to the Distributed vSwitch failed on esx02. I checked esx01, and on that host the migration was successful.

Manually migrating vmk0 to the Distributed vSwitch in vCenter also ran into an error.
Right-click dvSwitch -> Add and Manage Hosts -> Manage Host Networking -> Select esx02

Click Next and leave the physical adapters as they are, then click Next again.
On the next screen click on “Assign Port Group” next to vmk0.

Click on ASSIGN next to the management portgroup

Next, Next, Finish… the task runs and fails after a few seconds.

Checking the Task Details on the ESX host:

After some investigation and searching internal VMware resources, I also ran into this blog article: https://mhvmw.wordpress.com/2023/03/17/issue-with-nested-vcf-4-5-deployment-lab-only/

It is a MAC address conflict: ESXi takes the MAC address of the physical NIC for vmk0.
By deleting and recreating the vmk0 interface you generate a new MAC address for vmk0.

Steps to check, delete, and recreate the vmk0 interface:

Log in via the DCUI

Enable ESXi Shell

Next, press ALT+F1 to access the ESXi console and log in as root.

Type the command:
esxcli network ip interface list

Make a note of the port group, in this case “Management Network”, and then remove vmk0 with the following command:
esxcli network ip interface remove --interface-name=vmk0

When vmk0 is deleted, we can immediately create a new interface with the same name and port group. This is done with the following command:
esxcli network ip interface add --interface-name=vmk0 -p "Management Network"

To check whether vmk0 has been created again, type the command:
esxcli network ip interface list

Press ALT+F2 to go back to the ESXi DCUI and log in to disable the ESXi Shell.
Now we can configure the IP settings again via the DCUI.

Go to Configure Management Settings -> IPv4 Configuration and set the static IPv4 configuration

Hit Enter, then Esc, and Yes to restart the management network.
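
As an alternative to the DCUI, the static IPv4 configuration can also be set from the ESXi shell. A sketch, with assumed lab values for the IP, netmask, and gateway:

# re-apply the static IPv4 configuration on the recreated vmk0
esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.11.12 -N 255.255.255.0
# restore the default gateway
esxcli network ip route ipv4 add -n default -g 192.168.11.1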

Now we can retry the deployment via Cloud Builder; after this the deployment finished successfully.

Compute Manager not visible in NSX after Upgrade of the 3 Manager Appliances

After a successful upgrade of NSX, after the last step (the upgrade of the management plane) the compute manager disappeared. Let's see how we can fix that!

When I try to add the vCenter it says it is already registered, so let's check with the API.

First do an API GET in Postman to get the compute manager ID:

Output:
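
For reference, the same GET can be done with curl against the NSX-T compute manager endpoint (a sketch, assuming basic auth with the admin account):

curl -k -u admin https://<nsx-manager>/api/v1/fabric/compute-managers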

Now that we have the compute manager ID, we can check whether it is registered and up:

Output:
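
The registration and connection status can be checked the same way with curl, using the compute manager ID from the previous call:

curl -k -u admin https://<nsx-manager>/api/v1/fabric/compute-managers/<compute-manager-id>/status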


As you can see, the compute manager is registered and up, so why is it not showing up in the UI?

Solution:

Log in with the admin user via SSH and run the following command:

start search resync inventory

Wait a few seconds and refresh the UI; the Compute Manager is back!


Replace Expiring NSX-T Local Manager and Global Manager Self-Signed Certificates.

Recently we saw some warnings about expiring certificates in the NSX-T Global Manager and Local Manager.

When we clicked one of the alerts we got a small description and some API calls we could fire to apply new certificates.

In the Certificates overview (System > Certificates > Certificates), we could see that the certificates issued to the Local Manager and Global Manager were expiring. The certificate IDs also corresponded to the ones in the alert (not the ones in my screenshots).

The API calls mentioned in the alert description are for renewing the certificate of the HTTP service (UI), not the Local/Global Manager certificates.
The VMware docs don't explain in much detail how to change these certificates; I couldn't find it.

Try it yourself:
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-50C36862-A29D-48FA-8CE7-697E64E10E37.html

The only giveaway I could find was in step 6 (NSX Federation and the service type).

So before we can replace the certificates, we need to create new self-signed certificates for the Local Manager and the Global Manager.

Create CSR on GM/LM:

To create a CSR (Certificate Signing Request) on the Global or Local Manager, go to System > Certificates > CSRs and click “Generate CSR”.

For the Global Manager do this via the Global Manager appliance and for the Local Manager use the Local Manager appliance, or use the drop-down at the top of the screen to choose between your Global Manager and Local Managers.

Fill in all the fields and hit the GENERATE button; the example below is for the Global Manager. For the Local Manager, just change the word global to local:

Now we can see a new CSR in the list. The next step is to self-sign the Global Manager CSR: select the CSR and under Actions choose “Self Sign Certificate for CSR”.

Choose your number of days:

Now we have a new self-signed certificate for the Global Manager in the certificates list. With this certificate we can replace the Principal Identity certificate of the Global Manager.

For the Local Manager certificates, follow the steps mentioned above on the Local Manager appliance.

Apply Self-Signed Certificate on the Global Manager

Before we can apply the self-signed certificate to the Global Manager, we need its certificate ID: click on the ID, then select the whole ID in the pop-up and copy it for later use:

Now we can fire up Postman to apply the certificate via the API:

  1. Change the ACTION drop-down to POST
  2. Paste the following URL to your Global Manager:

(Screenshot: steps 1 to 4)

  5. Change to Body, select raw, and from the drop-down choose JSON
  6. Put the following JSON in the Body:

{
"cert_id": "<new cert id>",
"service_type": "GLOBAL_MANAGER"
}

And hit the SEND button; if you get back Status 200 OK, the API call was successful:
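
If you prefer curl over Postman, the call looks roughly like this. Note that the URL is my assumption based on the NSX Federation step in the VMware doc linked above (the set_pi_certificate_for_federation action); verify it against the documentation for your version:

curl -k -u admin -X POST -H "Content-Type: application/json" -d '{"cert_id":"<new cert id>","service_type":"GLOBAL_MANAGER"}' "https://<global-manager>/api/v1/trust-management/certificates?action=set_pi_certificate_for_federation"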

Apply Self-Signed Certificate on the Local Manager

This procedure is almost the same as on the Global Manager. Go to the Local Manager appliance and copy the new certificate ID.

Go to Postman:

  1. Change the ACTION drop-down to POST
  2. Paste the following URL to your Local Manager:

Check the old expiring certificate

Now, since we are on version 3.1, we need to check via the API whether the old certificate has been released, so we know it isn't used anymore before we delete it.
Note: in newer versions you have a “Where Used” field in the certificate overview.

Go to Postman:

  1. Change the action to GET
  2. Paste the following URL to your Global Manager (use the ID of the expiring certificate): https://<global-manager>/api/v1/trust-management/certificates/dee8d78b-5e04-4deb-8d36-6b86f79f058b
  3. Set Authorization the same as for the previous API calls
  4. Select Body and set it to none
  5. Hit Send
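
The same check from the command line, a small sketch using basic auth with the admin account:

curl -k -u admin "https://<global-manager>/api/v1/trust-management/certificates/<certificate-id>"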

If the certificate is used by a node, look at the used_by part: when there is a node_id, the certificate is still in use and can't be deleted. If it is empty, you can delete the certificate in the UI. You can do the same check on the new certificate to see if it is now used by the same node.

          "used_by": [
              {
                  "node_id": "515f0642-b1eb-ef39-9acc-82f8760ab0b9",
                  "service_types": [
                      "GLOBAL_MANAGER"
                  ]
              }
          ],

Sometimes the certificate won't release itself, so let's release the damn thing:

Release a Certificate

Please keep in mind that you should only release the certificate from the node_id if you are absolutely sure; if not, please raise a ticket with VMware Support.

  1. Log in to the manager with the admin user via SSH
  2. Then type st e and enter the root password; you are now at the root shell
  3. Use the certificate ID and the node_id from the previous step
  4. Now use the following API call to release the certificate from the node_id (fill in the service type, node_id, and certificate ID):

curl -k -X POST -H "Content-Type: application/json" -H 'X-NSX-Username:admin' -H 'X-NSX-Groups:superuser' -d '{"service_type":"<service_type>","node_id":"<node_id>"}' "http://localhost:7440/nsxapi/api/v1/trust-management/certificates/<certificate-id>?action=release"

For example, for the Global Manager certificate:

curl -k -X POST -H "Content-Type: application/json" -H 'X-NSX-Username:admin' -H 'X-NSX-Groups:superuser' -d '{"service_type":"GLOBAL_MANAGER","node_id":"647defab-13c5-5g62-93e4-da4a345d666"}' "http://localhost:7440/nsxapi/api/v1/trust-management/certificates/ea5771fb-161e-49dd-97d7-c1483d2790666691?action=release"

  5. This should do it; you can check the certificate again with the previous step and look at the used_by field, which should now be empty.

If the used_by is empty, you can now safely remove the certificate.

All the steps are almost identical for the Global and Local Manager; just change the service type from GLOBAL_MANAGER to LOCAL_MANAGER, etc.

And the alerts are now resolved!

Design Consideration – Bare Metal Edge with 4 pNICs in an NSX-T Federated Environment

During a failover test with the Bare Metal Edges we ran into an issue when pulling the plug on one of the TOR switches (TOR-LEFT). During that test all BGP sessions on both Bare Metal Edges went down, so no North-South routing anymore 🙁

Edge Setup:

The Bare Metal Edges were configured following the design guidelines from VMware:
https://nsx.techzone.vmware.com/resource/nsx-t-reference-design-guide-3-0#_Edge_Node_and_1

Identifying the problem:

So why this behaviour? And what happens when we pull the plug on the other TOR switch (TOR-RIGHT)?
After performing the test with TOR-RIGHT, the BGP sessions connected to TOR-LEFT stayed established. So it has something to do with switch TOR-LEFT?

After checking the configuration on the TOR-LEFT switch we didn't identify anything that could cause this issue. But what could it be? The Edges were configured per VMware guidelines and were identical configuration-wise.

So going through the logs was the next step in the process, and I stumbled upon this part in the log file:

2022-10-17T10:37:08.578Z Update device fp-eth0 state to DOWN
2022-10-17T10:37:08.578Z Self Node 00363d34-fcdd-11ea-8e07-e4434ba66042 status changed from Up to Down (RTEP device down)

Could it have something to do with the federated setup (RTEP)? Is the RTEP only connecting over fp-eth0?

Cause:

Again I went through the setup, but now I also checked the fp-eth0 connections to the switches. On both Bare Metal Edges, fp-eth0 was connected to TOR-LEFT. So when we pulled the plug on that switch, it triggered the RTEP going down, which led to all BGP sessions going down.

This is expected behavior according to VMware!

Solution:

The solution to this issue was pretty simple after we identified the cause. We switched the connections on the second Bare Metal Edge, so the pNICs connected to TOR-LEFT are now on TOR-RIGHT and vice versa, the opposite of the first Bare Metal Edge.