After a failed deployment of VCF 5.0, I was left with a vSAN datastore on the first host in the cluster, and this was blocking a retry of the deployment.
In this state the vsanDatastore cannot be deleted; if I try to delete it, the option is greyed out.
To delete the datastore and the partitions on the disks, we first need to SSH into the host and get the vSAN cluster information.
We need the Sub-Cluster Master UUID; copy it to the clipboard. To leave the cluster, the command is:
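esxcli vsan cluster leave

The Sub-Cluster Master UUID itself comes from esxcli vsan cluster get, which prints the host's vSAN cluster information. Once the host has left the cluster, the leftover vSAN partitions can be cleared from the disks, for example via the host client or with partedUtil; the exact disk device names will differ per host.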
During a new lab deployment of VCF 5.0 I ran into a small issue running the validation.
I deployed the hosts up front and made them available and unique before the validation. The fix was to regenerate the certificates and restart the services on each host.
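The usual sequence for this on an ESXi host, run from an SSH session, is the following (a sketch of my notes: /sbin/generate-certificates creates a fresh self-signed certificate, and restarting hostd and vpxa makes the host present it):

/sbin/generate-certificates
/etc/init.d/hostd restart
/etc/init.d/vpxa restart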
After the failed VCF bring-up, I wanted to retry it. Luckily the error I encountered before was resolved, but I ran into another issue during the retry.
This time the issue was with the import of the SSH keys.
Going through some internal resources I stumbled upon the solution: since this is a nested lab environment on top of VMware Cloud Director (VCD), you have to reset the MAC address of the ESXi host.
During my latest deployment of VCF in my lab environment I ran into the following issue.
Failed to migrate vmnics of host 192.168.11.12 to DVS sfo-m01-cl01-vds01. Reason: Failed to migrate vmknic vmk0 to DvSwitch 50 22 42 8c d5 a1 d4 8f-6d 9e 8a 1e 93 ac 5b 9d
The error is pretty clear: the migration of vmk0 from the standard vSwitch to the Distributed vSwitch failed on esx02. I checked esx01, and on that host the migration was successful.
I tried manually migrating vmk0 to the Distributed vSwitch in vCenter, which also ran into an error: Right-click the dvSwitch -> Add and Manage Hosts -> Manage Host Networking -> Select esx02.
Click Next and leave the physical adapters as they are, then click Next again. On the next screen, click “Assign Port Group” next to vmk0.
Click ASSIGN next to the management portgroup.
Next, Next, Finish... The task starts running and fails after a few seconds.
It turns out to be a MAC address conflict: ESXi gives vmk0 the MAC address of the physical NIC it was created on. By deleting and recreating the vmk0 interface, you generate a new MAC address for vmk0.
Steps to check, delete and recreate vmk0 interface
Login via DCUI
Enable ESXi Shell
Next, press ALT+F1 to access the ESXi console and log in as root.
Type the command: esxcli network ip interface list
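To confirm the conflict, compare the MAC address that vmk0 reports with the MAC address of the physical uplink from esxcli network nic list; on an affected host they are identical. Abridged example output (the MAC addresses are placeholders and most fields are trimmed; your values will differ):

esxcli network ip interface list
vmk0
   Name: vmk0
   MAC Address: 00:50:56:xx:xx:xx
   Enabled: true
   Portset: vSwitch0
   Portgroup: Management Network

esxcli network nic list
Name    MAC Address        Description
vmnic0  00:50:56:xx:xx:xx  ...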
Make a note of the portgroup, in this case “Management Network”, and then remove vmk0 with the following command: esxcli network ip interface remove --interface-name=vmk0. Keep in mind that removing vmk0 drops the management connection, which is why we work from the console rather than over SSH.
When vmk0 is deleted, we can immediately create a new interface with the same name and portgroup. This is done with the following command: esxcli network ip interface add --interface-name=vmk0 -p "Management Network"
To check whether vmk0 has been recreated, type the command: esxcli network ip interface list
Press ALT+F2 to return to the ESXi DCUI and log in to disable the ESXi Shell. Now we can configure the IP settings again via the DCUI.
Go to Configure Management Network -> IPv4 Configuration and set the static IPv4 configuration.
Hit Enter, then Esc, and confirm with Yes to restart the management network.
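Alternatively, the static IP configuration can also be set from the ESXi Shell instead of the DCUI. A sketch using the host address from the error above; the netmask and gateway are assumptions for this lab, so substitute your own values:

esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=192.168.11.12 --netmask=255.255.255.0
esxcli network ip route ipv4 add --network=default --gateway=192.168.11.1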
Now we can retry the deployment via Cloud Builder. After this, the bring-up completed successfully.