How to Remove Inaccessible vSAN Objects in vSphere: Step-by-Step Guide

This post describes how to remove inaccessible objects within vSAN.

Open an SSH session to the vCenter Server and start the Ruby vSphere Console (RVC) by entering rvc localhost on the command line.

Navigate to the vSAN cluster from which you want to remove the inaccessible objects, using cd to change levels and ls to list the contents at each step, like this:
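
A minimal sketch of such a navigation, assuming a datacenter named Datacenter and a cluster named vSAN-Cluster (adjust the path to your own inventory):

cd /localhost/Datacenter/computers/vSAN-Cluster
ls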

Verify the state of the vSAN objects using the command vsan.check_state -r . (the dot refers to the current cluster path). This check involves three steps:

  • Checking for inaccessible vSAN objects
  • Verifying invalid or inaccessible VMs, and
  • Checking for VMs for which VC/hostd/vmx are out of sync

During this check, as you can see in the next screenshot, there are four inaccessible objects with the same UUIDs as those listed under Virtual Objects in the vSphere Client.

To remove them, open an SSH session to any ESXi host in the cluster and use the command /usr/lib/vmware/osfs/bin/objtool delete -u <UUID> -f, replacing <UUID> with the UUID of the object you want to remove.
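
For example, to delete one object with a purely hypothetical UUID:

/usr/lib/vmware/osfs/bin/objtool delete -u 9a8b7c6d-1234-5678-90ab-cdef01234567 -f

Repeat the command for each inaccessible UUID reported by vsan.check_state.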

After you remove all inaccessible objects and run vsan.check_state -r . once again, you should no longer see any inaccessible objects.

Disks marked ‘Ineligible for use by VSAN’ can’t be added to vSAN disk groups

I had the opportunity to test a Dell vSAN node. I had an older unattended-install ESXi ISO, which installed the ESXi OS on the wrong disk. After a correct reinstall, vSAN did not see this disk as ready for use. Combining the following articles, Dell VxRail vSAN Drives Ineligible and identify-and-solve-ineligible-disk-problems-in-virtual-san/,
I solved this problem with the following steps:

Step 1: Identify the Disk with vdq -qH

Step 2: Use partedUtil get "/dev/disks/<DISK>" to list all partitions:

partedUtil get "/dev/disks/t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C"

Step 3: This disk has 2 partitions. Use the partedUtil delete "/dev/disks/<DISK>" <PARTITION> command to delete all partitions:
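
For example, for the disk above (assuming partedUtil get reported partitions 1 and 2):

partedUtil delete "/dev/disks/t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C" 1
partedUtil delete "/dev/disks/t10.NVMe____Dell_Ent_NVMe_CM6_MU_3.2TB______________017D7D23E28EE38C" 2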

Step 4: When all partitions are removed, rescan the storage adapters:

~ # esxcli storage core adapter rescan --all

Step 5: Claim Unused Disks
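
Before claiming the disk, you can re-run vdq -qH to confirm it is now reported as eligible for vSAN. The claiming itself can be done from the vSphere Client; as a sketch, a new disk group could also be created with ESXCLI, where the device identifiers below are placeholders to substitute with the names reported by vdq:

vdq -qH
esxcli vsan storage add -s <cache_device> -d <capacity_device>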

ESXi Unattended Install on Dell BOSS Controller

I had the opportunity to test a Dell vSAN node. I had an older unattended-install ESXi ISO, which installed the ESXi OS on the wrong disk.

I hate typing a very complex password twice, so automation is key.
I love the ks.cfg (kickstart) install option.

However, following this guide did not do the trick:
https://www.dell.com/support/kbdoc/en-us/000177584/automating-operating-system-deployment-to-dell-boss-techniques-for-different-operating-systems

VMware ESXi Automated Install

This did not work:
install --overwritevmfs --firstdisk="DELLBOSS VD"

After doing a manual install, I could see the correct disk name.

What works:

# For Dell BOSS Controller "Dell BOSS-N1"

install --overwritevmfs --firstdisk="Dell BOSS-N1"
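
For context, a minimal ks.cfg around this install line might look like the sketch below; the root password and network settings are placeholders, not the values I actually used:

vmaccepteula
# Placeholder root password, replace with your own
rootpw SuperSecret123!
# For Dell BOSS Controller "Dell BOSS-N1"
install --overwritevmfs --firstdisk="Dell BOSS-N1"
# Placeholder network configuration
network --bootproto=dhcp
reboot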

My HomeLab anno 2024

My home lab is mainly used for testing new stuff released by VMware (70%), Microsoft (20%), and other vendors (10%).

As the base I use my home PC:

Intel i5 12600k
128 GB Memory
2 x 2TB Samsung 980 and 990 Pro SSD.
Windows 11 Pro
VMware Workstation Pro

On my home PC I run:
Server 2022 (Eval for DC)
ESXi801 (16 GB) (NSX Demo Cluster)
ESXi802 (16 GB) (NSX Demo Cluster)
ESXi803 (64 GB) (General Cluster)
ESXi804 (64 GB) (General Cluster)
ESXi805 (24 GB) (Single Node vSAN Cluster)
ESXi806 (16 GB) (4 Node vSAN Cluster)
ESXi807 (16 GB) (4 Node vSAN Cluster)
ESXi808 (16 GB) (4 Node vSAN Cluster)
ESXi809 (16 GB) (4 Node vSAN Cluster)

ESXi701 (24GB) (General Cluster)
ESXi702 (24GB) (General Cluster)

Most of my VMs run in the General Cluster. This is also where I test Packer and Terraform.

For a while I used a 2TB Samsung SSD as storage for the ESXi servers through TrueNAS,
but I wanted larger storage for all my VMs.

After reading William Lam's blog posts Synology DS723+ in Homelab and Synology NFS VAAI Plug-in support for vSphere 8.0, I did a nice upgrade.

I did not use the original Synology parts. The following parts work fine:
Kingston 16 GB DDR4-3200 notebook memory
WD Red SN700, 500 GB SSD
WD Red Pro, 8 TB

* For Read-Write caching you need 2 SSD devices.

For mounting the NFS share I created a little PowerCLI script:

https://github.com/WardVissers/VMware-Powercli-Public/blob/main/Add%20NFS%20DataStore%20Github.ps1
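
As a quick alternative for a single host, the NFS datastore can also be mounted with ESXCLI; the NFS server name, export path and datastore name below are placeholders for illustration:

esxcli storage nfs add -H nas01.lab.local -s /volume1/vmware -v NFS-Synology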

VCF 5.0 running inside Nested ESXi server with only 64GB Memory

I was interested in trying to deploy the latest release of VMware Cloud Foundation (VCF) 5.0 on my Windows 11 home PC, which has 128GB of memory and a 16-core Intel CPU.

William Lam wrote a nice article about VMware Cloud Foundation 5.0 running on Intel NUC.

Disclaimer: This is not officially supported by VMware, please use at your own risk.

Requirements:

  • VMware Cloud Builder 5.0 OVA (Build 21822418)
  • VCF 5.0 Licenses Through VMUG ADVANTAGE
  • Home PC (Not Special Hardware)
    – 128GB Memory
    – Intel 12600 CPU
    – 4TB of NVME Storage
  • Windows 11 with VMware Workstation 17

Setup

Virtual Machines

  • DC02 (Domain Controller, DNS Server) (4GB 2vcpu)
  • VCF-M01-ESX01 (ESXi 8.0 Update 1a) (64GB RAM, 1x 140GB disk, 2x 600GB NVMe, 2x NIC, everything thin provisioned)
  • VCF-M01-CB01 (4GB and 4 vCPU), only needed during the first deployment

Network settings on my PC

  • 1 IP In my home network
  • 172.16.12.1 (to fool Cloud Builder)
  • 172.16.13.1 (to fool Cloud Builder)

Procedure:

Install and Configure ESXi

Step 1 – Boot the ESXi installer from the mounted ISO and then perform a standard ESXi installation.

Step 2 – Once ESXi is up and running, you will need to minimally configure networking along with an FQDN (ensure proper DNS resolution), NTP, and specify which SSD should be used for the vSAN capacity drive. You can use the DCUI to set up the initial networking, but I recommend switching to the ESXi Shell afterwards and finishing the required preparation steps, as demonstrated in the following ESXCLI commands:

esxcli system ntp set -e true -s pool.ntp.org
esxcli system hostname set --fqdn vcf-m01-esx01.wardvissers.nl

Note: Use the vdq -q command to query the available disks for use with vSAN and ensure there are no partitions residing on the 600GB disks.
Don't change the time server pool.ntp.org.
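
Optionally, verify that the NTP and hostname settings were applied before continuing:

esxcli system ntp get
esxcli system hostname get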

To ensure that the self-signed TLS certificate that ESXi generates matches that of the FQDN that you had configured, we will need to regenerate the certificate and restart hostd for the changes to go into effect by running the following commands within ESXi Shell:

/bin/generate-certificates
/etc/init.d/hostd restart

Cloud Builder Config

Step 3 – Deploy the VMware Cloud builder in a separate environment and wait for it to be accessible over the browser. Once CB is online, download the setup_vmware_cloud_builder_for_one_node_management_domain.sh setup script and transfer that to the CB system using the admin user account (root is disabled by default).

Step 4 – Switch to the root user, set the script to have executable permission, and run the script as shown below:

su -
chmod +x setup_vmware_cloud_builder_for_one_node_management_domain.sh
./setup_vmware_cloud_builder_for_one_node_management_domain.sh

The script will take some time, especially as it converts the NSX OVA to OVF and back to OVA. If everything was configured successfully, you should see output like that in the screenshot.

Step 5 – Download the example JSON deployment file vcf50-management-domain-example.json and adjust the values based on your environment. In addition to changing the hostnames/IP addresses, you will also need to replace all the FILL_ME_IN_VCF_*_LICENSE_KEY placeholders with valid VCF 5.0 license keys.

Step 6 – The vmnic in the Cloud Builder VM is recognized as a 10Gb NIC, so I started the deployment not through PowerShell but the normal way, in the Cloud Builder GUI.

Your deployment time will vary based on your physical resources, but it should eventually complete with everything showing success. (I needed one retry to finish.)

Here are some screenshots of the VCF 5.0 deployment running on my home PC.

Problems

Check this if you have problems logging in NSX:
https://www.wardvissers.nl/2023/07/26/nsx-endless-spinning-blue-cirle-after-login/

Next Steps.

1. Redeploy with use of the Holo-Router: https://core.vmware.com/resource/holo-toolkit-20-deploy-router#deploy-holo-router

2. Test whether I can deploy a single-host VCF Workload Domain in the same way, by following this blog post HERE! 😁 That is, if I can start another 64GB ESXi server.

Holodeck Toolkit Overview

Holodeck Toolkit 1.3 Overview

The VMware Cloud Foundation (VCF) Holodeck Toolkit is designed to provide a scalable, repeatable way to deploy nested Cloud Foundation hands-on environments directly on VMware ESXi hosts. These environments are ideal for multi-team hands-on exercises exploring the capabilities of utilizing VCF to deliver a Customer Managed VMware Cloud.

Delivering labs in a nested environment solves several challenges with delivering hands-on labs for a product like VCF, including:

  • Reduced hardware requirements: When operating in a physical environment, VCF requires four vSAN Ready Nodes for the management domain, and additional hosts for adding clusters or workload domains. In a nested environment, the same four to eight hosts are easily virtualized to run on a single ESXi host.   
  • Self-contained services: The Holodeck Toolkit configuration provides common infrastructure services, such as NTP, DNS, AD, Certificate Services and DHCP within the environment, removing the need to rely on datacenter provided services during testing.  Each environment needs a single external IP.
  • Isolated networking. The Holodeck Toolkit configuration removes the need for VLAN and BGP connections in the customer network early in the testing phase.  
  • Isolation between environments. Each Holodeck deployment is completely self-contained. This avoids conflicts with existing network configurations and allows for the deployment of multiple nested environments on same hardware or datacenter with no concerns for overlap. 
  • Multiple VCF deployments on a single VMware ESXi host with sufficient capacity. A typical VCF Standard Architecture deployment of a four-node management domain and a four-node VI workload domain, plus add-ons such as VMware vRealize Automation, requires approximately 20 CPU cores, 512GB memory and 2.5TB disk.
  • Automation and repeatability. The deployment of nested VCF environments is almost completely hands-off, and easily repeatable using configuration files.  A typical deployment takes less than 3 hours, with less than 15 min keyboard time.

Nested Environment Overview 

The “VLC Holodeck Standard Main 1.3” configuration is a nested VMware Cloud Foundation configuration used as the baseline for several Private Cloud operation and consumption lab exercises created by the Cloud Foundation Technical Marketing team. The Holodeck standard “VLC-Holo-Site-1” is the primary configuration deployed. The optional VLC-Holo-Site-2 can be deployed at any time later within a Pod.  VLC-Holo-Site-1 configuration matches the lab configuration in the VCF Hands-On Lab HOL-2246 and the nested configuration in the VCF Experience program run on the VMware Lab Platform. 

Each Pod on a Holodeck deployment runs an identical nested configuration. A pod can be deployed with a standalone VLC-Holo-Site-1 configuration, or with both VLC-Holo-Site-1 and VLC-Holo-Site-2 configurations active. Separation of the pods and between sites within a pod is handled at the VMware vSphere Standard Switch (VSS) level.  Each Holodeck pod connects to a unique VSS and Port Group per site.    A VMware vSphere Port Group is configured on each VSS and configured as a VLAN trunk.  

  • Components on the port group use VLAN tagging to isolate communications between nested VLANs. This removes the need to have physical VLANs plumbed to the ESXi host to support nested labs.
  • When the Holo-Site-2 configuration is deployed it uses a second VSS and Port Group for isolation from Holo-Site-1  

The VLC Holodeck configuration customizes the VCF Cloud Builder Virtual Machine to provide several support services within the pod to remove the requirement for specific customer side services. A Cloud Builder VM is deployed per Site to provide the following within the pod: 

  • DNS (local to Site1 and Site2 within the pod, acts as forwarder) 
  • NTP (local to Site1 and Site2 within the pod) 
  • DHCP (local to Site1 and Site2 within the pod) 
  • L3 TOR for vMotion, vSAN, Management, Host TEP and Edge TEP networks within each site 
  • BGP peer from VLC Tier 0 NSX Application Virtual Network (AVN) Edge (Provides connectivity into NSX overlay networks from the lab console)

The figure below shows a logical view of the VLC-Holo-Site-1 configuration within a Holodeck Pod. The Site-1 configuration uses DNS domain vcf.sddc.lab.

 Figure 1: Holodeck Nested Diagram

The Holodeck package also provides a preconfigured Photon OS VM, called “Holo-Router”, that functions as a virtualized router for the base environment. This VM allows for connecting the nested environment to the external world. The Holo-Router is configured to forward any Microsoft Remote Desktop (RDP) traffic to the nested jump host, known as the Holo-Console, which is deployed within the pod.

The user interface to the nested VCF environment is via a Windows Server 2019 "Holo-Console" virtual machine. Holo-Console provides a place to manage the internal nested environment, like a system administrator's desktop in a datacenter. Holo-Console is used to run the VLC package to deploy the nested VCF instance inside the pod. Holo-Console VMs are deployed from a custom-built ISO that configures the following:

  • Microsoft Windows Server 2019 Desktop Experience with: 
  • Active directory domain “vcf.holo.lab” 
  • DNS Forwarder to Cloud Builder  
  • Certificate Server, Web Enrollment and VMware certificate template 
  • RDP enabled 
  • IP, Subnet, Gateway, DNS and VLAN configured for deployment as Holo-Console  
  • Firewall and IE Enhanced security disabled  
  • SDDC Commander custom desktop deployed 
  • Additional software packages deployed and configured 
  • Google Chrome with Holodeck bookmarks 
  • VMware Tools 
  • VMware PowerCLI 
  • VMware PowerVCF 
  • VMware Power Validated Solutions 
  • PuTTY SSH client 
  • VMware OVFtool 
  • Additional software packages copied to Holo-Console for later use 
  • VMware Cloud Foundation 4.5 Cloud Builder OVA to C:\CloudBuilder 
  • VCF Lab Constructor 4.5.1 with dual site Holodeck configuration
    • VLC-Holo-Site-1 
    • VLC-Holo-Site-2 
  • VMware vRealize Automation 8.10 Easy Installer

The figure below shows the virtual machines running on the physical ESXi host to deliver a Holodeck Pod called "Holo-A". Notice an instance of Holo-Console, Holo-Router, Cloud Builder and four nested ESXi hosts. They all communicate over the VLC-A-PG Port Group.

Figure 2: Holodeck Nested Hosts

Adding a second site adds an additional instance of Cloud Builder and additional nested ESXi hosts. VLC-Holo-Site-2 connects to the second internal leg of the Holo-Router on VLAN 20. Network access from the Holo-Console to VLC-Holo-Site-2 is via Holo-Router.

The figure below shows a logical view of the VLC-Holo-Site-2 configuration within a Holodeck Pod. The Site-2 configuration uses DNS domain vcf2.sddc.lab

 Figure 3: Holodeck Site-2 Diagram

Accessing the Holodeck Environment

User access to the Holodeck pod is via the Holo-Console.  Access to Holo-Console is available via two paths:

VLC Holodeck Deployment Prerequisites 

  • ESXi Host Sizing   
  • Good (one pod): Single ESXi host with 16 cores, 384GB memory and 2TB SSD/NVMe
  • Better (two pods): Single ESXi host with 32 cores, 768GB memory and 4TB SSD/NVMe
  • Best (four or more pods): Single ESXi host with 64+ cores, 2.0TB memory and 10TB SSD/NVMe
  • ESXi Host Configuration: 
  • vSphere 7.0U3 
  • Virtual switch and port group configured with uplinks to customer network/internet  
  • Supports stand alone, non vCenter Server managed host and single host cluster managed by a vCenter server instance 
  • Multi host clusters are NOT supported
  • Holo-Build host 
  • Windows 2019 host or VM with local access to the ESXi hosts used for Holodeck, plus internet access to download software. (This package has been tested on Microsoft Windows Server 2019 only)
  • 200GB free disk space 
  • Valid login to https://customerconnect.vmware.com  
  • Entitlement to VCF 4.5 Enterprise for 8 hosts minimum (16 hosts if planning to test Cloud Foundation Multi region with NSX Federation) 
  • License keys for the following VCF 4.5 components
    • VMware Cloud Foundation
    • VMware NSX-T Data Center Enterprise
    • VMware vSAN Enterprise 
    • VMware vSphere Enterprise Plus 
    • VMware vCenter Server (one license)
    • VMware vRealize Suite Advanced or Enterprise
    • Note: This product has been renamed VMware Aria Suite
  • External/Customer networks required
    • ESXi host management IP (one per host) 
    • Holo-Router address per pod

Enable Virtualization-based Security on a Virtual Machine on Nested ESXi Server in VMware Workstation

First step: shut down the ESXi server VM and enable encryption.

Second step: add a vTPM.


Boot ESXi Server(s)

Configure Key Providers (Add Native Key Provider)


Now you can add a vTPM to your VM.
Don't forget to enable VBS.


Create a GPO "SRV 2022 – Virtualization Based Security"; I applied it only to my Server 2022 lab environment.

System Information on my Server 2022 Lab Server

Virtual Machine with Windows Server 2022 with KB5022842 (Feb 2023) installed and configured with secure boot will not boot up on vSphere 7 unless updated to 7.0u3k (vSphere 8 not affected)

VMware Security Violation message

In the VM's vmware.log, there is 'Image DENIED' information like the below:
2023-02-15T05:34:31.379Z In(05) vcpu-0 – SECUREBOOT: Signature: 0 in db, 0 in dbx, 1 unrecognized, 0 unsupported alg.
2023-02-15T05:34:31.379Z In(05) vcpu-0 – Hash: 0 in db, 0 in dbx.
2023-02-15T05:34:31.379Z In(05) vcpu-0 – SECUREBOOT: Image DENIED.

To identify the location of vmware.log files:

  1. Establish an SSH session to your ESXi host.
  2. Log in to the ESXi host CLI using the root account.
  3. To list the locations of the configuration files for the virtual machines registered on the host, run the command below:

# vim-cmd vmsvc/getallvms | grep -i "VM_Name"

  4. The vmware.log file is located in the virtual machine folder along with the .vmx file.
  5. Record the location of the .vmx configuration file for the virtual machine you are troubleshooting. For example:

/vmfs/volumes/xxxxxxxx-xxxxxxx-c1d2-111122223333/vm1/vm1.vmx
/vmfs/volumes/xxxxxxxx-xxxxxxx-c1d2-111122223333/vm1/vmware.log
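
You can then check that log for the secure boot denial, for example (reusing the example path above):

grep -i SECUREBOOT /vmfs/volumes/xxxxxxxx-xxxxxxx-c1d2-111122223333/vm1/vmware.log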

Resolution

This issue is resolved in VMware ESXi 7.0 Update 3k (build 21313628), released on February 21st, 2023.

Notes:

  • Virtual machines running on any version of vSphere ESXi 8.0.x are not impacted by this issue
  • vSphere ESXi 6.7 has reached End of General Support. For more information, see The End of General Support for vSphere 6.5 and vSphere 6.7 is October 15, 2022.
  • If you already face the issue, after patching the host to ESXi 7.0 Update 3k, just power on the affected Windows Server 2022 VMs. After you patch a host to ESXi 7.0 Update 3k, you can migrate a running Windows Server 2022 VM from a host of version earlier than ESXi 7.0 Update 3k, install KB5022842, and the VM boots properly without any additional steps required.

Building NSX-T 3.1 Home lab Step 1

I'm doing a mini-series on my NSX-T home lab setup. It's only for testing and building knowledge about NSX-T.

With newer versions of NSX-T 3.1 and later a couple of enhancements have been made that makes the setup a lot easier, like the move to a single N-VDS with the ability to run NSX on a Virtual Distributed Switch (VDS) in vCenter with VDS version 7.0.

In NSX-T 3.1.1 we got the ability to have the Edge TEP on the same subnet as the hypervisor TEP. A nice write-up of this feature can be found here: https://www.virten.net/2020/11/nsx-t-3-1-enhancement-shared-esxi-and-edge-transport-vlan-with-a-single-uplink/

Lab environment

First let’s have a quick look at the lab environment:

Compute

I have one ESXi server, a Dell R730. I use only one NIC for management and virtual machine traffic.

Network

My home network consists of a single VLAN:

VLAN: 0
Subnet: 192.168.150.0/24
Role: Management/Virtual Machine Traffic
Virtual Switch: vSwitch0

 Also ensure you enable the required security settings to support nested virtualization:
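
On a standard vSwitch these are typically Promiscuous Mode and Forged Transmits; a sketch of enabling them on vSwitch0 with ESXCLI (adjust the vSwitch name to your environment):

esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true --allow-forged-transmits=true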

Virtual Machines

I run a virtualized vSphere 7 Cluster on my host


The Distributed Virtual Switches are running version 7.0.0, which lets us deploy NSX-T on the VDS directly.


Preparations

Check out the NSX-T Data Center Workflow for vSphere for details and documentation on the process

IP Addresses and DNS records

Before deploying NSX-T in the environment I've prepared a few IP addresses and DNS records:

NSX Manager: 192.168.150.229
NSX-T Edge node 1: 192.168.150.227
NSX-T Edge node 2 (currently not in use): 192.168.150.228
NSX-T T0 GW Interface 1: 192.168.99.2

Note that I’ve reserved addresses for a second Edge which I’m not going to use at the moment.

Deploy NSX manager appliance

VMware documentation reference

I downloaded the NSX Manager appliance and imported the OVF into the cluster. I won't go into details about this; I just followed the deployment wizard.

In my lab I've chosen to deploy a small appliance, which requires 4 vCPUs, 16 GB RAM and 300 GB disk space. For more details about the NSX Manager requirements, look at the official documentation.

Note that I'll not be deploying an NSX Manager cluster in my setup. In a production environment you should naturally follow best practices and configure a cluster of NSX Managers.

NSX-T deployment

Now let’s get rocking with our NSX-T setup!

We’ll start the NSX manager and prepare it for configuring NSX in the environment

Initial Manager config

After first login I’ll accept the EULA and optionally enable the CEIP

License

Next I’ll add the license.

Add license

Import certificate

Imported certificates

IP Pools

Our tunnel endpoints (TEPs) will need IP addresses and I've set aside a subnet for this as mentioned. In NSX Manager we'll add an IP pool with addresses from this subnet. (The IP pool I'm using is probably way larger than needed in a lab setup like this.)


TEP pool

Compute Manager

With all that sorted we’ll connect the NSX manager to our vCenter server so we can configure our ESXi hosts and deploy our edge nodes.

Best practice is to use a specific service account for the connection.


Compute Manager added

Fabric configuration

Now we’re ready for building out our network fabric which will consist of the following:

  • Transport Zones
    – Overlay
    – VLAN
  • Transport Nodes
    – ESXi Hosts
    – Edge VMs
  • Edge clusters

Take a look at this summary of the Key concepts in NSX-T to learn more about them.

Transport Zone

The first thing we'll create are the Transport Zones. These will be used on multiple occasions later on. A Transport Zone is used as a collection of hypervisor hosts that makes up the span of logical switches.

The defaults could be used, but I like to create my own.


Transport Zones

Uplink Profiles

Uplink profiles will be used when we configure our Transport Nodes, both Hosts and Edge VMs. The profile defines how a Host Transport node (hypervisor) or an Edge Transport node (VM) will connect to the physical network.

Again I'm creating my own profile and leaving the default profiles as they are.


Uplink profile

In my environment I have only one Uplink to use. Note that I’ve set the Transport VLAN to 0 which also corresponds with the TEP VLAN mentioned previously.

Transport Node Profile

Although not strictly needed, I’m creating a Transport Node profile which will let me configure an entire cluster of hosts with the same settings instead of having to configure each and every host

In the Transport Node profile we first select the type of Host switch. In my case I’m selecting the VDS option, which will let me select a specific switch in vCenter.

We’ll also add in our newly created Transport Zones


Creating Transport Node profile

We'll select our Uplink profile and the IP Pool we created earlier; finally we can set the mapping between the uplinks.

vCenter View

Creating Transport Node profile

Configure NSX on hosts

With our Transport Node profile we can go ahead and configure our ESXi hosts for NSX

Configure cluster for NSX


Select profile

After selecting the profile NSX Manager will go ahead and configure our ESXi hosts.

Hosts configuring

After a few minutes our hosts should be configured and ready for NSX


Hosts configured

Trunk segment

Next up is to create our Edge VMs which we will need for our Gateways and Services (NAT, DHCP, Load Balancer).

But before we deploy those we'll have to create a segment for the uplink of the Edge VMs. This will be a Trunk segment, which we create in NSX. Initially I created a Trunk portgroup on the VDS in vSphere, but that doesn't work: the Trunk needs to be configured as a logical segment in NSX-T when using the same VLAN for both the hypervisor TEPs and the Edge VM TEPs.


Trunk segment

Edge VM

Now we can deploy our Edge VM(s). I'm using Medium sized VMs in my environment. Note that the Edge VMs are not strictly necessary for the test we'll perform later on with connecting two VMs, but if we want to use some services later on, like DHCP, load balancing and so on, we'll need them.

Deploy edge VM

Deploy edge VM

Note the NSX config, where we set the switch name, the Transport Zones we created, the Uplink profile, the IP pool and finally we use the newly created Trunk segment for the Edge uplink

NSX Edge config

Edge cluster

We’ll also create an Edge cluster and add the Edge VM to it

Edge cluster

Summary

Wow, this was a lot of configuring, but that was also the whole point of doing this blog post. Stuff like this is learnt best by getting your hands dirty and doing some actual work. And I learn even better when I'm writing and documenting it as well.

In the next blog post we’ll test the fabric to see if what we’ve done is working. We’ll also try to get some external connectivity to our environment.

Hopefully this post can help someone, if not it has at least helped me.

Thanks for reading!

Special thanks to https://rudimartinsen.com/2021/06/29/nsx-t-31-homelab/ for his blog post.

Removal of SD card/USB as a standalone boot device option after vSphere 7.x

On the 16th of September, VMware published a KB article which contains statements on the removal of SD card/USB as a standalone boot device option.

USB/SD is not the right choice going forward! Why? The volume of reads/writes to and from the OS-DATA partition continues to increase with every release, which means that the lower grade devices will simply wear out faster.

Options for future versions of ESXi (after 7.x)

When you buy new hardware, make sure to have a proper persistent storage device, like a Dell BOSS card or an HPE OS boot device.

Options for 7.x

What are the fixes that will help with SD/USB issues that customers are seeing?
https://kb.vmware.com/s/article/83963
https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u2c-release-notes.html
  (Please look for USB device and Storage stack issues fixed with this release)
