Virtual Machine with Windows Server 2022 with KB5022842 (Feb 2023) installed and configured with secure boot will not boot up on vSphere 7 unless updated to 7.0u3k (vSphere 8 not affected)

A virtual machine running Windows Server 2022 with KB5022842 (February 2023) installed and Secure Boot enabled will not boot up on vSphere 7 unless the host is updated to 7.0 U3k (vSphere 8 is not affected).

VMware Security Violation message

In the VM's vmware.log, there are 'Image DENIED' entries like the ones below:
2023-02-15T05:34:31.379Z In(05) vcpu-0 - SECUREBOOT: Signature: 0 in db, 0 in dbx, 1 unrecognized, 0 unsupported alg.
2023-02-15T05:34:31.379Z In(05) vcpu-0 - Hash: 0 in db, 0 in dbx.
2023-02-15T05:34:31.379Z In(05) vcpu-0 - SECUREBOOT: Image DENIED.

To identify the location of vmware.log files:

  1. Enable SSH and establish an SSH session to your ESXi host.
  2. Log in to the ESXi host CLI using the root account.
  3. To list the locations of the configuration files for the virtual machines registered on the host, run the command below:

#vim-cmd vmsvc/getallvms | grep -i "VM_Name"

  4. The vmware.log file is located in the virtual machine folder along with the .vmx file.
  5. Record the location of the .vmx configuration file for the virtual machine you are troubleshooting. For example:

/vmfs/volumes/xxxxxxxx-xxxxxxx-c1d2-111122223333/vm1/vm1.vmx
/vmfs/volumes/xxxxxxxx-xxxxxxx-c1d2-111122223333/vm1/vmware.log
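If you prefer PowerCLI over SSH, the sketch below lists the .vmx path (and therefore the folder that contains vmware.log) for a given VM. It assumes an existing Connect-VIServer session, and "VM_Name" is a placeholder:

# PowerCLI sketch: show the datastore path of the VM's .vmx file; vmware.log lives in the same folder
Get-VM -Name "VM_Name" | Select-Object Name, @{N="VmxPath";E={$_.ExtensionData.Config.Files.VmPathName}}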

Resolution

This issue is resolved in VMware ESXi 7.0 Update 3k (build 21313628), released on February 21st, 2023.
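To check whether your hosts are already on ESXi 7.0 Update 3k, a quick PowerCLI sketch like the one below simply lists the version and build number per host (assuming an existing Connect-VIServer session):

# List ESXi version and build per host; 7.0 Update 3k corresponds to build 21313628
Get-VMHost | Select-Object Name, Version, Build | Sort-Object Name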
Notes:

  • Virtual machines running on any version of vSphere ESXi 8.0.x are not impacted by this issue
  • vSphere ESXi 6.7 has reached End of General Support. For more information, see The End of General Support for vSphere 6.5 and vSphere 6.7 is October 15, 2022.
  • If you already face the issue, patch the host to ESXi 7.0 Update 3k and then simply power on the affected Windows Server 2022 VMs. After you patch a host to ESXi 7.0 Update 3k, you can also migrate a running Windows Server 2022 VM from a host of a version earlier than ESXi 7.0 Update 3k, install KB5022842, and the VM boots properly without any additional steps required.

NSX-T 3.1 Home lab – Part 2

With all the Fabric configuration done we can test our setup.

I'm creating two overlay segments in NSX connected to a Tier-1 gateway, and after that we'll create a Tier-0 gateway and connect the T1 gateway to it to get North/South connectivity to the overlay resources.

Two VMs will be deployed, one VM in each of the two overlay segments.

Create a Tier-1 Gateway

The Tier-1 Gateway will initially not be connected to a Tier-0 Gateway (I haven't configured a T0 gateway yet) or to an Edge Cluster.

Tier-1 Gateway


Create Logical segments

We need two logical segments, both using the Overlay Transport Zone. I’m defining different subnets on them, 10.0.1.0/24 and 10.0.2.0/24.


Segments
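For reference, the same kind of overlay segment can also be created through the NSX-T Policy API instead of the UI. The sketch below is only a minimal example: the manager FQDN, credentials, transport zone ID and Tier-1 path are placeholders for my lab values, and it assumes PowerShell 7+ for the -Authentication and -SkipCertificateCheck parameters.

# Minimal sketch: create an overlay segment attached to the T1 gateway via the NSX-T Policy API
$nsx  = "nsxmanager.lab.local"           # placeholder NSX Manager FQDN
$cred = Get-Credential                   # NSX admin credentials
$body = @{
    display_name        = "segment-10.0.1.0"
    transport_zone_path = "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-id>"  # placeholder TZ id
    connectivity_path   = "/infra/tier-1s/t1-gw-01"                                                          # placeholder T1 id
    subnets             = @(@{ gateway_address = "10.0.1.1/24" })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "https://$nsx/policy/api/v1/infra/segments/segment-10.0.1.0" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck `
    -ContentType "application/json" -Body $body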

Add VMs to Logical segments

We have two Photon VMs which should be added to the logical segments.

Two Photon VMs

Test connectivity


Now let’s verify that the two VMs can ping each other


Don’t forget to enable the echo rule on the Windows Firewall….
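On a Windows test VM the echo rule can be enabled from an elevated PowerShell prompt; a minimal sketch (the display name is arbitrary):

# Allow inbound ICMPv4 echo requests (ping) through the Windows Firewall
New-NetFirewallRule -DisplayName "Allow ICMPv4 Echo Request" -Direction Inbound `
    -Protocol ICMPv4 -IcmpType 8 -Action Allow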

Connectivity test

This shows that the overlay is working, and note again that the Edge VMs are not in use here.

External connectivity

Traffic is flowing between VMs running on Logical segments inside the NSX-T environment, but what if we want to reach something outside, or reach a VM inside the NSX-T overlay from the outside?

Then we need to bring a Tier-0 Gateway into the mix.

The Tier-0 gateway can be configured with Uplinks that are connected to the physical network. This is done through a segment which can reach the physical network, normally through a VLAN.

To configure the uplink interfaces we need to have Edge VMs, so finally we get to bring those into play as well.

Create segment for uplinks

First I’ll create a segment mapped to VLAN 99 in my lab. Note that I select the VLAN transport zone, and I do not connect the segment to a gateway

Create Uplink VLAN segment


Create Tier-0 gateway

Now we'll create a Tier-0 gateway. Note that I now also select my Edge cluster.


Create T0 gateway

Static route

To be able to forward traffic out of the NSX-T environment the T0 gateway needs to know where to send queries for IPs it doesn’t control. Normally you would want to configure a routing protocol like BGP or OSPF so that the T0 gateway could exchange routes with the physical router(s) in your network.

I’ve not set up BGP or any other routing protocol on my physical router, so I’ve just configured a default static route that forwards to my physical router. The next hop is set to the gateway address for the Uplink VLAN 99, 192.168.99.1

Static route

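The same default route can also be pushed through the NSX-T Policy API; a minimal sketch, reusing the $nsx and $cred placeholders from the segment sketch earlier and assuming the Tier-0 ID "t0-gw-01" is a placeholder:

# Minimal sketch: default static route on the T0 gateway pointing at the physical router (192.168.99.1)
$body = @{
    display_name = "default-route"
    network      = "0.0.0.0/0"
    next_hops    = @(@{ ip_address = "192.168.99.1"; admin_distance = 1 })
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "https://$nsx/policy/api/v1/infra/tier-0s/t0-gw-01/static-routes/default-route" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck `
    -ContentType "application/json" -Body $body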

Link T1 gateway to T0 gateway

We've done a lot of configuring now, but we still don't have connectivity in or out for our VMs. The final step is to connect the Tier-1 gateway to the Tier-0 gateway, and we'll also activate Route Advertisement of Connected Segments and Service Ports.


Tier-1 Gateway

Test connectivity


Verify North/South connectivity

Yes!

Test Distributed Firewall

Let’s also do a quick test of the Distributed Firewall feature in NSX-T.

First we'll create a rule blocking ICMP (ping) from any source to my test VM, and publish the rule.


ICMP firewall rule

Now let's test pinging from my PC to the nested Windows 2016 server, first with the rule disabled and then with it enabled.


Ping blocked

Summary

Hopefully this post can help someone; if not, it has at least helped me.

Now we have a working environment, so we can start testing some things.
I will also look into scripting/automation against an NSX environment!

Building NSX-T 3.1 Home lab Step 1

I'm doing a mini-series on my NSX-T home lab setup. It's only for testing and building knowledge about NSX-T.

With NSX-T 3.1 and later, a couple of enhancements have been made that make the setup a lot easier, like the move away from a separate N-VDS with the ability to run NSX on a Virtual Distributed Switch (VDS) in vCenter with VDS version 7.0.

In NSX-T 3.1 we got the ability to have the Edge TEP on the same subnet as the hypervisor TEP. A nice write-up of this feature can be found here: https://www.virten.net/2020/11/nsx-t-3-1-enhancement-shared-esxi-and-edge-transport-vlan-with-a-single-uplink/

Lab environment

First let’s have a quick look at the lab environment:

Compute

I have one ESXi server, a Dell R730. I use only one NIC for Management and Virtual Machine traffic.

Network

My home network consists of a single VLAN:

VLAN   Subnet              Role                                  Virtual Switch
0      192.168.150.0/24    Management/Virtual Machine Traffic    vSwitch0

Also ensure you enable the required security settings on the virtual switch (such as promiscuous mode and forged transmits) to support nested virtualization.

Virtual Machines

I run a virtualized vSphere 7 Cluster on my host


The Distributed Virtual Switches are running version 7.0.0, which lets us deploy NSX-T on the VDS directly.


Preparations

Check out the NSX-T Data Center Workflow for vSphere for details and documentation on the process

IP Addresses and DNS records

Before deploying NSX-T in the environment I've prepared a few IP addresses and DNS records:

Role                                        IP
NSX Manager                                 192.168.150.229
NSX-T Edge node 1                           192.168.150.227
NSX-T Edge node 2 (currently not in use)    192.168.150.228
NSX-T T0 GW Interface 1                     192.168.99.2

Note that I’ve reserved addresses for a second Edge which I’m not going to use at the moment.

Deploy NSX manager appliance

VMware documentation reference

I downloaded the NSX Manager appliance and imported the OVF to the cluster. I won't go into details about this; I just followed the deployment wizard.

In my lab I’ve selected to deploy a small appliance which requires 4 vCPUs, 16 GB RAM and 300 GB disk space. For more details about the NSX Manager requirements look at the official documentation

Note that I'll not be deploying an NSX Manager cluster in my setup. In a production environment you should naturally follow best practices and configure a cluster of NSX Managers.

NSX-T deployment

Now let’s get rocking with our NSX-T setup!

We’ll start the NSX manager and prepare it for configuring NSX in the environment

Initial Manager config

After first login I’ll accept the EULA and optionally enable the CEIP

License

Next I’ll add the license.

Add license

Import certificate

Imported certificates

IP Pools

Our Endpoints will need IP addresses and I’ve set aside a subnet for this as mentioned. In NSX Manager we’ll add an IP pool with addresses from this subnet. (The IP pool I’m using is probably way larger than needed in a lab setup like this)


TEP pool

Compute Manager

With all that sorted we’ll connect the NSX manager to our vCenter server so we can configure our ESXi hosts and deploy our edge nodes.

It's best to use a dedicated service account for this connection.


Compute Manager added

Fabric configuration

Now we’re ready for building out our network fabric which will consist of the following:

  • Transport Zones
      • Overlay
      • VLAN
  • Transport Nodes
      • ESXi Hosts
      • Edge VMs
  • Edge clusters

Take a look at this summary of the Key concepts in NSX-T to learn more about them.

Transport Zone

The first thing we'll create are the Transport Zones. These will be used on multiple occasions later on. A Transport Zone is used as a collection of hypervisor hosts that makes up the span of logical switches.

The defaults could be used, but I like to create my own.


Transport Zones

Uplink Profiles

Uplink profiles will be used when we configure our Transport Nodes, both Hosts and Edge VMs. The profile defines how a Host Transport node (hypervisor) or an Edge Transport node (VM) will connect to the physical network.

Again I'm creating my own profile and leaving the default profiles as they are.


Uplink profile

In my environment I have only one Uplink to use. Note that I’ve set the Transport VLAN to 0 which also corresponds with the TEP VLAN mentioned previously.

Transport Node Profile

Although not strictly needed, I’m creating a Transport Node profile which will let me configure an entire cluster of hosts with the same settings instead of having to configure each and every host

In the Transport Node profile we first select the type of Host switch. In my case I’m selecting the VDS option, which will let me select a specific switch in vCenter.

We’ll also add in our newly created Transport Zones


Creating Transport Node profile

We'll select our Uplink profile and the IP Pool we created earlier. Finally we can set the mapping between the uplinks.

vCenter View

Creating Transport Node profile

Configure NSX on hosts

With our Transport Node profile we can go ahead and configure our ESXi hosts for NSX

Configure cluster for NSX


Select profile

After selecting the profile NSX Manager will go ahead and configure our ESXi hosts.

Hosts configuring

After a few minutes our hosts should be configured and ready for NSX


Hosts configured

Trunk segment

Next up is to create our Edge VMs which we will need for our Gateways and Services (NAT, DHCP, Load Balancer).

But before we deploy those we’ll have to create a segment for the uplink of the Edge VMs. This will be a Trunk segment which we create in NSX. Initially I created a Trunk portgroup on the VDS in vSphere, but that doesn’t work. The Trunk needs to be configured as a logical segment in NSX-T when using the same VLAN for both the Hypervisor TEPs and the Edge VM TEPs


Trunk segment
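For reference, such a trunk segment could also be created via the NSX-T Policy API; a minimal sketch, where the manager FQDN and the VLAN transport zone ID are placeholders (PowerShell 7+ assumed for the parameters used):

# Minimal sketch: VLAN trunk segment (all VLANs) in the VLAN transport zone for the Edge uplinks
$nsx  = "nsxmanager.lab.local"   # placeholder NSX Manager FQDN
$cred = Get-Credential
$body = @{
    display_name        = "seg-edge-trunk"
    transport_zone_path = "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-id>"  # placeholder TZ id
    vlan_ids            = @("0-4094")
} | ConvertTo-Json -Depth 5
Invoke-RestMethod -Method Patch -Uri "https://$nsx/policy/api/v1/infra/segments/seg-edge-trunk" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck `
    -ContentType "application/json" -Body $body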

Edge VM

Now we can deploy our Edge VM(s). I'm using Medium sized VMs in my environment. Note that the Edge VMs are not strictly necessary for the test we'll perform later on with connecting two VMs, but if we want to use some services later on, like DHCP, load balancing and so on, we'll need them.

Deploy edge VM


Note the NSX config, where we set the switch name, the Transport Zones we created, the Uplink profile and the IP pool, and finally use the newly created Trunk segment for the Edge uplink.

NSX Edge config

Edge cluster

We’ll also create an Edge cluster and add the Edge VM to it

Edge cluster

Summary

Wow, this was a lot of configuring, but that was also the whole point of doing this blog post. Stuff like this is learnt best by getting your hands dirty and doing some actual work. And I learn even better when I'm writing and documenting it as well.

In the next blog post we’ll test the fabric to see if what we’ve done is working. We’ll also try to get some external connectivity to our environment.

Hopefully this post can help someone, if not it has at least helped me.

Thanks for reading!

Special thanks to Rudi Martinsen for his blog post: https://rudimartinsen.com/2021/06/29/nsx-t-31-homelab/

Windows Server Hybrid Administrator Associate certification

Microsoft announced the addition of the Windows Server Hybrid Administrator Associate certification to its portfolio, to be released in early December 2021. This new certification validates the skills of administrators working in hybrid environments.

Administrators in this role support their teams and organizations using Windows Server, both in the cloud and on-premises.

The Windows Server Hybrid Administrator Associate certification

You’ll need to pass two exams to earn this certification. These exams, Exam AZ-800 (beta): Administering Windows Server Hybrid Core Infrastructure and Exam AZ-801 (beta): Configuring Windows Server Hybrid Advanced Services, will be available in early December 2021.

Removal of SD card/USB as a standalone boot device option after vSphere 7.x

On the 16th of September, VMware published a KB article announcing the removal of SD card/USB as a standalone boot device option.

USB/SD is not the right choice going forward! Why? The volume of reads/writes to and from the OS-DATA partition continues to increase with every release, which means that lower-grade devices will simply wear out faster.

Options for future versions of ESXi ( after 7.x)

When you buy new hardware, make sure to have a proper persistent storage device, like a Dell BOSS card or HPE OS boot devices.

Options for 7.x

What are the fixes that will help with the SD/USB issues that customers are seeing?

  • https://kb.vmware.com/s/article/83963
  • https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u2c-release-notes.html (look for the USB device and storage stack issues fixed in this release)

SD Card Boot issue Solution in ESXi 7.x

Issue: The host goes into an unresponsive state due to "Bootbank cannot be found at path '/bootbank'" and the boot device is in an APD state.

This issue is seen when the boot device fails to respond and enters the APD (All Paths Down) state. In some cases the host goes into a non-responsive state and shows as disconnected from vCenter.
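To see which device a host actually booted from (and spot hosts that still boot from SD/USB), a small PowerCLI/esxcli sketch like the one below can help; it assumes the "esxcli system boot device get" namespace is available on your ESXi build:

# Sketch: show boot device information for every connected host via esxcli
foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    Write-Host $vmhost.Name
    $esxcli.system.boot.device.get.Invoke()
}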

As of 7.0 Update 1, the format of the ESX-OSData boot data partition has been changed. Instead of using FAT it is using a new format called VMFS-L. This new format allows much more and faster I/O to the partition. The level of read and write traffic is overwhelming and corrupting many less capable SD cards.

The action plan for future resolution would be to replace the SD card(s) with a capable device/disk, per the best practices mentioned in the Installation Guide.

The version 7.0 Update 2 VMware ESXi Installation and Setup Guide, page 19, specifically says “As even read-only workloads can cause problems on low-end flash devices, you should install ESXi only on high-endurance flash media“.

https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-esxi-702-installation-setup-guide.pdf

You can also refer to the below KB:

Reference: https://kb.vmware.com/s/article/83376?lang=en_US

Resolution

VMware engineering has a fix that will be in the next release, planned for July 2021.

https://communities.vmware.com/t5/ESXi-Discussions/SD-Boot-issue-Solution-in-7-x/m-p/2852027

https://www.provirtualzone.com/vsphere-7-update-2-loses-connection-with-sd-cards-workaround/

Upcoming change (March 2020) – Microsoft to disable use of unsigned LDAP port 389

In March 2020, Microsoft is going to release an update which will essentially disable the use of unsigned LDAP by default. This means that you can no longer use bindings or services which bind to domain controllers over unsigned LDAP on port 389. You can either use LDAPS over port 636 or StartTLS on port 389, but both still require that you add a certificate to your domain controllers. This hardening can be done manually until the release of the security update that will enable these settings by default.

How to add signed LDAPS to your domain controllers

You can read more about the specific change here –> https://support.microsoft.com/en-us/help/4520412/2020-ldap-channel-binding-and-ldap-signing-requirement-for-windows you can also read more here –> https://techcommunity.microsoft.com/t5/core-infrastructure-and-security/ldap-channel-binding-and-ldap-signing-requirements-update-now/ba-p/921536

After the change the following features will be supported against Active Directory.


How will this affect my environment?

Clients that rely on unsigned SASL (Negotiate, Kerberos, NTLM, or Digest) LDAP binds or on LDAP simple binds over a non-SSL/TLS connection will stop working after you make this configuration change. This also applies to third-party solutions which rely on LDAP, such as Citrix NetScaler/ADC or other network appliances, vaults and other authentication mechanisms. If you haven't fixed this, it will stop working. This update applies to all versions:

Windows Server 2008 SP2,
Windows 7 SP1,
Windows Server 2008 R2 SP1,
Windows Server 2012,
Windows 8.1,
Windows Server 2012 R2,
Windows 10 1507,
Windows Server 2016,
Windows 10 1607,
Windows 10 1703,
Windows 10 1709,
Windows 10 1803,
Windows 10 1809,
Windows Server 2019,
Windows 10 1903,
Windows 10 1909

How to check if something is using unsigned LDAP?

If the directory server is configured to reject unsigned SASL LDAP binds or LDAP simple binds over a non-SSL/TLS connection, the directory server will log a summary event (event ID 2888) once every 24 hours when such bind attempts occur. Microsoft advises administrators to enable LDAP channel binding and LDAP signing as soon as possible before March 2020 to find and fix any operating systems, applications or intermediate device compatibility issues in their environment.

You can also use this article to troubleshoot https://docs.microsoft.com/en-us/archive/blogs/russellt/identifying-clear-text-ldap-binds-to-your-dcs
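On the domain controllers themselves you can query the Directory Service event log for these events with a small PowerShell sketch (event ID 2889, which lists individual clients, typically only appears when LDAP interface diagnostic logging has been raised):

# Look for unsigned/simple LDAP bind summary (2888) and per-client (2889) events on a domain controller
Get-WinEvent -FilterHashtable @{ LogName = 'Directory Service'; Id = 2888, 2889 } -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Id, Message |
    Format-List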

Credits: https://msandbu.org/upcoming-change-microsoft-to-disable-use-of-unsigned-ldap-port-389/

Ultimate Cross vCenter Script

Last year I attended the Dutch VMUG (NLVMUG), where I followed a session from Michael Wilmsen called: Migrate your datacenter without downtime.

I also had to move a lot of VMs from different datacenters to other datacenters.
I used the script from Michael Wilmsen to move the VMs, but along the way I ran into some problems with this script. So I began tweaking and tweaking and tweaking it to create, for me, the ultimate Cross vCenter PowerCLI script.

Cool features:
– Info through WhatsApp (not enabled by default)
– Dryrun (test run)
– Logging
– Selection through GUI
– Multiple NIC support, up to a maximum of 4
– Datastore and Host selection based on free space and free memory
– Check whether the Destination Host or Datastore is in maintenance mode
– Check whether the Destination Datastore exists in the Destination Cluster

MoveVM.ps1:
#Filename: MoveVM.ps1
#Author: M. Wilmsen / W. Vissers
#Source: http://virtual-hike.com/nlvmug-2018/
#Version: 2.0
#Date: 21-10-2018
#ChangeLog:
# V0.9 - M. Wilmsen First Version
# V1.0 - Fixed Multiple Nics to maximum of 4 nics
#      - Logfile name VM name
# V1.1 - Destination Cluster not the first Host
# V1.2 - Selected Destination host based on memory used
# V1.3 - Fixed folder location and VirtualPortGroup
# V1.4 - Fixed Datastore in Maintenance
# V1.5 - Using Get-VICredentialStoreItem + Logpath fixed
# V1.6 - Fixed logging hours in 24-hour format
# V1.7 - Fixed using DatastoreCluster name based on Cluster name!
# V1.8 - Check if Destination has the same datastore
#      - Ask now for input
#      - VM selection with VMhost
#      - Fixed Ping Check
# v1.9 - Added check if Destination Datastore exists in Destination Cluster
# v2.0 - Fixed check if Destination Datastore exists in Destination Cluster
<#
.SYNOPSIS
Script to migrate a virtual machine
.DESCRIPTION
Script to migrate compute and storage from cluster to cluster. Log will be in current dir [VM]-[-timestamp].log

.EXAMPLE
MoveVM.ps1
#>
################################## INIT #################################################
#Set WebOperation timeout
# set-PowerCLIConfiguration -WebOperationTimeoutSeconds 3600
#Define Global variables
$location = "D:\xmovewhattsapp"
$LogPath = ".\"
$DataStoreClusterPrefix = "SAN-"
$SourceVC = Read-Host "Give Source vCenter"
$DestinationVC = Read-Host "Give Destination vCenter"
$DRSRecommendation = $true
$Dryrun = $false
$SendWhatsApp = $false
$WhatsAppNumbers = "0123456789"
$WhatsAppGroup = "Namehireyourwhattsgroup"
$instanceId = "23" #change this line
$clientId = "demo@demo.nl" #change this line
$clientSecret = "Puthiersecretid" #change this line
################################## PASSWORD STORE ##############################################
#Username
# Check if credentials exist in credential store if not ask for credentials and put them in credential store

If ((Get-VICredentialStoreItem).host -notcontains $SourceVC) {New-VICredentialStoreItem -Host $SourceVC -User $env:USERNAME -Password ((get-credential).GetNetworkCredential().Password)}
If ((Get-VICredentialStoreItem).host -notcontains $DestinationVC) {New-VICredentialStoreItem -Host $DestinationVC -User $env:USERNAME -Password ((get-credential).GetNetworkCredential().Password)}

# Remove-VICredentialStoreItem * -Confirm:$false

################################## END INIT #################################################
################################## FUNCTIONS #################################################
#Define log function
Function LogWrite
{
    Param ([string]$logstring)
    #Add logtime to entry
    $LogTime = Get-Date -Format "MM-dd-yyyy_HH-mm-ss"
    $logstring = $LogTime + " : " + $logstring
    #Write logstring
    Add-content $LogFile -value $logstring
    Write-Host $logstring
}
#Define SendWhatsApp function
Function SendWhatsApp
{
   Param ([string] $message)

   if ( $SendWhatsApp ) {
     $LogTime = Get-Date -Format "MM-dd-yyyy_HH-mm-ss"
     $message = $logtime + " : " + $message

     foreach ( $number in $WhatsAppNumbers )
     {
        $jsonObj = @{'group_admin'=$number;
                     'group_name'=$WhatsAppGroup;
                     'message'=$message;}
       Try {
         $res = Invoke-WebRequest -Uri "http://api.whatsmate.net/v2/whatsapp/group/message/$instanceId" `
                           -Method Post `
                           -Headers @{"X-WM-CLIENT-ID"=$clientId; "X-WM-CLIENT-SECRET"=$clientSecret;} `
                           -Body (ConvertTo-Json $jsonObj)
         LogWrite ("WhatsMate Status Code: " + $res.StatusCode)
         LogWrite $res.Content
       }
       Catch {
         $result = $_.Exception.Response.GetResponseStream()
         $reader = New-Object System.IO.StreamReader($result)
         $reader.BaseStream.Position = 0
         $reader.DiscardBufferedData()
         $responseBody = $reader.ReadToEnd();

         Write-host "Status Code: " $_.Exception.Response.StatusCode
         Write-host $message
         }
     }
   }
}

function Get-VmSize($vm)
{
     #Initialize variables
     $VmDirs =@()
     $VmSize = 0
     $searchSpec = New-Object VMware.Vim.HostDatastoreBrowserSearchSpec
     $searchSpec.details = New-Object VMware.Vim.FileQueryFlags
     $searchSpec.details.fileSize = $TRUE
     Get-View -VIObject $vm | % {
         #Create an array with the vm's directories
         $VmDirs += $_.Config.Files.VmPathName.split("/")[0]
         $VmDirs += $_.Config.Files.SnapshotDirectory.split("/")[0]
         $VmDirs += $_.Config.Files.SuspendDirectory.split("/")[0]
         $VmDirs += $_.Config.Files.LogDirectory.split("/")[0]
         #Add directories of the vm's virtual disk files
         foreach ($disk in $_.Layout.Disk) {
             foreach ($diskfile in $disk.diskfile){
                 $VmDirs += $diskfile.split("/")[0]
             }
         }
         #Only take unique array items
         $VmDirs = $VmDirs | Sort | Get-Unique
         foreach ($dir in $VmDirs){
             $ds = Get-Datastore ($dir.split("[")[1]).split("]")[0]
             $dsb = Get-View (($ds | get-view).Browser)
             $taskMoRef  = $dsb.SearchDatastoreSubFolders_Task($dir,$searchSpec)
             $task = Get-View $taskMoRef
             while($task.Info.State -eq "running" -or $task.Info.State -eq "queued"){$task = Get-View $taskMoRef }
             foreach ($result in $task.Info.Result){
                 foreach ($file in $result.File){
                     $VmSize += $file.FileSize
                 }
             }
         }
     }
     return $VmSize
}
################################## END FUNCTIONS #################################################
#Login to vCenter servers
if (($global:DefaultVIServers).Name -notcontains $SourceVC -or ($global:DefaultVIServers).Name -notcontains $DestinationVC) {

#SourceVC
$ConnectVC = Connect-VIServer $SourceVC
$Message = "Connecting to " + $ConnectVC + " as " + $env:USERNAME
#Logwrite $Message
#DestinationVC
$ConnectVC = Connect-VIServer $DestinationVC
$Message = "Connecting to " + $ConnectVC + " as " + $env:USERNAME
#Logwrite $Message

# Disconnect-VIServer * -Confirm:$false

}
Set-Location $location

$cluster=Get-Cluster -Server $SourceVC | Out-GridView -OutputMode Single -Title "Select Source Cluster"
$vmtomigrate =Get-Cluster $cluster -Server $SourceVC | Get-VM | Out-GridView -OutputMode Single -Title "Select VM"
$DestinationCluster = Get-Cluster -Server $DestinationVC | Out-GridView -OutputMode Single -Title "Select Destination Cluster"
$vmfolder=Get-folder -Server $DestinationVC | Out-GridView -OutputMode Single -Title "Select Folder"

#Main Script

     #Set $MigError to false before migration
     $MigError = $false
     #Get VM variables
     $vm = get-vm $vmtomigrate

     #Define LogFile with time stamp
     $LogTime = Get-Date -Format "MM-dd-yyyy_HH-mm-ss"

     if([IO.Directory]::Exists($LogPath))
     {
     #Do Nothing!!
     }
     else
     {
     New-Item -ItemType directory -Path $LogPath
     }
     $LogFile = $LogPath+$vm+"-"+$LogTime+".log"

     #Log the user running the script
     Logwrite $env:USERNAME

     #Get VM info
     $VMHDDSize = Get-VmSize($vm)
     $VMHDDSize = [Math]::Round(($VMHDDSize / 1GB),2)

     Logwrite "Start Virtual Machine Move"
     #If WhatsApp make notice
     if ( $SendWhatsApp ) { LogWrite "Notifications will be sent using WhatsApp to WhatsApp Group: $WhatsAppGroup" }
     #If DryRun make notice
     if ( $Dryrun ) {
     Logwrite "Start move virtual machines $vm Disksize $VMHDDSize GB (DryRun)"
     SendWhatsApp "Start move virtual machines $vm Disksize $VMHDDSize GB (DryRun)"
     }
     else {
     Logwrite "Start move virtual machines $vm Disksize $VMHDDSize GB"
     SendWhatsApp "Start move virtual machines $vm Disksize $VMHDDSize GB"
     }
     $SourceCluster = get-vm $vm | Get-Cluster | select name
     $vmip = $vm | Select @{N="IP Address";E={@($_.guest.IPAddress[0])}}
     $vmip = $vmip."IP Address"
     $VMHDDSize = Get-VmSize($vm)
     $VMHDDSize = [Math]::Round(($VMHDDSize / 1GB),2)
     $NetworkAdapter = Get-NetworkAdapter -VM $vm -Server $SourceVC
     $SourceVMPortGroup = Get-NetworkAdapter -vm $vm | Select NetworkName
     $switchname = $DestinationCluster

     $Datastore = Get-VM $vm | Get-DataStore -Server $sourceVC | Select @{N="Name";E={@($_.Name)}}
     $Datastore = $Datastore.Name
     $DatastoreExistinOthervCenter = Get-Cluster $DestinationCluster | Get-DataStore -Server $DestinationVC | ? {$_.Name -like "*$Datastore*"}

     if ($DatastoreExistinOthervCenter )
      {
      LogWrite "Datastore exists in $DestinationCluster in destination vCenter $DestinationVC"
      $destinationDatastore = $DatastoreExistinOthervCenter }
      Else
      {
      LogWrite "Datastore does not exist in $DestinationCluster in destination vCenter $DestinationVC"
      # Select the datastore with the most free space that is not in maintenance mode
      $DatastoreCluster = "$DataStoreClusterPrefix"+"$DestinationCluster"
      $destinationDatastore = Get-DatastoreCluster $DatastoreCluster | Get-Datastore | Where {$_.State -ne "Maintenance"} | Sort-Object -Property FreeSpaceGB -Descending | Select-Object -First 1
      }

     $destinationDatastoreFreeSpace = $destinationDatastore | Select Name,@{N="FreeSpace";E={$_.ExtensionData.Summary.FreeSpace}}
     $destinationDatastoreFreeSpace = [Math]::Round(($destinationDatastoreFreeSpace."FreeSpace" / 1GB),2)

     # Select the host with the least used memory

     $DestinationHost = Get-Cluster -Name $DestinationCluster -Server $DestinationVC | Get-VMhost -State Connected | Sort-Object -Property MemoryUsageGB | Select-Object -First 1

     # Change here if you have a VM with multiple network cards (remove the # for the multiple nics)

     if ($NetworkAdapter.Count -eq 1) {
         $DestinationVMPortgroup =@()
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic1"
      }
     elseif ($NetworkAdapter.Count -eq 2) {
         $DestinationVMPortgroup =@()
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic1"
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic2"
     }
     elseif ($NetworkAdapter.Count -eq 3) {
         $DestinationVMPortgroup =@()
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic1"
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic2"
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic3"
     }
     elseif ($NetworkAdapter.Count -eq 4) {
         $DestinationVMPortgroup =@()
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic1"
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic2"
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic3"
         $DestinationVMPortgroup += Get-VirtualPortGroup -Server $DestinationVC -Vmhost $DestinationHost | Out-GridView -OutputMode Single -Title "Select Nic4"
     }

     LogWrite "Start move: $vm"
     Logwrite "VM IP: $vmip"
     Logwrite "VM Disk Used (GB): $VMHDDSize"
     Logwrite "VM Folder: $vmfolder"
     Logwrite "Source vCenter: $SourceVC"
     Logwrite "VM Source Cluster: $SourceCluster"
     Logwrite "Destination vCenter: $DestinationVC"
     Logwrite "VM Destination Cluster: $DestinationCluster"
     Logwrite "Destination host: $DestinationHost"
     LogWrite "VM Source PortGroup: $SourceVMPortGroup"
     LogWrite "VM Destination Portgroup: $DestinationVMPortgroup"
     Logwrite "VM Destination Datastore: $destinationDatastore"
     LogWrite "Destination Datastore FreeSpace GB: $destinationDatastoreFreeSpace"
     if ( $Dryrun ) {
       $FreespaceAfterMigration = $destinationDatastoreFreeSpace - $VMHDDSize
       if ( $FreespaceAfterMigration -lt 0 ) { Logwrite "ERROR: Datastore $destinationDatastore does not have sufficient free space! Virtual Machine needs $VMHDDSize GB. Only $destinationDatastoreFreeSpace GB available." }
       else { Logwrite "Virtual Machine will fit on datastore $destinationDatastore. Free space after migration is: $FreespaceAfterMigration GB" }
     }
     #Test if VM responds to ping
     if ($vmip -eq $null) {
     LogWrite "Virtual Machine IP address not known"
     Logwrite "No ping check will be performed after moving the Virtual Machine"
     }
     else {
         Test-Connection -comp $vmip -quiet
         LogWrite "Virtual Machine $vm responds to ping before being moved. Virtual machine will be checked after being moved"
         $PingVM = $true
     }
      
     #if ( $VMHDDSize -eq
     if ( -NOT $Dryrun) {
       #Migrate VM to cluster
       LogWrite "Move $vm to vCenter $DestinationVC and datastore $DestinationDatastore"
       Try {
         $Result = Move-VM -VM $vm `
                            -Destination $DestinationHost `
                            -Datastore $DestinationDatastore `
                            -NetworkAdapter $NetworkAdapter `
                            -PortGroup $DestinationVMPortgroup `
                            -ErrorAction Stop
           }
       Catch {
         $ErrorMessage = $_.Exception.Message
         LogWrite "ERROR: Move of $vm to cluster $DestinationHost failed!!!"
         Logwrite "ERROR: Move Status Code: $ErrorMessage"
         SendWhatsApp "ERROR: Move of $vm failed!!! $ErrorMessage"
         $MigError = $true
       }
       #Migrate VM to folder
       LogWrite "Move $vm to folder $vmfolder"
       Try {
         $VMtemp = get-vm $vm
         $Result = Move-VM -VM $vmtemp -InventoryLocation $vmfolder -ErrorAction Stop
           }
       Catch {
         $ErrorMessage = $_.Exception.Message
         LogWrite "ERROR: Move of $vm to folder $vmfolder failed!!!"
         Logwrite "ERROR: Move Status Code: $ErrorMessage"
         SendWhatsApp "ERROR: Move of $vm failed!!! $ErrorMessage"
         $MigError = $true
         }
       }
    
     #Test if VM is running on destination cluster
     if ( -NOT $MigError -AND -NOT $Dryrun ) {
       LogWrite "Check $vm is registered in $DestinationVC"
       try {
         $CheckVM = get-vm -name $vm -server $DestinationVC -ErrorAction Stop

         if ( $CheckVM ) {
           Logwrite "$vm registered in $DestinationVC"
         }
         else {
           Logwrite "ERROR: $vm not found in $DestinationVC"
         }
       }
       catch {
         $ErrorMessage = $_.Exception.Message
         Logwrite "ERROR: $vm not found in $DestinationVC"
         Logwrite "ERROR: $ErrorMessage"
         SendWhatsApp "ERROR move: $vm not found in $DestinationVC"
       }
     }
     #Test if VM responds to ping, if $PingVM = $True
     if ($PingVM) {
       if (Test-Connection -comp $vmip -quiet) {
         LogWrite "Virtual Machine $vm responds to ping after move"
         SendWhatsApp "Virtual Machine $vm responds to ping after move"
       }
     }
     sleep 1
     SendWhatsApp "Finished move action: $vm from $SourceVC to $DestinationVC"
     Logwrite "Finished move action: $vm from $SourceVC to $DestinationVC"

if ($DRSRecommendation)
  {
   Get-DrsRecommendation -Cluster $DestinationCluster -Server $DestinationVC | Apply-DrsRecommendation
   Logwrite "DRS Recommendation applied"
  }
Else
  {
  Logwrite "No DRS Recommendation applied"
  Write-Host "No DRS Recommendation applied"
  }


#Disconnect from vCenter servers
Logwrite "Disconnect from vCenter servers $SourceVC $DestinationVC"
Disconnect-viserver $SourceVC -Confirm:$false
Disconnect-viserver $DestinationVC -Confirm:$false
Logwrite "Finished moving virtual machines, exiting....."
SendWhatsApp "Finished moving virtual machines, exiting....."

VMware vSphere PowerCLI 11.0

VMware vSphere PowerCLI 11.0 New Features

VMware vSphere PowerCLI 11.0 adds support for the latest updates and releases of VMware products. The following features have been added:

  • New Security module
  • vSphere 6.7 Update 1
  • NSX-T 2.3
  • Horizon View 7.6
  • vCloud Director 9.5
  • Host Profiles – new cmdlets for interacting with
  • New Storage Module updates
  • NSX-T in VMware Cloud on AWS
  • Cloud module multiplatform support
  • Get-ErrorReport cmdlet has been updated
  • PCloud module has been removed
  • HA module has been removed

Now we will go through the above-mentioned new features to see what functionality they bring to PowerCLI 11.0.

What is the PowerCLI 11.0 New Security Module?

The new security module brings more powerful automation features to PowerCLI 11.0. The available new cmdlets include the following:

  • Get-SecurityInfo
  • Get-VTpm
  • Get-VTpmCertificate
  • Get-VTpmCSR
  • New-VTpm
  • Remove-VTpm
  • Set-VTpm
  • Unlock-VM

The New-VM cmdlet has also been enhanced with the security module functionality and includes parameters like KmsCluster, StoragePolicy and SkipHardDisks which can be used while creating new virtual machines with PowerCLI. In addition to that, the Set-VM, Set-VMHost, Set-HardDisk and New-HardDisk cmdlets have been enhanced as well.
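As an illustration of those new parameters, a hedged sketch of creating a VM with an encryption storage policy and KMS cluster might look like the following; the KMS cluster name, policy name, VM name and host selection are all placeholders:

# Sketch only: use the PowerCLI 11.0 security-related parameters on New-VM (names are placeholders)
$kms    = Get-KmsCluster -Name "kms-cluster-01"
$policy = Get-SpbmStoragePolicy -Name "VM Encryption Policy"
New-VM -Name "secure-vm01" -VMHost (Get-VMHost | Select-Object -First 1) `
       -KmsCluster $kms -StoragePolicy $policy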

Host Profile Additions

There are a few additions to the VMware.VimAutomation.Core module that will make managing host profiles from PowerCLI easier:

  • Get-VMHostProfileUserConfiguration
  • Set-VMHostProfileUserConfiguration
  • Get-VMHostProfileStorageDeviceConfiguration
  • Set-VMHostProfileStorageDeviceConfiguration
  • Get-VMHostProfileImageCacheConfiguration
  • Set-VMHostProfileImageCacheConfiguration
  • Get-VMHostProfileVmPortGroupConfiguration
  • Set-VMHostProfileVmPortGroupConfiguration

Storage Module Updates

The new Storage Module updates are specifically for VMware vSAN. The updates add predefined time ranges when using Get-VsanStat. In addition, Get-VsanDisk returns additional new properties, including capacity, used percentage and reserved percentage. The following cmdlets have been added to automate vSAN:

  • Get-VsanObject
  • Get-VsanComponent
  • Get-VsanEvacuationPlan – provides information regarding bringing a host into maintenance mode and the impact of the operation on the data, movement, etc
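A quick way to see the new disk properties in your own environment is simply to list everything the cmdlet returns; a minimal sketch (the host name is a placeholder):

# Sketch only: dump all properties of the vSAN disks on one host, including the new capacity/usage fields
Get-VsanDisk -VMHost (Get-VMHost -Name "esx01.lab.local") | Format-List *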

Additionally, the following modules have been removed:

  • PCloud module
  • HA module

Download now and start using

Update-Module VMware.PowerCLI
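If PowerCLI is not installed yet, it can be pulled from the PowerShell Gallery first; a minimal sketch:

# Install PowerCLI from the PowerShell Gallery for the current user, then update it to the latest version
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
Update-Module -Name VMware.PowerCLI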
