Migrating ESXi Hosts with a vDS between vCenter servers using PowerShell

Most vSphere admins are probably in the process of moving over to vSphere 6.x or planning for it, as end of general support for vSphere 5.5 is approaching later this year.
I wrote about migrating from the Windows vCenter 5.5 to the VCSA 6.5 in an earlier post here, however that is not always an option. Some may choose to go with a greenfield deployment, which leaves you having to move your hosts over to the new environment.
There are a couple of different scenarios here, but in this post I am going to focus on what I believe to be a common one: an existing vSphere 5.5 environment utilizing Distributed Switches, and a greenfield vSphere 6.5 environment which we want to move our hosts and VMs over to.

Now, there are a few posts out there that do this quite easily, but as they point out it's not an officially supported method by VMware, KB here. The supported steps are outlined there, but as you can quickly see, that is a lot of clicking when you have a moderate to large number of hosts. Having to do this for a customer with a few hundred hosts, I wasn't even going to entertain the idea of doing it all manually, so I came up with a couple of scripts (with the help of some code that was published by VMware community members).
I split up the scripts into two parts:

Part 1 – Migrating the host and VMs from the vDS to a newly created vSS
Part 2 – Moving the host between vCenter servers and migrating it to a vDS

I have made the assumption that the vSphere 6.5 environment already has the target constructs created (i.e. clusters, vDS, folders, etc.), and that the management kernel port already resides on a separate standard switch.

I have not included any automation for host profiles and autodeploy, but I plan to put out another post which covers the environments which utilize those features.

Enough rambling, here are the scripts:
Part 1 here
Part 2 here

Let's take a look and break down the code:

Part 1:

First we have to define a couple of variables which outline what we want to target:

#The vCenter server to connect to
$srcvCenter = "VC01.karpslab.local"
#The target Host
$vmHost = "ESXi-01.karpslab.local"
#Name of the first physical adapter we want to target
$pNIC1 = "vmnic2"
#Name of the second physical adapter we want to target
$pNIC2 = "vmnic3"
#Name of the distributed switch we want to move away from
$vDS = "vDS-Cluster-01"
$vSS = "MigSwitch" #Name of the temporary vSwitch that we create
#name of the VMK1 port group
$vmk1pg = "vDS-Cluster-01-VL10-NFS"
#name of the VMK2 port group
$vmk2pg = "vDS-Cluster-01-VL20-vMotion1"
#name of the VMK3 port group
$vmk3pg = "vDS-Cluster-01-VL20-vMotion2"

It is really important that each VMkernel port number lines up with the correct port group here, otherwise you will run into problems. If you have more VMkernel ports, you can quite easily add extra variables here and in the section where we migrate them over later in the code.
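For example, a fourth VMkernel port would only need one extra variable here (the port group name below is a hypothetical example), plus matching lookup lines and array entries in the VMkernel migration section further down:

#name of the VMK4 port group (hypothetical example)
$vmk4pg = "vDS-Cluster-01-VL30-FT"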

Next the script connects us to the defined vCenter server, after which we grab the details of our vDS and its port groups and place them into variables.

$vDSobj = Get-VDSwitch -Name $vDS
$vDSpg = $vDSObj | Get-VDPortgroup

Now we create the new standard vSwitch, using the MTU value we got from our vDS:

$vSSObj = New-VirtualSwitch -VMHost $vmHost -Name $vss -mtu $vDSObj.mtu

Here we step through each port group in the $vDSpg variable, which contains the port groups that exist on the vDS. First we get the VLAN ID of the port group, then check if it is the uplink port group that is created with a vDS; if so, we skip it and move on. Otherwise, if it is a regular port group, we create the respective port group on the vSS we created earlier.

foreach ($pg in $vDSpg){
#Get port group VLAN ID
$pgVLAN = $pg.Extensiondata.Config.DefaultPortConfig.Vlan.VlanID
#Check if it is the uplink pg and skip it if so
if ($pg.IsUplink){
Write-Host "Skipping Uplink PortGroup" -ForegroundColor Yellow
continue
}
#If it is not the uplink pg, create it on the vSS
New-VirtualPortGroup -Name $pg.Name -VirtualSwitch $vSSObj -VLanId $pgVLAN | Out-Null
Write-Host "Created PortGroup $pg with VLAN ID $pgVLAN" -ForegroundColor Cyan
}

Now that we have the vSS configured the way we want it, we can begin to migrate the host from the vDS. Firstly we need to move one of the interfaces off the vDS and onto the vSS. In this scenario this will be “vmnic2”.

$pNIC1Obj = Get-VMHostNetworkAdapter -VMhost $vmhost -Physical -name $pNIC1 #Gets the NIC object
$pNIC1Obj | Remove-VDSwitchPhysicalNetworkAdapter -Confirm:$false #Removes adapter from vDS
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vSSObj -VMHostPhysicalNic $pnic1Obj -Confirm:$false #adds adapter to vSS

Within the scripts I have placed blocks of code that ask the user whether to continue to the next step. This has been put in as there are parts where you might want to validate that things are working as expected before proceeding. For example, if one of your interfaces has the incorrect VLAN configuration on the underlying switch port, you won't have much joy when you cut over a VM's network or an NFS kernel port. It does slow things down a bit, but if you're working on a production environment I think it's good practice. No-one wants to be the guy who brings down a business critical application because of a messed up VLAN configuration or MTU size.

The while loop validates that the entry is a yes or a no. If no, the script exits.

$continue = Read-Host "Would you like to continue to migrating VM networks (Y/N)?"
while ("Y","N" -notcontains $continue){
	$continue = Read-Host "Please enter Y or N"
}
if ($continue -eq "N"){
Write-Host "Exiting Script" -ForegroundColor Red
exit
}elseif ($continue -eq "Y"){
Write-Host "Continuing to VM Network Migration" -ForegroundColor Green
}

So now our host should have a new vSS with one interface attached. Next we need to target the virtual machines and ensure they start communicating over the vSS and the uplink we associated with it. Before we do that, it's a good idea to set the cluster DRS mode to manual. This is done with the below snippet:

#Set Cluster DRS Setting to Manual
$VMhostObj = Get-VMHost $VMhost
$ClusterObj = Get-Cluster -Name $VMhostObj.Parent
Write-Host "Setting DRS for Cluster $clusterObj to Manual" -ForegroundColor Cyan
Set-Cluster -Cluster $clusterObj -DrsAutomationLevel Manual -Confirm:$false | Out-Null

Let’s move on to re-configuring the VMs to the vSS ports:

We need to get a list of the virtual machines that are currently running on the host and put them in a variable.

$VMlist = $VMhostObj | get-VM

We now loop through each vm in the list and do two things:

  1. Get the details of the current network adapter attached to the VM
  2. Set the VMs network adapter to the port group on the vSS. We do this by getting the portgroup object from the standard switch, which is the same name as the port group of the vDS.

The below code will prompt you when modifying each VM, but this can be avoided by adding -Confirm:$false at the end of the command. I suggest using a tool such as PingInfoView to validate that the VM networks cut over properly, that way you can get on top of any issues quickly. I have not tested this on virtual machines with multiple interfaces (yet).

foreach ($VM in $VMlist){
$VMnic = Get-NetworkAdapter $VM
$VMnic | Set-NetworkAdapter -PortGroup (Get-VirtualPortGroup -VMHost $VMHost -Standard -Name $VMnic.NetworkName)
Write-Host "Migrated $VM network to $vSS on $VMhost" -ForegroundColor Cyan
}

The next step is to cut over the VMkernel ports, along with the last adapter that remains on the vDS. I couldn't find a way to do them one at a time as you can when moving in the other direction (vSS to vDS), but I did come across this post from @Lamw here which cuts them all over along with the adapter. I have modified it slightly for this scenario. This is also the section where you will need to add or remove VMkernel interfaces to suit your environment:

#Get pNIC and switch objects
$pNIC2Obj = Get-VMHostNetworkAdapter -VMhost $vmhost -Physical -name $pNIC2
$vSSObj = Get-VirtualSwitch -VMhost $VMhost -Name $vSS

#Get VMK ports to migrate
$vmk1 = Get-VMHostNetworkAdapter -VMhost $vmhost -VMKernel -name vmk1
$vmk2 = Get-VMHostNetworkAdapter -VMhost $vmhost -VMKernel -name vmk2
$vmk3 = Get-VMHostNetworkAdapter -VMhost $vmhost -VMKernel -name vmk3

#get VMK port groups to migrate to
$vmk1pgObj = Get-virtualportgroup -virtualswitch $vssObj -name $vmk1pg
$vmk2pgObj = Get-virtualportgroup -virtualswitch $vssObj -name $vmk2pg
$vmk3pgObj = Get-virtualportgroup -virtualswitch $vssObj -name $vmk3pg

#create array of VMKports and VMKPortGroups
$vmkArray =@($vmk1,$vmk2,$vmk3)
$vmkpgArray =@($vmk1pgObj,$vmk2pgObj,$vmk3pgObj)

#Move physical nic and VMK ports from vDS to vSS
Write-Host "Migrating $vmhost from $vds to $vss" -ForegroundColor Cyan
Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vssObj -VMHostPhysicalNic $pNIC2Obj -VMHostVirtualNic $vmkarray -VirtualNicPortgroup $vmkpgarray  -Confirm:$false

Finally, we remove the host from the vDS:

$vdsObj | Remove-VDSwitchVMHost -VMHost $vmhost -Confirm:$false 

If all goes to plan, we will now have the target host entirely on the vSS, with all the VMs still running without anyone noticing.
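As an optional sanity check before moving on (a hedged sketch, not part of the original scripts), you can confirm where the kernel ports and uplinks now live:

#Confirm nothing for this host is left behind on the vDS
Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel | Select-Object Name, PortGroupName
Get-VirtualSwitch -VMHost $vmhost | Select-Object Name, Nic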

Now let's break down the script that does Part 2:

Lines 30 – 143 contain two functions written by http://kunaludapi.blogspot.com.au. The first function, Get-VMFolderPath, does exactly that; this is so we know which folder each VM was in prior to moving it. The Move-VMtoFolderPath function takes the output from the previous function and moves the specified list of VMs to the appropriate folders. As mentioned earlier, it is assumed that the folder structure is already created. There are a number of scripts out there that can help you export / import folder structures in vCenter.
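To give you an idea of what Get-VMFolderPath is doing, here is a minimal sketch of the concept (not the original function; the real one in the linked script handles datacenter roots and other edge cases):

#Walk up a VM's folder parents to build a path string like "Production\Web"
function Get-VMFolderPathSketch {
    param([Parameter(ValueFromPipeline = $true)]$VM)
    process {
        $path = $VM.Folder.Name
        $parent = $VM.Folder.Parent
        #Stop at the hidden root "vm" folder of the datacenter
        while ($parent -and $parent.Name -ne "vm") {
            $path = "$($parent.Name)\$path"
            $parent = $parent.Parent
        }
        [pscustomobject]@{ Name = $VM.Name; Path = $path }
    }
}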

The next part of the script defines the same variables as the previous one, although we have added the $dstvCenter and $dstCluster variables which define where we want to move the host.

Before we move the host, we need to get the VM folder list with this snippet of code:

$VMhostObj = Get-VMhost $VMhost
$VMlist = $VMhostObj | get-VM
$VMFolders = $VMlist | Get-VMFolderPath

Now, to remove the host from the current vCenter:

First we need to set it to a disconnected state, then remove it from vCenter:

Set-VMhost $vmhost -State "Disconnected" | out-null
Remove-VMhost $vmhost -server $srcvCenter -Confirm:$false

Next we want to disconnect from the source vCenter and connect to the destination vCenter:

#disconnect from source vCenter
Disconnect-VIServer $srcvCenter -Confirm:$false
#Connect to destination vCenter
$dstVCCreds = Get-Credential -Message "Enter credentials for $dstvCenter"
if (Connect-VIServer -Server $dstvCenter -Credential $dstVCCreds -ErrorAction SilentlyContinue -WarningAction SilentlyContinue -Force) {
  Write-Host "Connected to $dstvCenter" -ForegroundColor Green
}
else {
  Write-Host "Could not connect to vCenter server $dstvCenter" -ForegroundColor Red
  Write-Host "Error: $($Error[0])" -ForegroundColor Red
}

Now that we're connected we can add the host to the new vCenter. As we need a root password to do so, we get the user's input here. This can easily be stored in a variable; see my previous post on how, otherwise there are plenty of blogs out there explaining it. We also need to specify the target cluster that the host is going to be a member of.

$ESXcreds = Get-Credential -Username root -Message "Enter the root password for $vmhost"
$location = Get-Cluster $dstCluster
Add-VMhost -Server $dstvCenter -Name $vmHost -Location $location -Credential $ESXcreds | out-null

Once the host is added, all the virtual machines should be added to the inventory along with it and land in the Discovered Virtual Machines folder; we will sort this out further in the script. The next thing we need to do is add the host to the vDS so we can migrate our VMkernel ports and VM networks over. Once again, it is assumed that you've already imported or re-created your vDS in the new vCenter.

#add host to vDS
$vdsObj = Get-VDSwitch $vDS
$vdsObj | Add-VDSwitchVMHost -VMHost $vmhost | Out-Null
#Migrate first adapter to vDS
$pNIC1Obj = Get-VMHostNetworkAdapter -VMhost $vmhost -Physical -name $pNIC1
$vdsObj | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $pNIC1Obj -Confirm:$false

As there is better PowerShell support for moving VMkernel ports on a vDS than on a vSS, we can add one VMkernel port at a time. We get the port group object from the vDS and the VMkernel port we wish to move, then migrate it using the Set-VMHostNetworkAdapter cmdlet.

$vmk1pgObj = Get-VDPortgroup -name $vmk1pg -VDSwitch $vdsObj
$vmk1 = Get-VMHostNetworkAdapter -Name vmk1 -VMHost $vmhost
Set-VMHostNetworkAdapter -PortGroup $vmk1pgObj -VirtualNic $vmk1 -confirm:$false | Out-Null

We repeat this for each VMkernel port and prompt to continue between each one, so that we can validate that everything is OK. Now, this could be put into a loop to go through each adapter, but it was easier to do it this way (not necessarily better).
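If you did want to loop it, something like the below would work (a hedged sketch; the hashtable keys and values need to match your vmk numbers and the port group variables defined earlier):

#Pair each VMkernel port with its target port group name
$vmkMap = @{ "vmk1" = $vmk1pg; "vmk2" = $vmk2pg; "vmk3" = $vmk3pg }
foreach ($vmk in $vmkMap.Keys) {
    $pgObj = Get-VDPortgroup -Name $vmkMap[$vmk] -VDSwitch $vdsObj
    $nic = Get-VMHostNetworkAdapter -Name $vmk -VMHost $vmhost
    Set-VMHostNetworkAdapter -PortGroup $pgObj -VirtualNic $nic -Confirm:$false | Out-Null
    Read-Host "Migrated $vmk to $($vmkMap[$vmk]) - press Enter to continue"
}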

Once all the VMkernel ports have moved over, we can move the VM networking back to the vDS. This is done much the same way as we did before, although we are now using the -Distributed switch in the Get-VirtualPortGroup cmdlet.

$VMhostObj = Get-VMhost $VMhost
$VMlist = $VMhostObj | get-VM
Write-Host "Now migrating VM networks from $vss to $vds" -ForegroundColor Yellow
foreach ($VM in $VMlist){
$VMnic = Get-NetworkAdapter $VM
$VMnic | Set-NetworkAdapter -PortGroup (Get-VirtualPortGroup -VMHost $VMHost -Distributed -Name $VMnic.NetworkName) -Confirm:$false | Out-Null
Write-Host "Migrated $VM network to $vDS on $VMhost" -ForegroundColor Cyan
}

Note that we had to get $VMlist again, as the VM object IDs are now different since we have changed vCenter servers.

Next we swing the last adapter over to the vDS:

$pNIC2Obj = Get-VMHostNetworkAdapter -VMhost $vmhost -Physical -name $pNIC2
$vdsObj | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $pNIC2Obj -Confirm:$false

If all went well, we can move on to putting the virtual machines into their correct folder locations. Thanks to the functions we touched on earlier, it's an easy one-liner:

$VMfolders | Move-VMtoFolderPath

Now all we need to do is clean up after ourselves and delete the vSS from the host:

$vssObj = Get-VirtualSwitch -VMhost $VMhost -Name $vss
Remove-VirtualSwitch $vssObj -confirm:$False

This is probably not the most efficient code for these steps. It can easily be modified to loop through each host in a cluster, or even a whole vCenter server, however that could also spell disaster; use it at your own risk and discretion. It definitely beats doing it manually. Oh yeah, don't forget to let your backup team and resolver groups know that the VMs have moved to a new vCenter.
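For what it's worth, the per-cluster version is just a loop around the per-host logic. The function name below is hypothetical and assumes you've wrapped the body of the scripts into a function that takes a host name:

#Hypothetical per-cluster wrapper around the per-host migration logic
foreach ($hostToMigrate in (Get-Cluster "Cluster-01" | Get-VMHost)) {
    Invoke-HostMigration -VMHost $hostToMigrate.Name
}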

I hope this helps you with your migrations and has been a bit informative, as opposed to just giving you a script to execute.


Schedule Automatic VCSA Backups

Whilst evaluating the backup and restore methods for a 6.5 deployment, I came across the PowerShell functions Brian Graf put together to perform a file-based backup of the VCSA via the new VAMI RESTful API. Thanks Brian!

Grab the script here: https://github.com/vmware/PowerCLI-Example-Scripts/tree/master/Modules/Backup-VCSA

After some initial testing in the lab, the script worked a treat. Logically, the next step for me was to schedule the script to run from a management server. If you've seen the script, you would have noticed that the password needs to be stored in a variable, in a specific format as Brian calls out. For me this wasn't going to work; the security folk would have beat me across the head if I had passwords written in plain text within the script. Having used the New-VICredentialStoreItem command in the past to save credentials, I figured there should be a way to do something similar with the password variables in the script. After some google-fu, here is what I put together to get it working.

Firstly, we need to set up a couple of things before running the script:

The script requires you to authenticate against the VAMI of the VCSA (or the PSC) using the SSO domain credentials, so saving these credentials comes first. Luckily there is a PowerCLI cmdlet that will do this for us. It is as simple as below:

Connect-CisServer -User "administrator@vsphere.local" -Password "bla" -SaveCredentials

The important thing to note here is that this should be run in the context of the account that you wish to run the scheduled task under.
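One hedged way to do that (the account name below is purely illustrative) is to open a PowerShell session as the task account with runas, then run the Connect-CisServer line from there:

runas /user:KARPSLAB\svc-vcsabackup powershell.exe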

Next we need to create two encrypted files containing the passwords, one for the backup encryption password and the other for the backup target. This can be done with the below command (run it once for each file):

"YourSuperSecretPassword" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "D:\Scripts\BackupVCSA.vma"

OK, so we have our encrypted passwords. As mentioned before, we now need to pull these into variables in the script so that we can utilize Brian's backup function.

For the backup target location, I am using the vMA I had available in the environment. Here is how we pull that credential into a variable:

$getVmaPass = Get-Content "D:\Scripts\BackupVCSA.vma"

Next we need to convert the variable to a secure string format:

$SecurePassword = ConvertTo-SecureString $getVmaPass

Now we decrypt it into plain text:

$BSTR =[System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($SecurePassword)
$LocationPassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

Now if you run $LocationPassword in your PowerCLI session, you will see your password stored as a string. As Brian calls out, the API needs it in a particular format, which is "VMware.VimAutomation.Cis.Core.Types.V1.Secret".

I have just added Out-Null to the end of the line so that the password is not spat into the PowerCLI output / transcript.

[VMware.VimAutomation.Cis.Core.Types.V1.Secret]$LocationPassword | Out-Null
Now we just repeat the same steps, changing the variables to pull in the backup encryption password:

$getBackupPass = Get-Content "D:\Scripts\BackupVCSA.bu"
$SecurePassword = ConvertTo-SecureString $getBackupPass
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($SecurePassword)
$BackupPassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
[VMware.VimAutomation.Cis.Core.Types.V1.Secret]$BackupPassword | Out-Null

We now have our two variables that we can pass through to make our API call. Please note that this won't stop anyone who knows what they're doing from decrypting the files, however it's better than keeping the passwords in plain text.

Below is the entire script put together, which you can place into a scheduled task. You can use this on an external PSC also; you just need to change the backup type from -FullBackup to -CommonBackup.


<#
.SYNOPSIS
  Perform a backup of the VMware Virtual Center Server Appliance

.DESCRIPTION
  Performs a file based backup of the VCSA or External PSC.
  This script utilizes the Backup-VCSA module found here:
  https://github.com/vmware/PowerCLI-Example-Scripts/tree/master/Modules/Backup-VCSA

  Prerequisites:
  - The Backup-VCSA module has been placed in the appropriate modules directory
  - SSO credentials for the appliance have been saved in the context of the account that is running the script:
    Connect-CisServer -User "administrator@vsphere.local" -Password "bla" -SaveCredentials
  - Passwords for the backup and appliance have been stored as secure strings to file using:
    "secret" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "D:\some\dir\secret.file"

.INPUTS
  None

.OUTPUTS
  Backup files will be stored on the vMA

.NOTES
  Version:        1.0
  Author:         vKARPS
  Creation Date:  26/10/17
#>




#Start Transcript
Start-Transcript -path  D:\Scripts\BackupVCSA_prd.log -Force

#Connect to VAMI on VCSA (the server name below is an example for my lab; Connect-CisServer will use the credentials saved earlier with -SaveCredentials)
Connect-CisServer -Server "vcsa01.karpslab.local"

#Get credential for vMA
$getVmaPass = Get-Content "D:\Scripts\BackupVCSA_prd.vma"
$SecurePassword = ConvertTo-SecureString $getVmaPass
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($SecurePassword)
$LocationPassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
[VMware.VimAutomation.Cis.Core.Types.V1.Secret]$LocationPassword | Out-Null

#Get Credential for Backup
$getBackupPass = Get-Content "D:\Scripts\BackupVCSA_prd.bu"
$SecurePassword = ConvertTo-SecureString $getBackupPass
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($SecurePassword)
$BackupPassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
[VMware.VimAutomation.Cis.Core.Types.V1.Secret]$BackupPassword | Out-Null

#Set Comment for the backup
$Comment = "VCSA Backup $((Get-Date).ToString('yyyy-MM-dd-hh-mm'))"

#Setup Backup Target
$LocationType = "SCP"
$location = "$((Get-Date).ToString('yyyy-MM-dd-hh-mm'))"
$LocationUser = "vi-admin"

#Initiate backup -CommonBackup is configuration only as PSC does not contain performance statistics
Backup-VCSAToFile -BackupPassword $BackupPassword -LocationType $LocationType -Location $location -LocationUser $LocationUser -LocationPassword $LocationPassword -Comment $Comment -ShowProgress -FullBackup

#Set variables to 0
$getVmaPass, $getBackupPass,$SecurePassword,$BSTR,$BackupPassword,$LocationPassword,$LocationType,$location,$LocationUser,$Comment = 0

#Disconnect from VAMI
Disconnect-cisserver -confirm:$false
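To actually schedule it, something like the below works on the management server (a hedged example; the task name, trigger time and script path are illustrative, and the task must run under the account that saved the credentials):

#Register the backup script as a daily scheduled task
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File D:\Scripts\BackupVCSA_prd.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 11pm
Register-ScheduledTask -TaskName "Backup-VCSA" -Action $action -Trigger $trigger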

I should really get around to setting up a GitHub account hey…

If you're reading this and cringing because you know a better way (especially for storing the passwords), please get in touch as I would be keen to learn something from you.

Anyway, hope you find this helpful.

PowerCLI not connecting to vCenter in another domain

I came across an odd issue the other day where my PowerCLI session would not authenticate to a vCenter in another domain within the forest, although I could get to the one in my local domain. Here's the error I was getting:

Connect-viserver vcsa02.deathstar.local
Connect-viserver : 13/10/2017 10:17:35 AM Connect-VIServer Could not resolve the requested VC server.
Additional Information: There was no endpoint listening at https://vcsa02.deathstar.local/sdk that could accept the
message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.
At line:1 char:1
+ Connect-viserver vcsa02.deathstar.local
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (:) [Connect-VIServer], ViServerConnectionException
+ FullyQualifiedErrorId : Client20_ConnectivityServiceImpl_Reconnect_NameResolutionFailure,VMware.VimAutomation.ViCo

Some head banging ensued. After ruling out the usual DNS and vCenter service functionality, I realised that I could connect from another workstation, so I knew it was something with the PowerShell instance on my workstation.

Turns out, I had forgotten that the day before I was playing with the PowerShell proxy settings so that I could get out to the interwebz and update to the latest PowerCLI release.

Get-PowerCLIConfiguration returned ‘UseSystemProxy’:

[10:55:38]# get-powercliconfiguration

Scope    ProxyPolicy    DefaultVIServerMode InvalidCertificateAction DisplayDeprecationWarnings
-----    -----------    ------------------- ------------------------ --------------------------
Session  UseSystemProxy Multiple            Unset                    True
User                    Multiple
AllUsers NoProxy

After setting the proxy policy to 'NoProxy' with the below command, I was able to connect again.

Set-PowerCLIConfiguration -ProxyPolicy NoProxy

Updating php 7.0 to php 7.1 Ubuntu 16.04

Disclaimer: I come from a Windows background; my expertise in Linux is quite slim.

In my home lab I had a requirement to update a software package which dropped support for php 7.0, which meant I needed to upgrade to 7.1 prior to upgrading the application.

Here are the steps that worked for me:

First, stop the web server, in my case Apache.

sudo service apache2 stop

Get a list of the current php packages you are using

dpkg -l | grep php

Add the repo, refresh the package lists, then install the base packages, followed by the extra ones you were using with 7.0 (my packages may differ from yours):

sudo add-apt-repository ppa:ondrej/php
sudo apt-get update

sudo apt-get install php7.1 php7.1-common
sudo apt-get install php7.1-cli libapache2-mod-php7.1 php7.1-mysql php7.1-curl php7.1-mcrypt php7.1-json php7.1-mbstring php7.1-bcmath php7.1-zip php7.1-intl php7.1-opcache php7.1-readline

Reconfigure apache2 for php 7.1 (if the 7.0 module is still enabled, disable it first) and restart the service

sudo a2dismod php7.0
sudo a2enmod php7.1

sudo service apache2 restart

Check that php 7.1 is now running

php -v

PHP 7.1.8-2+ubuntu16.04.1+deb.sury.org+4 (cli) (built: Aug 4 2017 13:04:12) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.1.0, Copyright (c) 1998-2017 Zend Technologies
    with Zend OPcache v7.1.8-2+ubuntu16.04.1+deb.sury.org+4, Copyright (c) 1999-2017, by Zend Technologies

Remove your old 7.0 packages

sudo apt purge php7.0-cli php7.0-bcmath php7.0-common php7.0-curl php7.0-intl php7.0-json php7.0-mbstring php7.0-mcrypt php7.0-mysql php7.0-opcache php7.0-readline php7.0-zip

vSphere Replication ‘Solution user detail’ certificate is invalid

I came across an oddball today when upgrading a customer's vSphere Replication appliance from 5.8.1 to 6.0. For those who have done a VR in-place upgrade before, you will know that there is really not much to it: mount ISO, install update, reboot, done. With VR 6.0, an additional step was introduced after you perform the upgrade; you have to go in and register it with your lookup service. All this requires is the lookup service URL and the SSO administrator credentials, and you're done, as per the documentation here.

Instead I was presented with this error message:

'Solution user detail' certificate is invalid - certificateException java.security.cert.CertificateExpiredException: NotAfter: Fri Jun 17 23:01:55 UTC 2016


It's probably worth noting that this came off the back of a vCenter upgrade to 6.5, so 'solution user' automatically got me looking at the SSO solution users in the web client and the vCenter extension manager, to validate that the certificate thumbprints matched up.

Sure enough, the thumbprint that was registered with vCenter matched the one on the vSphere Replication appliance. My google-fu didn't get me any further either, so the next thing that came to mind was SSH and logs. I saw the Security tab and figured I would enable SSH from there… nope, no enable SSH option there, dummy. What I did see on the Security tab was this.

Right, the expiry date matches the error message!

So here is what you need to do:

Head back over to the configuration page, where there is an "Install a new SSL Certificate" section in which you can generate and install a self-signed certificate or supply your own.

Hit the Generate and Install button, acknowledge the warning that it will overwrite the existing certificate, and let it do its thing.



Once it is done, you will be prompted by the following message.


Reload your browser as it states, even though it seems to do this itself when it takes you to the login page.

Once you’re back in, populate your Lookup Service URL and SSO Administrator credentials at the configuration tab and hit Save and Restart service. If all went well, you should get the below message.


Migrating Windows vCenter 5.5 to VCSA 6.5

So it’s that time again, evaluating how us vSphere admins are going to get our customers/employers up to the latest and greatest version. If you’ve ever done upgrades from previous versions to 5.5, you will immensely appreciate the work that the engineering team over at VMware has done to help us migrate over to 6.5. The Migration Assistant is really amazing and does a lot of the heavy lifting.

In saying that, there are a few things you’re going to have to validate and prepare for if you want to have a smooth migration. I recently went through the steps in a lab environment and will outline the steps I went through to both prepare for the migration and the migration itself. Yes – there may be blogs out there that go through the process already. However, I found that they only covered basic deployment models and this is also an opportunity for me to have some long overdue blog writing practice. Hopefully someone finds this useful.

If there is anything that I’ve missed, messed up or fat fingered please let me know in the comments below.

In my scenario, vCenter was separated between two Windows servers. One held the SSO, Inventory Service, Update Manager and Web Client services, while the other held vCenter. Auto Deploy is also in the picture, but on its own server. Now, if you've been good and read through the upgrade documentation, you will notice that as the configuration currently stands, you won't be able to start the migration. Here are links to useful articles that got me started with planning out the upgrade.

Important info before upgrading to 6.5: KB2147548
Best Practices in migrating to vCenter 6.5: KB2147686
FAQs about the migration (for moving to 6.0 but still relevant and helpful): KB2146439
Preparing for migration blog
William Lam has a great collection of relevant links here:

Update: @emad_younis has a good summary here

Before I started anything else, I wanted to make sure my current deployment was re-aligned to a model supported by the Migration Assistant. The first thing I needed to do was move the Inventory Service and Update Manager off the server with SSO installed and over to the vCenter server.

Moving Inventory Service

Moving the Inventory Service is quite simple, and this KB article outlines it quite well.
Tip #1: Make sure you have the right version of the 5.5 installer mounted. If this doesn't match your vCenter server you will have a bad time (I know this as I made this mistake myself and spent more time than I'd like to admit troubleshooting why my vCenter wouldn't talk to my Inventory Service after the move).

You may want to consider backing up your Inventory Service database and restoring it onto the vCenter server; this link will give you instructions on how to do so. Tip #2: Make sure you have enough space available on the drive on which you are creating the backup file. The database in my environment was about 8GB, which almost filled up my system drive, oops.

Once you've re-installed it, you will just need to re-point your vCenter to the Inventory Service with the below command.

cd C:\Program Files\VMware\Infrastructure\VirtualCenter Server\isregtool
register-is.bat vCenter_Server_URL Inventory_Service_URL Lookup_Service_URL
register-is.bat https://vc01.vkarps.local:443/sdk https://vc01.vkarps.local:10443 https://vcsso01.vkarps.local:7444/lookupservice/sdk  

Moving Update Manager

Next, we need to move Update Manager; this article sums up the process quite well. Tip #3: Remember that the DSN needs to be 32-bit.

Current and Target State

So once things have been shuffled around, it should look something like this:

VM1: Single Sign On
VM2: vCenter, Inventory Service, Update Manager, Web Client

After the upgrade, VM1 will become the PSC and VM2 will become the VCSA.

Prep tasks

Before you go any further, make yourself a checklist to ensure you've got everything ready for the migration. These are some of the things that I picked up during the planning phase and had in my checklist, in no particular order:

  • Give your colleagues and other teams who use vCenter a heads up, chances are that they are unfamiliar with the web client
  • Test your local administrative credentials on the source Windows servers, in case of roll back
  • Have your vCenter service account and SSO admin passwords handy
  • Validate that you have domain join permissions.
  • Validate that your vSphere component versions are supported by the migration (Hosts earlier than 5.5 are not) 
  • List out any 3rd party extensions to vCenter or anything that interfaces with it and their versions to confirm their support for 6.5. Ie. Backup software, storage plugins or orchestration software. It is likely that you will need to update these also. 
  • Prepare your sql database guide 
  • Check the size of your database to estimate the size your VCSA needs to be here
  • Estimate your migration time to 6.5 ( Will help with determining and justifying your outage window) KB214620
  • NTP is configured throughout the entire vSphere environment and is in sync (see the snippet after this list for a quick check).
  • Forward and reverse DNS records work for vSphere components
  • Target vCenter cluster DRS setting cannot be set to Fully Automated; Partially Automated or Manual will do.
  • Verify that certificate checking is enabled in vCenter here
  • Lockdown mode must be disabled on at least one host in the cluster which you’re deploying the appliances to
  • The vCenter service account has ‘replace process level token’ rights on the computer object or OU that your vCenter server resides. Policy Path: “Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment\Replace a process level token”
  • If you’re using DHCP for your vCenter address, ensure that the port group or vSwitch accepts mac address changes. ( This was not the case in my scenario but something worth pointing out)
  • Validate that you have enough free disk space on your source server, as the Migration Assistant will stage data there as it extracts it (it will call this out when it does its checks). The required space will vary with your database size.
  • Fill out the required installation requirements which vmware has provided here
  • If you have another vCenter server in the mix with it’s own SSO domain, you may consider consolidating SSO domains prior to going to 6.5 as you won’t be able to later KB2033620.
  • Backups, backups and more backups
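For the NTP item, a quick PowerCLI spot-check across all hosts can look like this (a hedged sketch; it simply lists each host's configured NTP servers and whether the ntpd service is running):

Get-VMHost | Select-Object Name,
    @{N="NTPServers"; E={ (Get-VMHostNtpServer -VMHost $_) -join ", " }},
    @{N="NTPRunning"; E={ (Get-VMHostService -VMHost $_ | Where-Object { $_.Key -eq "ntpd" }).Running }}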

Deploying new PSC appliance

Alright, time to get the PSC appliance deployed so we can have it ready for migrating the SSO service.
On VM1, which hosts the SSO service, mount the 6.5 VCSA iso and launch the Migration Assistant. The migration assistant is located in X:\migration-assistant\VMware-Migration-Assistant.exe.

A console window will show up and ask you for your SSO password, afterwards it will begin to perform some pre-checks.

You should see something similar to the above. If you’ve missed anything in the pre-checks or something doesn’t stack up, it will let you know. I found that when I first ran this I didn’t have enough storage space for the migration. It should prompt you to enter another location for the migration data.
Have a read through it, as you may miss something important; I found it's quite informative and notifies you of certain extensions you may have that will not work after the migration. If you've been paying attention, you will notice this screenshot is from a server which had the web client installed on it, which will not work after the SSO service is migrated. (This is from another lab environment, but I was happy to not have the web client available during the migration.) If all is well, you will see "Waiting for migration to start…"

Next from a workstation or server, mount the same ISO but this time launch the ui-installer (I haven’t had a play with the cli-installer, but that’s an option too for the ones who want to script the migration). Note that this can be run from Unix and OSX machines too!

Click on migrate and proceed to the next step.

Click Next, sign your life away and hit Next again.

Enter in the SSO server fqdn and the SSO Administrator credentials

Next you will be prompted to verify the thumbprint, which you can find in the Migration Assistant console. Make sure this matches up.

Next we will be defining where we wish to deploy our PSC to. Enter in your vCenter server name and credentials to an account which has administrative access.

Select the folder which you wish to place the new appliance

Select your target cluster/ESXi host

Supply your target appliance name and a root password. Note that the new appliance will inherit the OS name of the source server. This might not conform to your naming standard in the enterprise but in my experience I’ve found that it’s easier to have the VM object name the same as the name in the OS in large environments. The installer won’t let you name the VM object the same as your source server as it already exists, so you can either rename your source VM object or rename the appliance VM object later and perform a svMotion to make sure the vm files are consistent.

Select the datastore you wish to deploy to and check whether or not you want it thin provisioned

Now you will need to populate the temporary IP details that the appliance will use during the initial parts of the migration.

Next you will be presented with a summary page outlining the steps we just went through. Make sure it all stacks up and hit Finish. The wizard will now deploy your new Platform Services Controller.

If all went well, you will see the below screen and have a newly deployed PSC appliance.


Migrating to PSC

So far we have deployed the target PSC appliance but have not migrated anything. In stage 2 of the migration wizard we will do just that. When you hit Next, the wizard will go off and connect to the source SSO server we specified earlier.

You will be prompted to enter in credentials of an AD account which has permissions to join the domain

Choose if you want to opt in or out of the Customer Experience Improvement Program

Review the summary and check the box at the bottom confirming that you have backed up your data.

When you click Next, you will be presented with a warning advising you that the source machine will be powered off during this process. You will have an outage from this point, so if you're working on production make sure you are in your outage window before clicking OK.

Now the migration will kick off and hopefully finish successfully in a short while. After you get the success message you will be given some URLs for your new PSC. They should look something like the following:

You should now be able to hit the PSC client at https://pscname:443/psc

Deploying VCSA

Now that we have a new external PSC, it is time to move on to getting vCenter migrated over to 6.5. Make sure that you have upgraded all your SSO servers prior to upgrading vCenter.

Firstly we need to deploy the new vCenter appliance, which is quite similar to the steps we took with the PSC appliance. Launch the same ISO and start the wizard as with the PSC, although this time around the wizard will prompt for the SSO and vCenter service account credentials. You might see a little more going on as the wizard performs its health checks, as it is more than likely that you have some extensions hooked in with your vCenter (i.e. Update Manager, backup or storage software). If you've done your prep there should be no surprises here, right? :)
Once the Migration Assistant has finished and you're happy to proceed, launch the UI installer from another workstation as in the previous steps. Enter the required details for the source, target, name, etc. We will skip ahead to the deployment size part.

In this step you will need to select the deployment size of your new VCSA. Note that you can pick a storage size separately from the compute resources. Go back to your checklist and see if your existing database size will fit within the default storage size for the deployment you've selected. In my case I needed to increase the size.

The next few steps are the same as before, specify a target datastore, temp IP details and wait for the wizard to finish the deployment. If it all went to plan, you should get a successful completion message. Go on and check out the appliance management page at https://tempIPyouProvided:5480.

Migrating to VCSA

Here is the part where most of the sitting around will happen while the wizard does the heavy lifting. From the previous completion page, click Continue and then next at the Stage 2 screen.

The wizard will now go through some pre-migration checks. If you were alerted about extensions in the Migration Assistant, you will get similar warnings in the migration wizard. Have a good read of what's there, as there might be a few things you need to fix before proceeding. After you've acknowledged the warnings you will be prompted to specify AD domain credentials.

Next you need to decide which data you want to copy over; in my case I wanted to copy all the historical data for no other reason than curiosity. If you have any auditing/compliance requirements in your environment, it might be a good idea to select the same.

The next page gives you a summary of the settings we chose along with a checkbox confirming you’ve performed all the relevant backups. Note that from here onwards the migration will start and shortly after your vCenter will be unavailable. So once again, if you’re in production make sure you’re within your outage window before proceeding. As before, the wizard will warn you of this:

Now it’s time for a well deserved coffee, so go off for a while and enjoy yourself. This part can take a few hours, depending on your source and your database size. You should have a rough idea from the calculations you did in the prep tasks. 🙂

Once it’s done without any errors you should have a smile on your face and be looking at a screen like below:

The new 6.5 VCSA should be up and running with all your configuration intact. The wizard will display links to your new VCSA management pages:

vSphere Client: https://vcsa.vkarps.local:443/vsphere-client
HTML5 Client ( Partial features available): https://vcsa.vkarps.local:443/ui
Appliance Management:https://vcsa.vkarps.local:5480

Be sure to get familiar with all the clients, especially the HTML5 client which will give you a good idea as to what you will see more of in the future. Don’t forget to poke around the appliance management client. It now gives insight into what’s going on within the vPostgres database which I’m sure you will appreciate.


I'm hoping that everything went well in your migration, however things can go wrong. Fortunately this is one of the easiest rollbacks I've had to do in my experience. Due to the way the migration is done, rolling back is a matter of switching off the appliances and rejoining your Windows machines to the network.

Here are the steps that I followed to rollback:

1. Power off the VCSA and PSC appliances
2. Power on your Virtual Center and SSO servers
3. Logon as Local administrator
4. Ensure the VM is connected to the network
5. In an elevated powershell run the following command to reset the computer password on each server:

Reset-ComputerMachinePassword -Server <YourDomainController> -Credential <yourAdminusername>

6. Reboot both servers
7. Go do your health checks to ensure all services have started and that your VC deployment is back to where you left it.


The migration to 6.5 is one of the easiest ones to date in my opinion and I hope that I’ve done a good job in outlining what’s required and will help you out in your migration. Feel free to give me any type of feedback in the comments section below.

Get on it and #migrate2vcsa.

Host profile removing vmk0 management port

Up until recently I had very minimal exposure to host profiles, and I recently had the pleasure of getting better acquainted with the feature. There is a well documented "bug" with host profiles & Auto Deploy where the vMotion kernel port takes the vmk0 port and the host disconnects from vCenter (e.g. here & here), however my problem was slightly different and the many blog posts I found on the problem did not solve it for me.

Here is what went down.

I was targeting a host deployed via Auto Deploy for an upgrade from 5.1 to 5.5. I pointed the host at the 5.5 image and rebooted it – all good. I made some slight tweaks to the config, updated the answer file and applied the profile. On the subsequent reboot the host came back up with part of its new config, and I then watched vCenter apply the remaining settings. What happened next was that I noticed the task was attempting to reconnect to the host. Flicking over to the IMM showed that the host had been re-configured with the vMotion kernel port IP address, and soon enough vCenter marked the host as disconnected.

Some head scratching ensued and many hours went by as I tried to get to the source of the problem, from furious googling to validating the host profiles against other clusters and a number of pointless reboots. I found myself in a bit of a "chicken or the egg" scenario, as the host profile prevented me from modifying the management network IP and I couldn't manage the host from vCenter :(. I soon discovered that there was a brief window where the host was connected to vCenter and I could remove the host profile from it. After two reboots I was back to a host without the host profile attached, where I could again manage the host.

I decided it was time to configure the host manually and create a new host profile. It all went swimmingly and my host was now compliant, woo hoo! Happy days… until I went to re-build the next host in the cluster. As I went to apply the profile, the summary page stated something along the lines of "Remove vmk0 from vSwitch0". Damn, same issue!

The same headaches eventuated and I wasted another couple of hours. After some deliberating with colleagues, I figured I would create the VMkernel ports manually and then try applying the profile. This time around there were no messages about removing vmk0; I felt a slight hint of confidence and rebooted the host. This time the host came back compliant and configured as per the host profile, finally!

So in conclusion: create all your VMkernel ports prior to applying the host profile.

This may be common knowledge to some, but it wasn't to me, and I didn't see it called out anywhere in my searches, although I may have missed it.

Thanks for reading!