VxRail – Basic vSAN UI Overview

The following post provides a basic UI overview of how the VxRail Appliance disk groups are configured and viewed from both the VxRail Manager and the vSphere Client.

The VxRail Appliance vSAN disk groups are configured in one of two ways:
Hybrid – single SSD disk for caching and one or more HDD disks for capacity.
All-flash – single SSD disk for caching and one or more SSD disks for capacity.
The amount of storage available to the vSAN datastore is based on the capacity drives.

A VxRail node allows for multiple disk groups (please refer to the official VxRail documentation for specifics, as the quantity of disk groups differs per VxRail model), which in turn provides multiple cache drives per node, potentially improving performance. In this example each VxRail Appliance node has two All-flash disk groups; each node in the cluster is required to have the same storage configuration.

vSphere Client UI

From the vSphere Client, click on the cluster and navigate to ‘Configure – vSAN – General’. From here we can see that the vSAN cluster for this VxRail Appliance comprises 24 disks in total (4x 13G servers).

vxrailvsanui1

‘vSAN – Disk Management’ displays both the Disk Groups and the disks associated with each Disk Group. Taking the example below of the first ESXi host in the cluster, we can see the VxRail node has a total of 6 disks contributing storage to the vSAN cluster, comprising two disk groups with three disks in each disk group.
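
For reference, the same disk group membership can also be checked from the ESXi shell of any node in the cluster (assuming SSH has been enabled). Each disk claimed by vSAN is reported along with whether it is an SSD, whether it is a capacity-tier device and the UUID of the disk group it belongs to, which makes it easy to map the CLI output back to the ‘vSAN – Disk Management’ view described above (exact field names vary slightly between vSAN releases):
esxcli vsan storage list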

vSphere – Migrating A VM To A New VMFS Datastore (CLI: VMKFSTOOLS)

You may encounter a scenario where a vCenter server is not part of the solution and Storage vMotion (SVMotion) is not an option for migrating a VM from one VMFS datastore to another. In this case you can either use the vSphere VI Client datastore browser to copy/move the VM data files between datastores or, as outlined here, take the CLI approach to migrate a specific VM to another datastore.

In this example a second RAID1 Mirror has been added to a standalone DELL Server and a new VMFS datastore has been created labelled ‘datastore2’:
MigrateVMToNewDS2

Note: Before proceeding ensure no snapshots are present on the VM being migrated.

1. Log into the ESXi host as ‘root’ via SSH.

2. List all VMs present on the ESXi host:
vim-cmd vmsvc/getallvms
MigrateVMToNewDS3
You can also use: esxcli vm process list

List the inventory ID of the virtual machine ‘MartinWIN7’ which is being migrated to the new datastore:
vim-cmd vmsvc/getallvms |grep MartinWIN7
MigrateVMToNewDS4
We can see from the output that the inventory ID for this VM is ‘9’.
Check if VMID ‘9’ has a snapshot: vim-cmd vmsvc/get.snapshot 9
If necessary, remove the snapshot: vim-cmd vmsvc/snapshot.remove 9

3. Shutdown the virtual machine ‘MartinWIN7’ VMID ‘9’:
Check the power state of the virtual machine with the following command:
vim-cmd vmsvc/power.getstate 9
MigrateVMToNewDS5
Shutdown the virtual machine with the command:
vim-cmd vmsvc/power.shutdown 9
MigrateVMToNewDS6

Alternative Power-off cmds:
vim-cmd vmsvc/power.off vmid
Or using esxcli: esxcli vm process kill -t soft -w world_id
How to gather the world_id: esxcli vm process list
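
Note: power.shutdown only requests a guest OS shutdown via VMware Tools and returns straight away, so it can be worth waiting until the VM reports ‘Powered off’ before unregistering it. A minimal sketch using the commands above (the 5 second poll interval is arbitrary):
while vim-cmd vmsvc/power.getstate 9 | grep -q "Powered on"; do sleep 5; done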

4. Unregister the VM from the ESXi host:
vim-cmd vmsvc/unregister 9
MigrateVMToNewDS7
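
Once unregistered, the VM should no longer appear in the inventory listing from step 2:
vim-cmd vmsvc/getallvms |grep MartinWIN7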

5. List the VMFS Volumes available on the ESXi host:
cd /vmfs/volumes/
ls -l

MigrateVMToNewDS8
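
Before copying anything across, it is also worth confirming that the destination datastore has sufficient free space for the VM being migrated; either of the following will display the capacity and free space of each VMFS volume:
df -h
esxcli storage filesystem list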

6. Change directory to the VMFS volume where the VM currently resides (‘datastore1’) and gather the required information such as vmdk and vmx file names:
cd /vmfs/volumes/datastore1/
ls -l

MigrateVMToNewDS9
You can view the actual amount of space used by the vmdk files on ‘datastore1’ by using the cmd: du -ah
If the size displayed for a *.vmdk is zero, this would imply the vmdk is a thin disk, and the corresponding *-flat.vmdk would display the actual used space of the thinly provisioned vmdk, similar to the following:
MigrateVMToNewDS24

cd /vmfs/volumes/datastore1/MartinWIN7/
ls -l

As you can see below, each VM disk has a flat file and a descriptor file; for example, the virtual machine ‘MartinWIN7’ has a disk descriptor named MartinWIN7.vmdk and a corresponding MartinWIN7-flat.vmdk file.
MigrateVMToNewDS10

7. Change directory to the new VMFS volume where the VM will be migrated to and create a new folder for the VM files:

cd /vmfs/volumes/datastore2/
mkdir MartinWIN7
ls -l

MigrateVMToNewDS11

8. Use the ‘vmkfstools’ command to clone the VM’s disks to ‘datastore2’; once the cloning process has completed successfully we can delete the original VM on ‘datastore1’. The ‘-i’ option used with vmkfstools creates a copy of a virtual disk, using the following syntax:
vmkfstools -i src dst
Where src is the current vmdk location (‘datastore1’) and dst is the destination (‘datastore2’) where you would like the vmdk file copied to.

You can choose the disk format by using the -d|--diskformat suboption. The three choices of disk format are:
zeroedthick (default) – all space is allocated during creation but only zeroed on first write, referred to as lazy zeroed.
eagerzeroedthick – all space is allocated and fully zeroed during creation.
thin – only the required space is allocated; the remainder is allocated and zeroed on demand.

vmkfstools -i src dst -d|--diskformat [zeroedthick|thin|eagerzeroedthick] -a|--adaptertype [buslogic|lsilogic|ide]
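
As an illustration, if you wanted the destination copy to be thin provisioned (a common way to reclaim space during this type of migration), the same clone operation could be run as follows; the paths shown simply mirror the ‘MartinWIN7’ example used in the next step:
vmkfstools -i "/vmfs/volumes/datastore1/MartinWIN7/MartinWIN7.vmdk" "/vmfs/volumes/datastore2/MartinWIN7/MartinWIN7.vmdk" -d thin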

Checking the Disk format using ‘vmkfstools -t0’ before cloning ‘MartinWIN7.vmdk’:
vmkfstools -t0 /vmfs/volumes/datastore1/MartinWIN7/MartinWIN7.vmdk
Example output; the ‘VMFS Z’ entries indicate that the disk is lazy zeroed (zeroedthick):
MigrateVMToNewDS22
You may also use ‘vmkfstools -D’ to check the Disk Format:
vmkfstools -D /vmfs/volumes/datastore1/MartinWIN7/MartinWIN7_1.vmdk
Example output; ‘tbz 0’ indicates that the disk is eagerzeroedthick:
MigrateVMToNewDS23

The following example illustrates cloning the contents of the virtual disk ‘MartinWIN7.vmdk’ from /datastore1/MartinWIN7 to a virtual disk file with the same name on the /datastore2/MartinWIN7 file system:
vmkfstools -i "/vmfs/volumes/datastore1/MartinWIN7/MartinWIN7.vmdk" "/vmfs/volumes/datastore2/MartinWIN7/MartinWIN7.vmdk" -d zeroedthick -a lsilogic

MigrateVMToNewDS13

Cloning the second disk ‘MartinWIN7_1.vmdk’:
vmkfstools -i "/vmfs/volumes/datastore1/MartinWIN7/MartinWIN7_1.vmdk" "/vmfs/volumes/datastore2/MartinWIN7/MartinWIN7_1.vmdk" -d zeroedthick -a lsilogic

Monitoring progress:
MigrateVMToNewDS15

9. To copy (cp) the virtual machine configuration (.vmx) file to the new folder, run the command:

cp "/vmfs/volumes/datastore1/MartinWIN7/MartinWIN7.vmx" "/vmfs/volumes/datastore2/MartinWIN7/MartinWIN7.vmx"
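
The remaining small files in the VM folder can be copied across in the same way should you wish to preserve them; the exact set varies per VM but typically includes the BIOS settings (.nvram), the .vmxf file and the vmware*.log files (the filenames below assume the ‘MartinWIN7’ naming used throughout this example):
cp "/vmfs/volumes/datastore1/MartinWIN7/MartinWIN7.nvram" "/vmfs/volumes/datastore2/MartinWIN7/"
cp "/vmfs/volumes/datastore1/MartinWIN7/MartinWIN7.vmxf" "/vmfs/volumes/datastore2/MartinWIN7/"
cp /vmfs/volumes/datastore1/MartinWIN7/vmware*.log "/vmfs/volumes/datastore2/MartinWIN7/"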

10. Next, register the newly cloned virtual machine ‘MartinWIN7’, now residing on ‘datastore2’, with the ESXi host using the vim-cmd solo/registervm cmd:
vim-cmd solo/registervm /vmfs/volumes/datastore2/MartinWIN7/MartinWIN7.vmx MartinWIN7

List the inventory ID of the new virtual machine with the command:
vim-cmd vmsvc/getallvms |grep MartinWIN7

Power-on the virtual machine with VMID 10:
vim-cmd vmsvc/power.on 10

MigrateVMToNewDS16

You will receive a prompt in the VI Client; choose the option ‘I copied it’:
MigrateVMToNewDS17
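
If no VI Client is available, the same question can be answered directly from the ESXi shell (from a second SSH session if the power-on command is still waiting). Running vim-cmd vmsvc/message with just the VMID lists any pending question together with its message ID and the available choices, and running it again with the message ID and the number of your chosen answer responds to it (message_id and choice_number are placeholders, take the actual values from the first command's output):
vim-cmd vmsvc/message 10
vim-cmd vmsvc/message 10 message_id choice_number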

Displaying the VM IP address (if you wish to RDP and confirm VM status):
vim-cmd vmsvc/get.guest 10 | grep -m 1 "ipAddress = \""
MigrateVMToNewDS18

11. If all looks well and you are happy that the VM is operating normally, you may delete the old VM directory (in this example the original folder on ‘datastore1’ was first renamed to ‘MartinWIN7OLD’ before deletion). Delete the directory and all of its contents:
cd /vmfs/volumes/datastore1/
rm -r MartinWIN7OLD/

MigrateVMToNewDS21
MigrateVMToNewDS20

Useful VMware KBs:
Cloning and converting virtual machine disks with vmkfstools (1028042)
Cloning individual virtual machine disks via the ESX/ESXi host terminal (1027876)
Determining if a VMDK is zeroedthick or eagerzeroedthick (1011170)
Performing common virtual machine-related tasks with command-line utilities (2012964)

Removing PowerPath/VE From vSphere Host(s) using ESXCLI/PowerCLI

You may have a requirement to uninstall PowerPath/VE from a vSphere host. The first series of steps outlines how to remove PP/VE from a single host; if you need to remove PP/VE from a cluster of hosts, please see the PowerCLI section below.

1. First, enable SSH on the host in order to issue the ESXCLI cmds:
pp_remove_3

2. Retrieve the names of the PowerPath packages installed on the vSphere host by typing:
# esxcli software vib list
# esxcli software vib list | grep power
remove_pp_from_esx_1
Note: VIB stands for vSphere Installation Bundle

3. Using the package names, remove the PowerPath/VE packages by issuing the following command:
# esxcli software vib remove -n powerpath.cim.esx -n powerpath.plugin.esx -n powerpath.lib.esx
remove_pp_from_esx_2

4. After a successful uninstall, reboot the host:
Enter Maintenance Mode: esxcli system maintenanceMode set --enable=true
Reboot Host: esxcli system shutdown reboot --reason="Uninstalling PowerPath/VE"

5. Exit Maintenance Mode on completion of the reboot:
Exit Maintenance Mode: esxcli system maintenanceMode set --enable=false
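
Once the host is back up, you can optionally confirm that the PowerPath VIBs are gone and that storage devices are now being claimed by the native multipathing plugin (NMP):
# esxcli software vib list | grep -i power
# esxcli storage nmp device list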

Using PowerCLI To Remove PowerPath From Cluster Hosts
1. Connect to vCenter and enable SSH on all hosts in your specified cluster:
Connect-VIServer -Server 'vCenter_IP' -User 'administrator@vsphere.local' -Password 'Password'
Get-Cluster YourClusterName | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}

2. Confirm the presence of PP/VE on the cluster hosts:
$hosts = Get-Cluster YourClusterName | Get-VMHost
forEach ($vihost in $hosts)
{
$esxcli = get-vmhost $vihost | Get-EsxCli
$esxcli.software.vib.list() | Where { $_.Name -like "*powerpath*"} | Select @{N="VMHost";E={$ESXCLI.VMHost}}, Name, Version
}

3. Remove PP/VE from all the hosts in the cluster:
$hosts = Get-Cluster YourClusterName | Get-VMHost
forEach ($vihost in $hosts)
{
$esxcli = get-vmhost $vihost | Get-EsxCli
$PPVE=$esxcli.software.vib.list() | Where { $_.Name -like "*powerpath*"}
$PPVE | ForEach { $esxcli.software.vib.remove($false,$true,$false,$true,$_.Name)}
}

4. Enter each host in maintenance mode and reboot (use with caution!):
$hosts = Get-Cluster YourClusterName | Get-VMHost
forEach ($vihost in $hosts)
{
$esxcli = get-vmhost $vihost | Get-EsxCli
$esxcli.system.maintenanceMode.set($true)
$esxcli.system.shutdown.reboot(10,"UninstallingPP")
}

5. Disable SSH on all hosts in the cluster:
Get-Cluster YourClusterName | Get-VMHost | ForEach {Stop-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"}) -Confirm:$FALSE}

Notes:
Useful cmds to check the status of LUN connectivity:

Check VMKernel Log for PowerPath Errors: cat /var/log/vmkernel.log | grep PowerPath
2014-07-10T04:42:00.320Z cpu8:33448)ALERT: PowerPath:MpxRecognize failed. Path vmhba1:C0:T0:L255 not claimed
2014-07-10T04:42:00.320Z cpu8:33448)ALERT: PowerPath:Could not claim path vmhba1:C0:T0:L255. Status : Failure
2014-07-10T04:42:00.320Z cpu8:33448)WARNING: ScsiPath: 4693: Plugin 'PowerPath' had an error (Failure) while claiming path 'vmhba1:C0:T0:L255'. Skipping the path.
2014-07-10T04:42:00.320Z cpu8:33448)ScsiClaimrule: 1362: Plugin PowerPath specified by claimrule 260 was not able to claim path vmhba1:C0:T0:L255. Busy
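
The log entries above reference a PowerPath claim rule (260 in this example). If paths are left unclaimed after PowerPath has been removed, it is worth checking that no PowerPath claim rules remain and then rescanning the adapters, for example:
esxcli storage core claimrule list
esxcli storage core adapter rescan --all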

The esxcfg-mpath -l command:
~ # esxcfg-mpath -l
fc.20000024ff3dd55b:21000024ff3dd55b-fc.500009730025c000:500009730025c1a5-
Runtime Name: vmhba1:C0:T0:L255
Device: No associated device
Device Display Name: No associated device
Adapter: vmhba1 Channel: 0 Target: 0 LUN: 255
Adapter Identifier: fc.20000024ff3dd55b:21000024ff3dd55b
Target Identifier: fc.500009730025c000:500009730025c1a5
Plugin: (unclaimed)
State: dead
Transport: fc

The same path details can also be viewed with: esxcli storage core path list