EMC VNXe – Shutdown Procedure


Shutdown via UNISPHERE using the Service Account

VNXe-Shutdown

Please read all notes provided in the ‘More Information..’ section highlighted in the above image before proceeding with shutdown.

Shutdown process as documented in the ‘More Information..’ section:
1. From Unisphere, select Settings > Service System.
2. Enter the Service password to access the Service System page.
3. Under Service Actions, select Shut Down System.
4. Click Execute service action to shut down the storage processors (SPs).
5. In the Service Confirmation dialog box, click OK.
6. Check the status of the shutdown process by looking at the SP LED indicators. The shutdown process is complete when all the Storage Processor Power LEDs are flashing green, the SP Status Fault LED is solid amber, the network management port LEDs are on, and all other Storage Processor LEDs are off.


Shutdown via SSH using the Service Account:

Shutdown command:
svc_shutdown --system-halt

service@(none) spb:~> svc_shutdown --system-halt
###############################################################################
WARNING: This action will shut down the system and you will have to manually
bring it back up afterwards.
###############################################################################
Enter "yes" if want to proceed with this action: yes
Normal Mode
1
1
Peer shutdown now in progress
System shutdown now in progress
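When shutting down several systems, the SSH invocation above can be scripted. A minimal sketch of building the remote command; it assumes `svc_shutdown` accepts its "yes" confirmation piped on stdin (verify this behaviour on your code level before relying on it):

```python
# Hypothetical helper: build the argv list for a non-interactive remote
# shutdown of a VNXe storage processor over SSH.
# Assumption: svc_shutdown reads the "yes" confirmation from stdin.
def build_shutdown_cmd(sp_address: str, user: str = "service") -> list[str]:
    """Return an argv list that runs svc_shutdown on the SP via ssh."""
    remote = "echo yes | svc_shutdown --system-halt"
    return ["ssh", f"{user}@{sp_address}", remote]

# Example (requires SSH access to the SP):
# subprocess.run(build_shutdown_cmd("10.0.0.1"))
```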

EMC VMAX – Data Device (TDAT) Draining From A Thin Pool

Draining a Data Device (TDAT) from a VMAX Thin Pool is a non-disruptive activity: TDATs can be removed from a Thin Pool without the need to unbind any TDEVs with allocated extents residing on the TDAT. There are many reasons why you might wish to perform such an action; in my case it was to re-allocate the TDATs to another Pool, reclaiming space in order to improve efficiency. Another example is replacing a drive (or drives) with a newer model (where higher capacity is required), which means moving any Production data off the existing drives in preparation for the replace operation.

The draining and removal process is essentially a 3 phase operation:
1. Disabling the TDAT initiates draining on the device. Once the TDAT is disabled within the Pool, used tracks on the device are moved non-disruptively to the remaining enabled devices in the Pool.
2. On completion of the draining process the TDAT device enters a disabled state.
3. Once in a disabled state the TDAT can be removed from the Thin Pool.

Performing the Drain Operation Via Unisphere
Navigating to the Thin Pool ‘Boot-Pool’, the 8 TDAT volumes are displayed. The objective in this example is to demonstrate removing 25% of the devices (0903-0904). Firstly, choosing volume 0904 and hitting the Disable button:
Uni3
This immediately places the volume into a Draining state and any data present on the device is rebalanced across the remaining enabled TDATs (08FD-0903) within the Thin Pool:
Uni4
The progress of the drain is visible from Unisphere, as the volume's %Used, GB Used and GB Free values are updated while the extents move to the other TDATs:
Uni5
Once the data volume completes the drain process and displays an Inactive state, it is safe to hit Remove.
Uni6

Performing the Drain Operation Via Symcli
List details of the Thin Pool, providing data device information:
symcfg show -pool Boot-Pool -thin -all
CLI

Use the symconfigure command to disable data device 0903:
symconfigure -cmd "disable dev 0903 in pool Boot-Pool, type=thin;" commit
Viewing the Pool detail again with the symcfg show -pool command, we can monitor the progress of the drain operation:
CLI3

Remove the data device once the drain operation is complete:
symconfigure -cmd "remove dev 0903 from pool Boot-Pool, type=thin;" commit
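The disable, poll, remove cycle above lends itself to scripting. A hedged sketch follows: the `symconfigure`/`symcfg` argument order matches the commands in this post, and the output parsing assumes a per-device line beginning with the device name and containing a state word (Enabled/Disabled/Draining/Inactive). Check the actual column layout of your Solutions Enabler version before using it:

```python
import re
import subprocess
import time

def device_state(pool_output: str, dev: str) -> str:
    """Return the state word on the line for device `dev` (assumed format)."""
    for line in pool_output.splitlines():
        if line.strip().startswith(dev):
            m = re.search(r"\b(Enabled|Disabled|Draining|Inactive)\b", line)
            if m:
                return m.group(1)
    return "Unknown"

def drain_and_remove(sid: str, pool: str, dev: str) -> None:
    """Disable a TDAT, wait for the drain to finish, then remove it."""
    subprocess.run(["symconfigure", "-sid", sid, "-cmd",
                    f"disable dev {dev} in pool {pool}, type=thin;",
                    "commit", "-nop"], check=True)
    while True:
        out = subprocess.run(
            ["symcfg", "-sid", sid, "show", "-pool", pool, "-thin", "-all"],
            capture_output=True, text=True).stdout
        if device_state(out, dev) in ("Disabled", "Inactive"):
            break
        time.sleep(60)  # draining large TDATs can take a long time
    subprocess.run(["symconfigure", "-sid", sid, "-cmd",
                    f"remove dev {dev} from pool {pool}, type=thin;",
                    "commit", "-nop"], check=True)

# Example (requires Solutions Enabler on the host):
# drain_and_remove("1234", "Boot-Pool", "0903")
```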
CLI5

EMC VMAX – Avoid Corruption of Unisphere Performance Database by AV Scans

Just a note of caution on how to prevent Anti Virus Scans from causing a possible corruption of the Unisphere for VMAX Performance Database.

If using the Unisphere for VMAX Performance feature then please exclude the following directories by editing your Anti Virus scan settings:

  1. The data directory and all its subdirectories (<InstallDirectory>\EMC\SMAS\jboss\standalone\data\msq\data)
  2. The temp directory (<InstallDirectory>\EMC\SMAS\jboss\standalone\data\msq\temp)

 

Note From EMC:

Failing to follow this advice may lead to corruption in the Performance database.

 

EMC Symmetrix VMAX – Viewing Port Details

When building a Vblock720 I use the following information to assist in Zoning and Masking of UCS Blades.

In order to gather the WWNs of all the VMAX front-end director ports, the following command is quite useful (see screenshot below):

symcfg -sid XXX list -fa all

vmax_fa_all1

I also find the following command very helpful:
symcfg -sid XXX list -fa all -port
This will give you the list of all front-end adapters on the VMAX, displaying both online and connection status details. From the screenshot below you can see that FA-5E ports P0 and P1 are both online and P0 is connected (in our case to a Cisco MDS 9513 Multilayer Director). You can also see that while both FA-7H ports are online, neither is connected to a port on the MDS, whereas both FA-7G ports are online and connected to ports on the MDS.

vmax_fa_port

In order to view the online status of all the Back-end director ports:
symcfg -sid XXX list -da all
From the output of this command you can also view the number of hyper volumes per port and how they are distributed across the back end.

vmax_da_all

If you wish to display the online status of both Front-end and Backend ports through a single command:
symcfg -sid XXX list -dir all

VMAX_DIR_ALL

View Port status and connection status of RDF Ports:
symcfg list -RA ALL -PORT

RDF_PORT

List logins shows the hosts logged into the specified port:
symaccess -sid XXX -dirport 1E:0 list logins
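When preparing zoning and masking spreadsheets for UCS blades, the front-end WWNs from `symcfg -sid XXX list -fa all` can be scraped with a short script. The line format assumed below (director, port, then a 16-hex-digit WWN somewhere on the line) is hypothetical; adjust the regex to the real output of your Solutions Enabler version:

```python
import re

# Assumed line shape, e.g.:  "FA-5E   0   Online   5000097300123456"
FA_LINE = re.compile(r"(FA-\d+[A-H])\s+(\d+)\s+.*?([0-9A-Fa-f]{16})")

def fa_wwns(listing: str) -> dict[str, str]:
    """Map 'FA-5E:0' style director:port keys to their WWNs."""
    ports = {}
    for line in listing.splitlines():
        m = FA_LINE.search(line)
        if m:
            ports[f"{m.group(1)}:{m.group(2)}"] = m.group(3).upper()
    return ports
```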

Using Unisphere for VMAX:

Front-End director ports:
Uni_FA

Backend director ports:
Here you see the 8 Director 10 ports; physically there are only 2 back-end ports on the director, each using QSFP to branch off into 4 connections to the system disk enclosures.
UNI_DA

RDF ports:
RDF_PORT_UNI

EMC VNXe Configuration Using Unisphere CLI (Part 1)

This is the first in a series of blog posts on configuring VNXe using the command line. All the configurations here will be performed using “uemcli”, which can be downloaded here. If you prefer to use the GUI then Henri has a very good series of blog posts here. The scripts defined here are very useful if, like me, you need to configure VNXe systems on a weekly basis. VNXe is the base storage for the Vblock VB100 series and is also used as the shared storage for management hosts in the VB300 and VB700 series.

The configuration steps outlined in Part 1 will be the following:
• Accept License Agreement
• Change Admin Password
• Create a New User
• Change the Service Password
• Commit IO Modules
• Perform a Healthcheck
• Code Upgrade
• Create a Storage Pool
• Add Hot Spare
• DNS Configuration
• NTP Configuration
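Every uemcli call in the steps below repeats the same destination and credential flags, so when scripting them it can help to wrap the common prefix in a small builder. A minimal sketch (the flags match the commands used in this post, but verify them against your uemcli version):

```python
import subprocess

def uemcli_cmd(dest: str, user: str, password: str, *args: str) -> list[str]:
    """Build the argv list for a uemcli call with the common flags prepended."""
    return ["uemcli", "-d", dest, "-u", f"Local/{user}", "-p", password, *args]

def run(dest: str, user: str, password: str, *args: str) -> str:
    """Execute a uemcli command and return its stdout."""
    return subprocess.run(uemcli_cmd(dest, user, password, *args),
                          capture_output=True, text=True, check=True).stdout

# Example -- accept the EULA, the first step below (requires uemcli installed):
# run("10.0.0.1", "admin", "Password123#", "/sys/eula", "set", "-agree", "yes")
```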

Accept License Agreement
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /sys/eula set -agree yes

Change Admin Password
First run the show command to get the -id of the user account to change. In this case we are changing the Admin password, whose account has the ID user_admin:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /user/account show
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /user/account -id user_admin set -passwd NewPassword -oldpasswd Password123#

Create a New User
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /user/account create -name newUser -type local -passwd Password -role administrator
The role for the account can be:
• administrator — Administrator
• storageadmin — Storage Administrator
• operator — Operator (view only)

Change the Service Password
The Service password is used for performing service actions on the VNXe.
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /service/user set -passwd newPassword -oldpasswd Password123#

Commit IO Modules
The following commits all uncommitted IO modules:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /env/iomodule commit
The following command displays a list of system IO modules:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /env/iomodule show

Perform a Healthcheck
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /sys/general healthcheck

Code Upgrade
In a dual-SP VNXe this is an NDU (non-disruptive upgrade): services fail over between SPs during the upgrade.
Perform a healthcheck of the system prior to upgrade and resolve any issues first.
First, upload the new code to the VNXe using the -upload switch before creating the upgrade session:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# -upload -f PathToSoftware\VNXe-MR4-upgrade-2.4.0.20932-RETAIL.tgz.bin.gpg upgrade
The following command displays details about the installed system software and about the uploaded upgrade candidate. We also need to run this command to get the -candId of the uploaded upgrade candidate:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /sys/soft/ver show
Now we create a session to upgrade the system software using candidate CAND_1:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /sys/soft/upgrade create -candId CAND_1
Status of Upgrade:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /sys/soft/upgrade show
Confirm software version:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /sys/soft/ver show
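During the NDU, the upgrade session can be polled until it reaches a terminal state. A hypothetical sketch that greps the `show` output for a completion status; the status strings here are assumptions, so check what your VNXe code level actually reports:

```python
import subprocess
import time

def upgrade_done(show_output: str) -> bool:
    """True once the session output reports a terminal status (assumed strings)."""
    return any(word in show_output for word in ("Completed", "Failed"))

def wait_for_upgrade(dest: str, user: str, password: str) -> None:
    """Poll /sys/soft/upgrade show until the session finishes."""
    while True:
        out = subprocess.run(
            ["uemcli", "-d", dest, "-u", f"Local/{user}", "-p", password,
             "/sys/soft/upgrade", "show"],
            capture_output=True, text=True).stdout
        if upgrade_done(out):
            break
        time.sleep(120)  # services fail over between SPs; this takes a while

# Example (requires uemcli installed):
# wait_for_upgrade("10.0.0.1", "admin", "Password123#")
```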

Create a Storage Pool
View the storage profiles. Storage profiles are preconfigured settings for creating storage pools based on RAID type, capacity and stripe length. Choose the storage profile that best suits the server workload:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/config/profile show
Next view details about disk groups on the system. We will need the disk group ID to create the pool from:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/config/dg show
Here we create the "VMWARE-NFS" pool for VMware, using 5 disks from the disk group disk_group_1 and the storage_cap_0 profile:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/config/pool create -name VMWARE-NFS -descr "VMware NFS Pool" -storProfile storage_cap_0 -diskGroup disk_group_1 -drivesNumber 5 -resType vmware -usage datastore
View the Pool configuration:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/config/pool show -detail

Add Hot Spare
The following assigns one hot spare to disk group disk_group_1:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /stor/config/dg -id disk_group_1 set -spares 1

DNS Configuration
The following command adds two DNS servers to the domain dcr.com. The servers are grouped by domain under the ID dcr.com:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/dns/domain create -name "dcr.com" -nameServer "10.0.0.2, 10.0.0.3"
List all DNS server domains:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/dns/domain show

NTP Configuration
The following commands create an NTP server record and then list the configured servers:
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/ntp/server create -server 10.0.0.4
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /net/ntp/server show

In the next post (part 2) I will show how to script the iSCSI and NFS server configurations and creation of datastores for each.