EMC VNX2 – Drive Layout (Guidelines & Considerations)

Applies only to VNX2 Systems.

CHOICES made in relation to the physical placement of drives within a VNX can have an impact on how the VNX performs. The intention here is to shed some light on how best to optimize the VNX by placing drives in their optimal physical locations within the array. The guidelines here deal with optimizing the back-end system resources. While these considerations and examples may help with choices around the physical location of drives, you should always work with an EMC certified resource when completing such an exercise.

VNX2Layout1

Maximum Available Drive Slots
You cannot exceed the maximum slot count; doing so will result in drives becoming unavailable. Drive form factor and DAE type are a consideration here in ensuring you do not exceed the stated maximum. The maximum slot count therefore dictates the maximum number of drives, and the overall capacity, a system can support.
VNX2Layout2

BALANCE
BALANCE is the key when designing the VNX drive layout:

Where possible, the best practice is to EVENLY BALANCE each drive type across all available back-end system BUSES. This results in the best utilization of system resources and helps to avoid potential system bottlenecks. VNX2 has no restrictions around using or spanning drives across Bus 0 Enclosure 0.
VNX2Layout3
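The examples later in this post work out this balancing by hand; the following is a minimal PowerShell sketch of the same arithmetic (the drive and bus counts below are placeholders, not a recommendation):

$driveCount = 20   # e.g. 20 FAST VP SSDs to be placed
$busCount   = 2    # e.g. a 2-BUS VNX5600
$perBus     = [math]::Floor($driveCount / $busCount)
$remainder  = $driveCount % $busCount
Write-Host "Place $perBus drives on each of the $busCount buses ($remainder left over to place manually)"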

DRIVE PERFORMANCE
These are rule of thumb figures which can be used as a guideline for each type of drive used in a VNX2 system.
Throughput (IOPS) figures are based on small block random I/O workloads:
VNX2Layout4

Bandwidth (MB/s) figures are based on large block sequential I/O workloads:
VNX2Layout5

Recommended Order of Drive Population:

1. FAST Cache
2. FLASH VP
3. SAS 15K
4. SAS 10K
5. NL-SAS

Physical placement should always begin at Bus 0 Enclosure 0 (0_0), and the first drives to be placed are always the fastest drives, following the order above. Start at the first available slot on each BUS and evenly balance the available Flash drives across the first slots of the first enclosure of each bus, beginning with the FAST Cache drives. This ensures that the Flash drives experience the lowest latency possible on the system and the greatest ROI is achieved.

FAST CACHE
FAST Cache drives are configured as RAID-1 mirrors and, again, it is good practice to balance the drives across all available back-end buses. The number of FAST Cache drives per back-end bus differs for each system, but ideally aim for no more than 8 drives per bus (including the spare). FAST Cache drives are extremely I/O intensive, and placing more than the recommended maximum on a bus may cause I/O saturation on that bus.
VNX2Layout6

Note: Do not mix different drive capacities for FAST Cache; use either all 100GB or all 200GB drives.
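To check what is currently configured, the FAST Cache state and member drives can be viewed with naviseccli (assuming the FAST Cache enabler is installed; SP_IP is a placeholder):

naviseccli -h SP_IP cache -fast -info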

There are also two types of SSD available for VNX2 systems:
• ‘FAST Cache SSDs’ are single-level cell (SLC) Flash drives that are targeted for use with FAST Cache. These drives are available in 100GB and 200GB capacities and can be used both as FAST Cache and as TIER-1 drives in a storage pool.
• ‘FAST VP SSDs’ are enterprise multi-level cell (eMLC) drives that are targeted for use as TIER-1 drives in a storage pool (not supported as FAST Cache drives). They are available in three capacities: 100GB, 200GB and 400GB.

More detailed post on FAST Cache: ‘EMC VNX – FAST Cache’

DRIVE FORM FACTOR
Drive form factor (2.5″ | 3.5″) is an important consideration. For example, take a 6-BUS system with 6 DAEs (one DAE per BUS), consisting of 2 x 2.5″ Derringer DAEs and 4 x 3.5″ Viper DAEs, as follows:
VNX2Layout7

MCx HOT SPARING CONSIDERATIONS
Best practice is to ensure 1 spare is available per 30 drives of each drive type. Where a VNX contains drives of the same type but with different speeds, form factors or capacities, these should ideally be placed on different buses.

Note: No spare is required for the Vault drives (0_0_0 – 0_0_3) if they are 300GB in size; however, if drives larger than 300GB are used and user LUNs are present on the Vault drives, then a spare is required.
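To review the hot spare policy the array is applying per drive type, the MCx hotsparepolicy command can be used (a hedged example; verify availability on your VNX OE release):

naviseccli -h SP_IP hotsparepolicy -list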

While all unconfigured drives in the VNX2 array are available to be used as a hot spare, a specific set of rules is used to determine the most suitable drive to use as a replacement for a failed drive:

1. Drive Type: All drives of a suitable type are gathered.
2. Bus: From those, MCx determines which drives are on the same bus as the failing drive.
3. Size: Following on from the Bus query, MCx then selects a drive of the same size, or a larger drive if none of the same size is available.
4. Enclosure: Finally, MCx analyses the results of the previous steps to check whether the enclosure containing the failing drive has a suitable replacement within the same DAE.

See previous post for more info: ‘EMC VNX – MCx Hot Sparing’


Drive Layout EXAMPLE 1:

VNX 5600 (2 BUS)

VNX2Layout8
FAST Cache:
1 x Spare, 8 x FAST Cache drives available.
8 / 2 BUSES = 4 FAST Cache drives per BUS
1 x 2.5″ SPARE placed on 0_0_24
———————————
FAST VP:
1 x Spare, 20 x FAST VP drives available.
20 / 2 BUSES = 10 per BUS
10 x 3.5″ placed on BUS 0 Enclosure 1
10 x 2.5″ placed on BUS 1 Enclosure 0
1 x 2.5″ SPARE placed on 1_0_24
———————————
VNX2Layout9


Drive Layout EXAMPLE 2:

VNX 5800 (6 BUS)
VNX2Layout10

VNX2Layout11


Drive Layout EXAMPLE 3:

VNX 8000 (16 BUS)

VNX2Layout12

VNX2Layout12a

Useful Reference:
EMC VNX2 Unified Best Practices for Performance

EMC VNX – Registering RecoverPoint Initiators

At this stage the VNX will already have been zoned to the RPAs. For example purposes, the configuration below has RPA1 port 3 zoned to VNX SP A&B Port 4 on Fabric-A, and RPA1 port 1 zoned to VNX SP A&B Port 5 on Fabric-B. Note: In a synchronous RecoverPoint solution all 4 RPA ports should be zoned.

Parameters as follows:
Initiator Type = RecoverPoint Appliance (-type 31)
Failover Mode = 4 (ALUA – this mode allows the initiators to send I/O to a LUN regardless of which VNX Storage Processor owns the LUN)
RPA1_IP = IP Address of RPA1
RPA1_NAME = Appropriate name for RPA1 (E.g. RPA1-SITE1)

RPA WWNs can be recognized in the SAN by their 50:01:24:81:….. prefix.
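Before registering the paths, you can verify that the RPA initiators have logged in to the expected VNX SP ports by listing the initiator records and checking for the RPA WWN prefix noted above (a hedged example):

naviseccli -User sysadmin -Password password -Scope 0 -h SP_IP port -list -hba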

Example:
Create a storage group for all RPAs on Site1:
naviseccli -User sysadmin -Password password -Scope 0 -h SP_IP storagegroup -create -gname RPA-Site1-SG

##############
## FABRIC A: ##
##############

RPA1-Port-3 initiator registered to both VNX SP A&B Port 4:

naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -setpath -gname RPA-Site1-SG -hbauid 50:01:24:80:00:64:1C:E3:50:01:24:81:00:64:1C:E3 -type 31 -ip RPA1_IP -host RPA1_NAME -sp a -spport 4 -failovermode 4 -o

naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -setpath -gname RPA-Site1-SG -hbauid 50:01:24:80:00:64:1C:E3:50:01:24:81:00:64:1C:E3 -type 31 -ip RPA1_IP -host RPA1_NAME -sp b -spport 4 -failovermode 4 -o

##############
## FABRIC B: ##
##############

RPA1-Port-1 initiator registered to both VNX SP A&B Port 5:

naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -setpath -gname RPA-Site1-SG -hbauid 50:01:24:80:00:64:1C:E1:50:01:24:81:00:64:1C:E1 -type 31 -ip RPA1_IP -host RPA1_NAME -sp a -spport 5 -failovermode 4 -o

naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -setpath -gname RPA-Site1-SG -hbauid 50:01:24:80:00:64:1C:E1:50:01:24:81:00:64:1C:E1 -type 31 -ip RPA1_IP -host RPA1_NAME -sp b -spport 5 -failovermode 4 -o
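Once all four paths are registered, list the storage group to verify the RPA host entries and any LUNs subsequently added to it:

naviseccli -User sysadmin -Password sysadmin -Scope 0 -h SP_IP storagegroup -list -gname RPA-Site1-SG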

Registered Initiators displayed in Unisphere:

VNX-RP-INIT1

EMC VNX – SMI-S Configuration & Discovery

The following are some configuration notes for configuring SMI-S to allow communication with the VNX Storage Processors; SMI-S can then be leveraged by, for example, VCE Vision or ViPR to configure and report on the VNX array. Before proceeding, ensure you have both VNX Storage Processor A&B IP addresses to hand; the SMI-S host will use these IPs for out-of-band communication over IP with the VNX. The EMC SMI-S provider is included as part of the ‘Solutions Enabler with SMIS’ install package, which can be downloaded from ‘support.emc.com’.

Begin by installing the SMI-S Provider, ensuring you select the ‘Array provider’ (Windows does not require the Host provider) and choose the option for SMISPROVIDER_COMPONENT:
VisionVMAX1

From the Windows services.msc console, check that both the ‘ECOM’ and ‘storsrvd’ services are set to automatic and are in a running state:
VisionVMAX2
Check that the EMC storsrvd daemon is installed and running from a Windows cmd prompt using stordaemon.exe:
stordaemon install storsrvd -autostart
stordaemon start storsrvd
stordaemon.exe list

VisionVMAX3
Alternatively, using the SC (service control) command you can query, start and configure the ECOM and storsrvd services:
sc query ECOM.exe
sc query storsrvd
sc start ECOM.exe
sc start storsrvd
sc config ECOM.exe start=auto
sc config storsrvd start=auto

VisionVMAX4

Run netstat -a and check that the host is listening on ports 5988 and 5989:
VisionVMAX5
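To narrow the netstat output down to just the ECOM listeners, the following works from the same prompt:

netstat -ano | findstr "5988 5989"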

UPDATE ENVIRONMENT VARIABLES:
Add the SYMCLI and ECOM installation directory paths (for example DRIVE:\Program Files\EMC\SYMCLI\bin and DRIVE:\Program Files\EMC\ECIM\ECOM\BIN) to the list of system paths:
VisionVMAX2a
Or use the Windows CLI to add the SYMCLI and ECOM directories to the PATH environment variable:

setx /M PATH "%PATH%;C:\Program Files\EMC\SYMCLI\bin;C:\Program Files\EMC\ECIM\ECOM\bin"

If you experience issues, such as the ECOM service failing to start, it is worth rebooting the management server at this stage.

ECOM SERVER: ADD A NEW SMI-S Provider User
Provided all the validations are successful, proceed to log in to the ECOM server and create the user you would like to use for (Vision/ViPR) connectivity:
Open https://localhost:5989/ecomconfig
Login with default credentials of: admin / #1Password
VisionVMAX6a

Select the option to add a new user and create the Vision user with administrator role and scope local:
Visionvmax7ab
VisionVMAX8ab

Windows Firewall
If the Windows firewall is enabled then rules will need to be created to allow the ECOM ports (TCP 5988 and 5989) and the SLP port (UDP 427). For example, using the Windows netsh command line to create rules for SLP and ECOM:

netsh advfirewall firewall add rule name="SLP" dir=in protocol=UDP localport=427 action=allow

netsh advfirewall firewall add rule name="ECOM" dir=in protocol=TCP localport=5988-5989 action=allow

netsh advfirewall firewall show rule name=SLP
netsh advfirewall firewall show rule name=ECOM

VisionVMAX11

Discover and Add the VNX using TestSMIProvider:
Confirm communication to the VNX from the SMI-S host by running the navicli getagent cmd on both VNX Storage Processors from the Element Manager cmd prompt:
naviseccli -h SPA-IP getagent
choose option 2 if prompted
naviseccli -h SPB-IP getagent
choose option 2 if prompted

Or using credentials:
naviseccli -h SPIP -user sysadmin -password sysadmin -scope 0 getagent

Open a Windows cmd prompt session as an admin user; if the environment variable has not been set then you will need to ‘cd’ to D:\Program Files\EMC\SYMCLI\bin:
symcfg auth add -host SPA_IP -username sysuser -password syspw
symcfg auth add -host SPB_IP -username sysuser -password syspw

Create a text file, for example called SPIP.txt, that contains the IP addresses of SP A&B. Then run the following commands to discover and list the VNX:
symcfg discover -clariion -file D:\spip.txt
symcfg list -clariion
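As a quick sketch, the SPIP.txt file can also be created from PowerShell with one SP address per line (SPA_IP/SPB_IP are placeholders; confirm the expected file format against your Solutions Enabler documentation):

Set-Content -Path "D:\spip.txt" -Value @("SPA_IP", "SPB_IP")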

Again from a Windows cmd prompt session as an admin user (if the environment variable has not been set then you will need to ‘cd’ to c:\Program Files\EMC\ECIM\ECOM\BIN), type TestSMIProvider.exe at the prompt. From here choose all defaults except for the Vision user and password created through the ECOM console:
VisionVMAX9

At the prompt type ‘addsys’ to confirm connectivity between the VNX Array and the SMI-S Host:


(localhost:5988) ? addsys
Add System {y|n} [n]: y

ArrayType (1=Clar, 2=Symm) [1]:
One or more IP address or Hostname or Array ID

Elements for Addresses
IP address or hostname or array id 0 (blank to quit): SPA_IP
IP address or hostname or array id 1 (blank to quit): SPB_IP
IP address or hostname or array id 2 (blank to quit):
Address types corresponding to addresses specified above.
(1=URL, 2=IP/Nodename, 3=Array ID)
Address Type (0) [default=2]:
Address Type (1) [default=2]:
User [null]: sysuser
Password [null]: syspw
++++ EMCAddSystem ++++
OUTPUT : 0
Legend:0=Success, 1=Not Supported, 2=Unknown, 3=Timeout, 4=Failed
5=Invalid Parameter
4096=Job Queued, 4097=Size Not Supported
Note: Not all above values apply to all methods – see MOF for the method.

System : //SPA_IP/root/emc:Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM00100000123"

In 12.468753 Seconds

Please press enter key to continue…

At the prompt type ‘dv’ to confirm connectivity between the VNX and SMI-S Host:
VisionVMAX10

For any troubleshooting please refer to: ‘C:\Program Files\EMC\ECIM\ECOM\log’

Note: When configuring VCE Vision, ensure you use the ‘SMI-S Host’ IP address for the VNX Block entries in the Vblock.xml configuration file; the NAS portion of the VNX uses the Control Station IP addresses for communication, which have ECOM configured by default.

How to remove VNX systems using SMI-S “remsys” command:

  1. Log into the SMI-S Provider server
  2. Open a command prompt (cmd).
  3. Change (cd) to C:\Program Files\EMC\ECIM\ECOM\bin
  4. Run TestSmiProvider.exe
  5. Enter ein
  6. Enter Clar_StorageSystem
  7. Copy the line that specifies the VNX system you want to remove:
    Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM001xxxxxxxx"
  8. Enter remsys
  9. Enter Y
  10. Paste the line specifying the VNX system you want to remove that you copied in the preceding step.
  11. Enter Y
  12. Run a dv command to confirm the VNX system has been removed.

Built with EMC SMI-S Provider: V4.6.2
Namespace: root/emc
repeat count: 1
(localhost:5988) ? remsys
Remove System {y|n} [n]: y
System's ObjectPath[null]: Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM001xxxxxxxx"
About to delete system Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+CKM001xxxxxxxx"
Are you sure {y|n} [n]: y

 

EMC VNX – Batch Enabler Installation (By Software Suite)

————————————————————————————–
Reference: VNX® Command Line Interface Reference for Block:

The naviseccli ndu command -install function transfers one or more SP driver packages
from a user-accessible file system to the system private storage LUN (PSM). Media should
be present before you issue this command.

Preinstallation validation checks identify unsupported or unsafe installation conditions.
You initiate the validation checks functionality when you issue the ndu -install command.
The validation checks run in the background, prior to installing the software. If a validation
check fails, the CLI displays the error and terminates the installation. You can choose to
display all validation checks as the functionality executes by specifying the -verbose switch,
otherwise the CLI only displays failures that prevent installation.

When you install new SP software using the CLI, the only way to determine when the
installation is finished is to issue periodic ndu -status commands until the CLI shows the
operation is completed.

The software prompts for information as needed; then it installs or upgrades the specified
software packages and restarts the SPs. The SPs then load and run the new packages. After
successful installation, it deletes the files from the system.
You can install more than one package with one ndu command.

————————————————————————————–

Software suites available for VNX2 and their associated Enablers:

VNXEnablers

ENABLER INSTALL PROCEDURE USING CLI

Check the list of all ENABLERS currently installed on the VNX:
naviseccli -h SP_IP ndu -list

A series of rule checks needs to be performed in advance; correct any rule failures before proceeding:
naviseccli -h SP_IP ndu -runrules -listrules

Your configuration will run the following rules
===============================================
Host Connectivity
Redundant SPs
No Thin Provisioning Transitions
Version Compatibility
No Active Replication I/O
Acceptable Processor Utilization
Statistics Logging Disabled
No Transitions
All Packages Committed
Special Conditions
No Trespassed LUNs
No System Faults
No Interrupted Operations
No Incompatible Operations
FAST Cache Status
No Un-owned LUNs

Run through the Pre-installation rules to ensure the success of this software upgrade:
naviseccli -h SP_IP ndu -runrules -verbose

A common result is a warning for trespassed LUNs:
RULE NAME: No Trespassed LUNs
RULE STATUS: Rule has warning.
RULE DESCRIPTION: This rule checks for trespassed LUNs on the storage system.
A total of 1 trespassed LUNs were found.
RULE INSTRUCTION: If these LUNs are not trespassed back, connectivity will be disrupted.

To remediate this rule failure and change the Current Owner, you will need to execute a trespass command on the LUN using navicli, or right-click the LUN in Unisphere and select the trespass option:
naviseccli -h SP_IP trespass lun 1
If multiple LUNs need to change ownership, running the trespass mine command against an SP will trespass back all the LUNs that the SP has DEFAULT ownership of. For example, to trespass LUNs with a Default Owner of ‘SP B’ which are currently owned by ‘SP A’:
naviseccli -h SPB_IP trespass mine
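To confirm which SP currently owns a LUN before and after the trespass (a hedged example using the getlun owner switch):

naviseccli -h SP_IP getlun 1 -owner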

Statistics Logging Disabled : Rule failed.
If this rule fails, disable statistics logging before proceeding:
naviseccli -h SP_IP setstats -off

Local Protection Suite Enablers:
naviseccli -h SP_IP ndu -runrules "SnapViewEnabler-01.01.5.002-xpfree.ena" "VNXSnapshot-01.01.5.001.ena" "RPSplitterEnabler-01.01.5.002.ena" -verbose

naviseccli -h SP_IP ndu -install "SnapViewEnabler-01.01.5.002-xpfree.ena" "VNXSnapshot-01.01.5.001.ena" "RPSplitterEnabler-01.01.5.002.ena" -delay 360 -force -gen -verbose

Remote Protection Suite Enablers:
naviseccli -h SP_IP ndu -runrules "MirrorViewEnabler-01.01.5.002-xpfree.ena" "MVAEnabler-01.01.5.002-xpfree.ena" "RPSplitterEnabler-01.01.5.002.ena" -verbose

naviseccli -h SP_IP ndu -install "MirrorViewEnabler-01.01.5.002-xpfree.ena" "MVAEnabler-01.01.5.002-xpfree.ena" "RPSplitterEnabler-01.01.5.002.ena" -delay 360 -force -gen -verbose

FAST Suite Enablers:
naviseccli -h SP_IP ndu -runrules "FASTCacheEnabler-01.01.5.008.ena" "FASTEnabler-01.01.5.008.ena" -verbose

naviseccli -h SP_IP ndu -install "FASTCacheEnabler-01.01.5.008.ena" "FASTEnabler-01.01.5.008.ena" -delay 360 -force -gen -verbose

Additional Enabler Software:
naviseccli -h SP_IP ndu -runrules "ThinProvisioning-01.01.5.008.ena" "CompressionEnabler-01.01.5.008.ena" "DeduplicationEnabler-01.01.5.001.ena" "DataAtRestEncryptionEnabler-01.01.4.001-armada54_free.ena" -verbose

naviseccli -h SP_IP ndu -install "ThinProvisioning-01.01.5.008.ena" "CompressionEnabler-01.01.5.008.ena" "DeduplicationEnabler-01.01.5.001.ena" "DataAtRestEncryptionEnabler-01.01.4.001-armada54_free.ena" -delay 360 -force -gen -verbose

Monitoring the progress of the installation:
naviseccli -h SP_IP ndu -status
Is Completed: NO
Status: Activating software on primary SP
Operation: Install

naviseccli -h SP_IP ndu -status
Is Completed: NO
Status: Completing install on secondary SP
Operation: Install

naviseccli -h SP_IP ndu -status
Is Completed: YES
Status: Operation completed successfully
Operation: Install
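Because the CLI gives no completion notification, the ndu -status check has to be repeated until ‘Is Completed: YES’ is returned. A minimal PowerShell polling sketch (assuming naviseccli is in the PATH and SP_IP is replaced with a real SP address):

do {
    $status = naviseccli -h SP_IP ndu -status | Out-String
    Write-Host $status
    Start-Sleep -Seconds 60      # re-check every minute
} while ($status -notmatch "Is Completed:\s+YES")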

naviseccli -h SP_IP ndu -list -name -RPSplitterEnabler

Commit Required: NO
Revert Possible: NO
Active State: YES
Is installation completed: YES
Is this System Software: NO

Re-enable stats logging:
naviseccli -h SP_IP setstats -on

If uninstall required:
naviseccli -h SP_IP ndu -messner -uninstall -RPSplitterEnabler -delay 360
Uninstall operation will uninstall
-RPSplitterEnabler
from both SPs Set NDU delay with interval time of 360 secs.Do you still want to continue. (y/n)? y

EMC VNX – RecoverPoint Enabler Installation

Installing the RecoverPoint Enabler using NAVICLI:

Check the list of all ENABLERS currently installed on the VNX:
naviseccli -h SP_IP ndu -list

A series of rule checks needs to be performed in advance; correct any rule failures before proceeding:
naviseccli -h SP_IP ndu -runrules -listrules

Your configuration will run the following rules
===============================================
Host Connectivity
Redundant SPs
No Thin Provisioning Transitions
Version Compatibility
No Active Replication I/O
Acceptable Processor Utilization
Statistics Logging Disabled
No Transitions
All Packages Committed
Special Conditions
No Trespassed LUNs
No System Faults
No Interrupted Operations
No Incompatible Operations
FAST Cache Status
No Un-owned LUNs

Run through the Pre-installation rules to ensure the success of this software upgrade:
naviseccli -h SP_IP ndu -runrules -verbose

A common result is a warning for trespassed LUNs:
RULE NAME: No Trespassed LUNs
RULE STATUS: Rule has warning.
RULE DESCRIPTION: This rule checks for trespassed LUNs on the storage system.
A total of 1 trespassed LUNs were found.
RULE INSTRUCTION: If these LUNs are not trespassed back, connectivity will be disrupted.

To remediate this rule failure and change the Current Owner, you will need to execute a trespass command on the LUN using navicli, or right-click the LUN in Unisphere and select the trespass option:
naviseccli -h SP_IP trespass lun 1
If multiple LUNs need to change ownership, running the trespass mine command against an SP will trespass back all the LUNs that the SP has DEFAULT ownership of. For example, to trespass LUNs with a Default Owner of ‘SP B’ which are currently owned by ‘SP A’:
naviseccli -h SPB_IP trespass mine

Statistics Logging Disabled : Rule failed.
If this rule fails, disable statistics logging before proceeding:
naviseccli -h SP_IP setstats -off

Confirm all rule checks for RPSplitterEnabler are met:
naviseccli -h SP_IP ndu -runrules c:\VNX\Enablers\RPSplitterEnabler-01.01.5.002.ena -verbose

Running install rules...
===============================================
Version Compatibility : Rule passed.
Redundant SPs : Rule passed.
Acceptable Processor Utilization : Rule passed.
No Trespassed LUNs : Rule passed.
No Transitions : Rule passed.
No System Faults : Rule passed.
All Packages Committed : Rule passed.
Special Conditions : Rule passed.
Statistics Logging Disabled : Rule passed.
Host Connectivity : Rule passed.
No Un-owned LUNs : Rule passed.
No Active Replication I/O : Rule passed.
No Thin Provisioning Transitions : Rule passed.
No Incompatible Operations : Rule passed.
No Interrupted Operations : Rule passed.
FAST Cache Status : Rule passed.

Install the RPSplitterEnabler:
naviseccli -h SP_IP ndu -install "c:\VNX Enablers\RPSplitterEnabler-01.01.5.002.ena" -delay 360 -force -gen -verbose

Name of the software package: -RecoverpointSplitter
Already Installed Revision NO
Installable YES
Disruptive upgrade: NO
NDU Delay: 360 secs

Monitoring the progress of the installation:
naviseccli -h SP_IP ndu -status
Is Completed: NO
Status: Activating software on primary SP
Operation: Install

naviseccli -h SP_IP ndu -status
Is Completed: NO
Status: Completing install on secondary SP
Operation: Install

naviseccli -h SP_IP ndu -status
Is Completed: YES
Status: Operation completed successfully
Operation: Install

naviseccli -h SP_IP ndu -list -name -RPSplitterEnabler

Commit Required: NO
Revert Possible: NO
Active State: YES
Is installation completed: YES
Is this System Software: NO

Re-enable stats logging:
naviseccli -h SP_IP setstats -on

If uninstall required:
naviseccli -h SP_IP ndu -messner -uninstall -RPSplitterEnabler -delay 360
Uninstall operation will uninstall
-RPSplitterEnabler
from both SPs Set NDU delay with interval time of 360 secs.Do you still want to continue. (y/n)? y

Installing the RecoverPoint Enabler via Unisphere Service Manager (USM):

(Screenshots Enabler_Install_USM1 to Enabler_Install_USM14 step through the USM software installation wizard.)

EMC VNX – List of Useful NAS Commands

Verify ‘NAS’ Services are running:
Login to the Control Station as ‘nasadmin’ and issue the cmd /nas/sbin/getreason from the CS console. The reason code output should be as follows (see detailed list of Reason Codes below):
10 - slot_0 primary control station
11 - slot_1 secondary control station
5 - slot_2 contacted
5 - slot_3 contacted

Complete a full ‘Health Check’:
/nas/bin/nas_checkup
To check the output log:
cd /nas/log/
ls
cat /nas/log/checkup-rundate.log

Confirm the EMC NAS version installed and the model name:
/nasmcd/bin/nas_version
/nas/sbin/model

Stop ‘NAS’ Services:
/sbin/service nas stop
Start ‘NAS’ Services:
/sbin/service nas start

Check running status of the ‘DATA Movers’ and view which slot is active/standby:
nas_server -info -all

Verify connectivity to the VNX storage processors (SPs) from the Control Stations:
/nas/sbin/navicli -h SPA_IP domain -list
/nas/sbin/navicli -h SPB_IP domain -list

Confirm the VMAX/VNX is connected to the NAS:
nas_storage -check -all
nas_storage -list

View VNX NAS Control LUN Storage Group details:
/nas/sbin/navicli -h SP_IP storagegroup -list -gname ~filestorage

List the disk table to ensure all of the Control Volumes have been presented to both Data Movers:
nas_disk -list

Check the File Systems:
df -h

View detailed back-end storage information:
/nas/bin/nas_storage -info -all

View trunking devices created (LACP,Ethc,FSN):
server_sysconfig server_2 -virtual
Example view interface name “LACP_NAS”:
server_sysconfig server_2 -virtual -info LACP_NAS
server_2 :
*** Trunk LACP_NAS: Link is Up ***
*** Trunk LACP_NAS: Timeout is Short ***
*** Trunk LACP_NAS: Statistical Load Balancing is IP ***
Device Local Grp Remote Grp Link LACP Duplex Speed
------------------------------------------------------------------------
fxg-1-0 10002 51840 Up Up Full 10000 Mbs
fxg-1-1 10002 51840 Up Up Full 10000 Mbs
fxg-2-1 10002 51840 Up Up Full 10000 Mbs
fxg-2-0 10002 51840 Up Up Full 10000 Mbs

Check Code Levels:
List the datamovers: nas_server -list
Check the DART code installed on the Data Movers: server_version ALL
Check the NAS code installed on the Control Station: nas_version

View Network Configuration:
To display the interface parameters on the Control Station and the Data Movers:
Control Station: /sbin/ifconfig (eth3 is the mgmt interface)
Data Movers: server_ifconfig server_2 -all
cat ifcfg-eth3

View VNX SP IP Addresses from the CS console:
grep SP /etc/hosts | grep A_
grep SP /etc/hosts | grep B_

Verify Control Station Comms:
/nas/sbin/setup_enclosure -checkSystem

Confirm the unified FLAG is set:
/nas/sbin/nas_hw_upgrade -fc_option -enable

Date & Time:
Control Station: date
Data Movers: server_date ALL

Check IP & DNS info on the CS/DM:
nas_cs -info
server_dns ALL

Log Files:
Log file location: /var/log/messages
Example of NAS services starting successfully:
grep -A10 "Starting NAS services" /var/log/messages*
Output:
Dec 8 19:07:27 emcnas_i0 S95nas: Starting NAS services
Dec 8 19:07:46 emcnas_i0 EMCServer: nas_mcd: MCD will monitor CS IPMI connection.
Dec 8 19:08:46 emcnas_i0 EMCServer: nas_mcd: slot 0 missed 10 heartbeats from slot 1.
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Install Manager is running on slot 0, skipping slot 1 reboot
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Slot 0 becomes primary due to timeout
Dec 8 19:08:52 emcnas_i0 mcd_helper: All NBS devices are up
Dec 8 19:09:08 emcnas_i0 kernel: kjournald starting. Commit interval 5 seconds

Check the Data Mover Logs:
server_log server_2

Failing over a Control Station:
Failover:
/nas/sbin/cs_standby -failover
Takeover:
/nasmcd/sbin/cs_standby -takeover
Or reboot:
nas_cs -reboot

Shutdown control station:
/sbin/shutdown -h now

Power off CS1 from CS0:
/nas/sbin/t2reset pwroff -s 1

List on which VMAX3 directors each CS and DM are located:
nas_inventory -list

List Datamover PARAMETERS:
/nas/bin/server_param server_2 -info
/nas/bin/server_param server_3 -info
/nas/bin/server_param server_2 -facility -all -list
/nas/bin/server_param server_3 -facility -all -list

Determine the failover status of the Blades (Datamovers):
/nas/bin/nas_server -info -all

Initiate a manual failover of server_2 to the standby Datamover:
server_standby server_2 -activate mover

List the status of the Datamovers:
nas_server -list

Review the information for server_2:
nas_server -info server_2
All DMs: nas_server -info ALL

Shutdown Datamover (Xblade):
/nas/bin/server_cpu server_2 -halt now

Power on the Datamover (Xblade):
/nasmcd/sbin/t2reset pwron -s 2

Restore the original primary Datamover:
server_standby server_2 -restore mover

To monitor an immediate cold restart of server_2:
server_cpu server_2 -reboot cold -monitor now
A cold reboot or a hardware reset shuts down the Data Mover completely before restarting, including a Power on Self Test (POST).

To monitor an immediate warm restart of server_2:
server_cpu server_2 -reboot -monitor now
A warm reboot or a software reset performs a partial shutdown of the Data Mover, and skips the POST after restarting. A software reset is faster than the hardware reset.

Clean Shutdown:
Shutdown Control Stations and DATAMovers:
/nasmcd/sbin/nas_halt -f now
The halt is finished when the message ‘exited on signal’ appears.

Powerdown Entire VNX Including Storage Processors:
nas_halt -f -sp now

Check if Product Serial Number is Correct:
/nasmcd/sbin/serial -db_check
Remove inconsistency between the db file and the enclosures:
/nasmcd/sbin/serial -repair

List All Hardware components by location:
nas_inventory -list -location
nas_inventory -list -location | grep "DME 0 Data Mover 2 IO Module"

Use location address to view specific component details:
nas_inventory -info "system:VNX5600:CKM001510001932007|enclosure:xpe:0|mover:VNX5600:2|iomodule::1"

List of Reason Codes:
0 – Reset (or unknown state)
1 – DOS boot phase, BIOS check, boot sequence
2 – SIB POST failures (that is, hardware failures)
3 – DART is loaded on Data Mover, DOS boot and execution of boot.bat, boot.cfg.
4 – DART is ready on Data Mover, running, and MAC threads started.
5 – DART is in contact with Control Station box monitor.
6 – Control Station is ready, but is not running NAS service.
7 – DART is in panic state.
9 – DART reboot is pending or in halted state.
10 – Primary Control Station reason code
11 – Secondary Control Station reason code
13 – DART panicked and completed memory dump (single Data Mover configurations only, same as code 7, but done with dump)
14 – This reason code can be set for the Blade for any of the following:
• Data Mover enclosure-ID was not found at boot time
• Data Mover’s local network interface MAC address is different from MAC address in configuration file
• Data Mover’s serial number is different from serial number in configuration file
• Data Mover was PXE booted with install configuration
• SLIC IO Module configuration mismatch (Foxglove systems)
15 – Data Mover is flashing firmware. DART is flashing BIOS and/or POST firmware. Data Mover cannot be reset.
17 – Data Mover Hardware fault detected
18 – DM Memory Test Failure. BIOS detected memory error
19 – DM POST Test Failure. General POST error
20 – DM POST NVRAM test failure. Invalid NVRAM content error (checksum, WWN, etc.)
21 – DM POST invalid peer Data Mover type
22 – DM POST invalid Data Mover part number
23 – DM POST Fibre Channel test failure. Error in blade Fibre connection (controller, Fibre discovery, etc.)
24 – DM POST network test failure. Error in Ethernet controller
25 – DM T2NET Error. Unable to get blade reason code due to management switch problems.

EMC VNX – ArrayConfig & SPCollect (Powershell Script)

——————————————————————-
Reference: VNX® Command Line Interface Reference for Block:

SPCollect : The naviseccli spcollect command selects a collection of system log files and places them in a single .zip file on the system. You can retrieve the file from the system using the managefiles command. Important: The SPCollect functionality can affect system performance (may degrade system performance).

ArrayConfig : The arrayconfig -capture command queries the system for its configuration along with I/O port configuration information. When issued, the command will capture a system’s essential configuration data. The information is formatted and stored on the client workstation. This generated file can be used as a template to configure other systems or rebuild the same system if the previous configuration is destroyed.
——————————————————————-
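For reference, the same data the script gathers can be collected manually with a few naviseccli commands (the SP address, credentials and paths below are placeholders):

naviseccli -user sysadmin -password password -scope 0 -h SPA_IP arrayconfig -capture -output C:\VNX\config_backup.xml
naviseccli -user sysadmin -password password -scope 0 -h SPA_IP spcollect -messner
naviseccli -user sysadmin -password password -scope 0 -h SPA_IP managefiles -list
naviseccli -user sysadmin -password password -scope 0 -h SPA_IP managefiles -retrieve -path C:\VNX -file <name of the latest data.zip from the list> -o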

Using native Navi commands integrated with PowerShell, this script automates the process of backing up the current VNX configuration along with the latest SPCollect log files. You will just need to complete a few simple user entries:
◊ SP ‘A’&’B’ IP Addresses
◊ Username & Password
◊ Backup Directory
The script will automatically create a sub-directory in the backup location provided. For example, if you input a backup directory of C:\VNX, this will result in a backup location of C:\VNX\VNXserial_timeDate.

Example Script Input:
SPCScript_1

Expected Script Output:
SPCScript_2

The backup directory location will automatically open on completion of the script:
SPCScript_3

Download HERE and replace the .doc extension with .ps1, or use the full text format below:

 

############################
#
# Reference: VNX CLI Docs
# Script: VNX BACKUPS
# 	
# Date: 2015-01-23 14:30:00										 			 
#
# Version Update:                                         
# 1.0 David Ring            	
#					 
############################

######## Banner ########
Write-Host " "
Write-Host "#######################################"
Write-Host "## VNX Configuration & LOGS Backup  ##"
Write-Host "#######################################"


### VNX SP IP's, User/PW & Backup Location ###
$SPAIP = Read-Host 'IP Address for Storage Processor A:'
$SPBIP = Read-Host 'IP Address for Storage Processor B:'
$User = Read-Host 'VNX Username:'
$Password = Read-Host 'VNX Password:'
$BackupLocation = Read-Host "Backup Location:(A sub-dir with the current Time & Date will be created):"

$ArrayConfig = (naviseccli -user $User -password $Password -scope 0 -h $SPAIP getagent | Select-String "Serial No:")
$ArrayConfig = $ArrayConfig -replace "Serial No:",""
$ArrayConfig = $ArrayConfig -replace "           ",""

$BackupLocation = (join-path -Path $BackupLocation -ChildPath ($ArrayConfig +"_"+ "$(date -f HHmmddMMyyyy)"))	
IF(!(Test-Path "$BackupLocation")){new-item "$BackupLocation" -ItemType directory | Out-Null}
$BackupLocation =  "`"$BackupLocation`""


Write-Host "Storage Processor 'A':" $SPAIP
Write-Host "Storage Processor 'B':" $SPBIP
Write-Host "VNX Username:" $User
Write-Host "VNX Password:" $Password
Write-Host "VNX Serial Number:" $ArrayConfig
Write-Host "Backup Location Entered:" $BackupLocation

Start-Sleep -s 10


$BackupName = $ArrayConfig+"_"+$(date -f HHmmddMMyyyy)+".xml" ; naviseccli -user $User -password $Password -scope 0 -h $SPAIP arrayconfig -capture -output $BackupLocation"\"$BackupName

Write-Host $ArrayConfig "Configuration Data Has Been Backed Up In XML Format!"

Start-Sleep -s 5

### Gather & Retrieve SP Collects for both Storage Processors ###
Write-Host "Now Generating Fresh Storage Processor 'A' & 'B' Collects!"
$GenerateSPA = naviseccli -user $User -password $Password -scope 0 -h $SPAIP spcollect -messner  
$GenerateSPB = naviseccli -user $User -password $Password -scope 0 -h $SPBIP spcollect -messner
Start-Sleep -s 10


### Storage Processor 'A' LOG Collection ###

## WHILE SP_A '*RUNLOG.TXT' FILE EXISTS THEN HOLD ...RESCAN EVERY 90 SECONDS ##
Do {
$listSPA = naviseccli -user $User -password $Password -scope 0 -h $SPAIP managefiles -list | select-string "_runlog.txt"
$listSPA
Start-Sleep -s 90
Write-Host "Generating Log Files For Storage Processor 'A' Please Wait!"
}
While ($listSPA -like '*runlog.txt')  

Write-Host "Generation of SP-'A' Log Files Now Complete! Proceeding with Backup."

Start-Sleep -s 15

$latestSPA = naviseccli -user $User -password $Password -scope 0 -h $SPAIP managefiles -list | Select-string "data.zip" | Select-Object -Last 1
$latestSPA = $latestSPA -split "  "; $latestSPA=$latestSPA[6]
$latestSPA
$BackupSPA = naviseccli -user $User -password $Password -scope 0 -h $SPAIP managefiles -retrieve -path $BackupLocation -file $latestSPA -o

Start-Sleep -s 10


### Storage Processor 'B' LOG Collection ###

## WHILE SP_B '*RUNLOG.TXT' FILE EXISTS THEN HOLD ...RESCAN EVERY 15 SECONDS ##
Do {
$listSPB = naviseccli -user $User -password $Password -scope 0 -h $SPBIP managefiles -list | select-string "_runlog.txt"
$listSPB
Start-Sleep -s 15
Write-Host "Generating Log Files For Storage Processor 'B' Please Wait!"
}
While ($listSPB -like '*runlog.txt')  

Write-Host "Generation of SP-'B' Log Files Now Complete! Proceeding with Backup."

Start-Sleep -s 10

$latestSPB = naviseccli -user $User -password $Password -scope 0 -h $SPBIP managefiles -list | Select-string "data.zip" | Select-Object -Last 1
$latestSPB = $latestSPB -split "  "; $latestSPB=$latestSPB[6]
$latestSPB
$BackupSPB = naviseccli -user $User -password $Password -scope 0 -h $SPBIP managefiles -retrieve -path $BackupLocation -file $latestSPB -o

$BackupLocation = $BackupLocation -replace '"', ""
invoke-item $BackupLocation

Read-Host "Confirm Presence of 'Array Capture XML' and 'SP Collects' in the Backup Directory!"

———————————————-

See Also @Pragmatic_IO post: ‘EMC VNX – Auto Array Configuration Backups using NavisecCLI and Powershell’