EMC VNX – List of Useful NAS Commands

Verify ‘NAS’ Services are running:
Log in to the Control Station as ‘nasadmin’ and run the command /nas/sbin/getreason from the CS console. The reason code output should look as follows (see the detailed list of Reason Codes below):
10 - slot_0 primary control station
11 - slot_1 secondary control station
5 - slot_2 contacted
5 - slot_3 contacted
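
These values can also be checked from a script. The following is a minimal sketch using only getreason and awk; it flags any Data Mover slot that is not in the ‘contacted’ (5) state, and the slot numbers assume the two-CS/two-DM layout shown above:
#!/bin/sh
# Sketch: warn if either Data Mover slot reports a reason code other than 5.
# Adjust the slot numbers to match your configuration.
/nas/sbin/getreason | awk '
  / slot_[23] / && $1 != 5 { print "WARNING: " $0; bad = 1 }
  END { if (!bad) print "Data Movers OK" }'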

Complete a full ‘Health Check’:
/nas/bin/nas_checkup
Location of output:
cd /nas/log/
cat /nas/log/checkup-<rundate>.log
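
Because the log file name embeds the run date, the newest report can be pulled without knowing the exact name. A minimal sketch:
# Show the tail of the most recent health check log
ls -t /nas/log/checkup*.log | head -1 | xargs tail -50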

Confirm the EMC NAS version installed and the model name:
nas_version
/nas/sbin/model

Stop ‘NAS’ Services:
/sbin/service nas stop
Start ‘NAS’ Services:
/sbin/service nas start

Check the running status of the ‘Data Movers’ and view which slot is active/standby:
nas_server -info -all

Verify connectivity to the VNX storage processors (SPs) from the Control Stations:
/nas/sbin/navicli -h SPA_IP domain -list
/nas/sbin/navicli -h SPB_IP domain -list

Confirm the VMAX/VNX is connected to the NAS:
nas_storage -check -all
nas_storage -list

View VNX NAS Control LUN Storage Group details:
/nas/sbin/navicli -h SP_IP storagegroup -list -gname ~filestorage

List the disk table to ensure all of the Control Volumes have been presented to both Data Movers:
nas_disk -list

Check the File Systems:
df -h

View detailed storage information:
/nas/bin/nas_storage -info -all

View trunking devices created (LACP, Ethernet channel, FSN):
server_sysconfig server_2 -virtual
Example: view the trunk device named “LACP_NAS”:
server_sysconfig server_2 -virtual -info LACP_NAS
server_2 :
*** Trunk LACP_NAS: Link is Up ***
*** Trunk LACP_NAS: Timeout is Short ***
*** Trunk LACP_NAS: Statistical Load Balancing is IP ***
Device Local Grp Remote Grp Link LACP Duplex Speed
fxg-1-0 10002 51840 Up Up Full 10000 Mbs
fxg-1-1 10002 51840 Up Up Full 10000 Mbs
fxg-2-1 10002 51840 Up Up Full 10000 Mbs
fxg-2-0 10002 51840 Up Up Full 10000 Mbs
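
For reference, a trunk like the one above is built with server_sysconfig. The line below is a sketch only: the trunk name and device names are taken from the example output above, and the exact -create/-option syntax should be verified against the documentation for your DART release before use:
# Sketch: create an LACP trunk from two 10GbE devices (verify syntax for your release)
server_sysconfig server_2 -virtual -name LACP_NAS -create trk -option "device=fxg-1-0,fxg-1-1 protocol=lacp"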

Check Code Levels:
List the datamovers: nas_server -list
Check the DART code installed on the Data Movers: server_version ALL
Check the NAS code installed on the Control Station: nas_version
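
The three checks above combine neatly into a one-shot code-level report. A minimal sketch using only the commands already shown:
#!/bin/sh
# Print NAS and DART code levels for the whole system
echo "=== Data Movers ==="
/nas/bin/nas_server -list
echo "=== DART code (Data Movers) ==="
/nas/bin/server_version ALL
echo "=== NAS code (Control Station) ==="
/nas/bin/nas_version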

View Network Configuration:
Control Station: /sbin/ifconfig (eth3 is the mgmt interface)
cat /etc/sysconfig/network-scripts/ifcfg-eth3
To display parameters of all interfaces on a Data Mover:
server_ifconfig server_2 -all
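
The ALL keyword accepted by the other server_ commands (as with server_date ALL below) should work here too, dumping every Data Mover’s interfaces in one pass:
server_ifconfig ALL -all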

View VNX SP IP Addresses from the CS console:
grep SP /etc/hosts | grep A_
grep SP /etc/hosts | grep B_

Verify Control Station Comms:
/nas/sbin/setup_enclosure -checkSystem

Confirm the unified FLAG is set:
/nas/sbin/nas_hw_upgrade -fc_option -enable

Date & Time:
Control Station: date
Data Movers: server_date ALL

Check IP & DNS info on the CS/DM:
nas_cs -info
server_dns ALL

Log Files:
Log file location: /var/log/messages
Example of NAS services starting successfully:
grep -A10 "Starting NAS services" /var/log/messages*
Dec 8 19:07:27 emcnas_i0 S95nas: Starting NAS services
Dec 8 19:07:46 emcnas_i0 EMCServer: nas_mcd: MCD will monitor CS IPMI connection.
Dec 8 19:08:46 emcnas_i0 EMCServer: nas_mcd: slot 0 missed 10 heartbeats from slot 1.
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Install Manager is running on slot 0, skipping slot 1 reboot
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Slot 0 becomes primary due to timeout
Dec 8 19:08:52 emcnas_i0 mcd_helper: All NBS devices are up
Dec 8 19:09:08 emcnas_i0 kernel: kjournald starting. Commit interval 5 seconds

Check the Data Mover Logs:
server_log server_2
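
server_log prints the entire log, so piping it through standard tools keeps the output manageable. A minimal sketch:
# Last 100 entries, then any recent panic/error lines
server_log server_2 | tail -100
server_log server_2 | grep -iE 'panic|error' | tail -20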

Failing over a Control Station:
/nas/sbin/cs_standby -failover
/nasmcd/sbin/cs_standby -takeover
Or reboot:
nas_cs -reboot

Shutdown control station:
/sbin/shutdown -h now

Power off CS1 from CS0:
/nas/sbin/t2reset pwroff -s 1

List the VMAX3 directors on which each CS and DM is located:
nas_inventory -list

List Data Mover parameters:
/nas/bin/server_param server_2 -info
/nas/bin/server_param server_3 -info
/nas/bin/server_param server_2 -facility -all -list
/nas/bin/server_param server_3 -facility -all -list

Determine the failover status of the Blades (Datamovers):
/nas/bin/nas_server -info -all

Initiate a manual failover of server_2 to the standby Datamover:
server_standby server_2 -activate mover

List the status of the Datamovers:
nas_server -list

Review the information for server_2:
nas_server -info server_2
All DMs: nas_server -info ALL

Shutdown Datamover (Xblade):
/nas/bin/server_cpu server_2 -halt now

Power on the Datamover (Xblade):
/nasmcd/sbin/t2reset pwron -s 2

Restore the original primary Datamover:
server_standby server_2 -restore mover
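
Taken together, a typical maintenance failover cycle on server_2 looks like the sketch below; verify the state after each step before proceeding:
# 1. Confirm a standby Data Mover is available
/nas/bin/nas_server -info -all
# 2. Fail server_2 over to its standby
server_standby server_2 -activate mover
# 3. Perform maintenance on the original primary, then fail back
server_standby server_2 -restore mover
# 4. Verify server_2 is primary again
nas_server -info server_2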

To monitor an immediate cold restart of server_2:
server_cpu server_2 -reboot cold -monitor now
A cold reboot or a hardware reset shuts down the Data Mover completely before restarting, including a Power-On Self-Test (POST).

To monitor an immediate warm restart of server_2:
server_cpu server_2 -reboot -monitor now
A warm reboot or a software reset performs a partial shutdown of the Data Mover and skips the POST after restarting. A software reset is faster than a hardware reset.

Clean Shutdown:
Shutdown Control Stations and Data Movers:
/nasmcd/sbin/nas_halt -f now
The halt is finished when the message “exited on signal” appears.

Powerdown Entire VNX Including Storage Processors:
nas_halt -f -sp now

Check if Product Serial Number is Correct:
/nasmcd/sbin/serial -db_check
Remove inconsistency between the db file and the enclosures:
/nasmcd/sbin/serial -repair

List All Hardware components by location:
nas_inventory -list -location
nas_inventory -list -location | grep "DME 0 Data Mover 2 IO Module"

Use location address to view specific component details:
nas_inventory -info "system:VNX5600:CKM001510001932007|enclosure:xpe:0|mover:VNX5600:2|iomodule::1"

List of Reason Codes:
0 – Reset (or unknown state)
1 – DOS boot phase, BIOS check, boot sequence
2 – SIB POST failures (that is, hardware failures)
3 – DART is loaded on Data Mover, DOS boot and execution of boot.bat, boot.cfg.
4 – DART is ready on Data Mover, running, and MAC threads started.
5 – DART is in contact with Control Station box monitor.
6 – Control Station is ready, but is not running NAS service.
7 – DART is in panic state.
9 – DART reboot is pending or in halted state.
10 – Primary Control Station reason code
11 – Secondary Control Station reason code
13 – DART panicked and completed memory dump (single Data Mover configurations only, same as code 7, but done with dump)
14 – This reason code can be set for the Blade for any of the following:
• Data Mover enclosure-ID was not found at boot time
• Data Mover’s local network interface MAC address is different from MAC address in configuration file
• Data Mover’s serial number is different from serial number in configuration file
• Data Mover was PXE booted with install configuration
• SLIC IO Module configuration mismatch (Foxglove systems)
15 – Data Mover is flashing firmware. DART is flashing BIOS and/or POST firmware. Data Mover cannot be reset.
17 – Data Mover Hardware fault detected
18 – DM Memory Test Failure. BIOS detected memory error
19 – DM POST Test Failure. General POST error
20 – DM POST NVRAM test failure. Invalid NVRAM content error (checksum, WWN, etc.)
21 – DM POST invalid peer Data Mover type
22 – DM POST invalid Data Mover part number
23 – DM POST Fibre Channel test failure. Error in blade Fibre connection (controller, Fibre discovery, etc.)
24 – DM POST network test failure. Error in Ethernet controller
25 – DM T2NET Error. Unable to get blade reason code due to management switch problems.

EMC NAS Plug-In For vSphere VAAI (VNXe Example)

The ‘EMC NAS Plug-in’ is required in order to enable VAAI (vSphere APIs for Array Integration) operations on ‘NFS Datastores’ on an ESXi 5.x host. If you are not familiar with VAAI: its purpose is to offload certain storage-related I/O tasks to the storage array, which reduces the I/O load on the ESXi hosts and their associated networks. Instead of the ESXi host using resources to send I/O across the network for tasks such as Storage vMotion or cloning a VM, the hypervisor simply sends the NFS commands required for the storage array to perform the necessary data movement. For block-based storage arrays the VAAI primitives are available by default on the ESXi host and no plug-in is required.

Installation Of The NAS Plug-In On ESXi 5.x
1. Upload the .zip install package (EMCNasPlugin-1.0-11.zip) to the ESXi datastore.
2. Open an SSH Session to the ESXi host and change directory to the location of the install package:
# cd /vmfs/volumes/
If you need to list the name of your datastore:
/vmfs/volumes # ls -l
/vmfs/volumes # cd /vmfs/volumes/DatastoreName/
Run ls again to confirm the .zip package is present.
3. Ensure the NAS Plug-In acceptance level is VMwareAccepted:
/vmfs/volumes/DatastoreName # esxcli software sources vib list -d file:///vmfs/volumes/DatastoreName/EMCNasPlugin-1.0-11.zip
Acceptance Level: VMwareAccepted
4. Run the installation:
/vmfs/volumes/DatastoreName # esxcli software vib install -n EMCNasPlugin -d file:///vmfs/volumes/DatastoreName/EMCNasPlugin-1.0-11.zip
Installation Result: completed successfully
Reboot Required: true
VIBs Installed: EMC_bootbank_EMCNasPlugin_1.0-11

5. Reboot the ESXi host and confirm the EMCNasPlugin VIB is loaded:
~ # esxcli software vib list | grep EMCNasPlugin
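
esxcli can also return the full record for a single VIB, which is a convenient post-reboot check of the version and acceptance level (the package name matches the install step above):
~ # esxcli software vib get -n EMCNasPlugin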

VAAI Example: ‘Full File Clone’ Primitive Operation With VNXe
‘Full File Clone’ is one of the VAAI NAS primitives; it is used to copy or migrate data within the same physical array (the block equivalent is known as XCOPY). In this example we are using a ‘VNXe 3150’ with two NFS Datastores presented to two ESXi 5.5 hosts: one with the NAS Plug-In installed (VAAI enabled) and one without it (VAAI disabled).


Running a Storage vMotion from the NFS01 datastore to NFS02 on the ESXi host with VAAI enabled generates zero network traffic:


Running a Storage vMotion from the NFS01 datastore to NFS02 on the ESXi host without VAAI enabled maxes out the 1Gb Ethernet link on the host:


This is a rather simple example, but it demonstrates how the primitive operates by offloading the I/O tasks to the VNXe array.

Note: If you are accessing the NFS datastore directly via the datastore browser for Copy/Paste functionality, you will not see any benefit from VAAI. This is because the datastore browser has its own API and does not use the internal VMkernel Data Mover or VAAI.

VNXe CPU performance stats during the first Storage vMotion show approximately 20% Storage Processor utilization with VAAI enabled; without VAAI, CPU utilization reaches approximately 70%:


VNXe network performance stats show no network traffic with VAAI enabled; without VAAI, read and write on SPA each consume approximately 70MB of bandwidth:

Note: For the ‘Full File Clone’ primitive to perform the offload during a Storage vMotion, the VM needs to be powered off for the duration of the migration.

See also Cormac Hogan’s blog post: VAAI Comparison – Block versus NAS