EMC VNX – List of Useful NAS Commands

Verify ‘NAS’ Services are running:
Log in to the Control Station as ‘nasadmin’ and run /nas/sbin/getreason from the CS console. The reason code output should look as follows (see the detailed list of Reason Codes below):
10 - slot_0 primary control station
11 - slot_1 secondary control station
5 - slot_2 contacted
5 - slot_3 contacted

Complete a full ‘Health Check’:
/nas/bin/nas_checkup
Location of output:
cd /nas/log/
ls
cat /nas/log/checkup-<run_date>.log
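For example, to open the most recent checkup log without typing the date-stamped name (plain shell, assuming the default checkup-* naming):
cat $(ls -t /nas/log/checkup*.log | head -1)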

Confirm the EMC NAS version installed and the model name:
/nasmcd/bin/nas_version
/nas/sbin/model

Stop ‘NAS’ Services:
/sbin/service nas stop
Start ‘NAS’ Services:
/sbin/service nas start

Check the running status of the Data Movers and view which slot is active/standby:
nas_server -info -all
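To pick out just the standby relationships from that output, a plain grep works (the pattern is illustrative):
nas_server -info -all | grep -i standby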

Verify connectivity to the VNX storage processors (SPs) from the Control Stations:
/nas/sbin/navicli -h SPA_IP domain -list
/nas/sbin/navicli -h SPB_IP domain -list
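As a basic reachability check against each SP, the standard getagent verb can also be used, for example:
/nas/sbin/navicli -h SPA_IP getagent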

Confirm the VMAX/VNX is connected to the NAS:
nas_storage -check -all
nas_storage -list

View VNX NAS Control LUN Storage Group details:
/nas/sbin/navicli -h SP_IP storagegroup -list -gname ~filestorage

List the disk table to ensure all of the Control Volumes have been presented to both Data Movers:
nas_disk -list

Check the File Systems:
df -h
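To see usage for the file systems mounted on a Data Mover itself, the mover-side counterpart of df can be used:
server_df server_2
server_df ALL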

View detailed storage information:
/nas/bin/nas_storage -info -all

View the trunking devices created (LACP, Ethernet channel, FSN):
server_sysconfig server_2 -virtual
Example: view the interface named “LACP_NAS”:
server_sysconfig server_2 -virtual -info LACP_NAS
server_2 :
*** Trunk LACP_NAS: Link is Up ***
*** Trunk LACP_NAS: Timeout is Short ***
*** Trunk LACP_NAS: Statistical Load Balancing is IP ***
Device Local Grp Remote Grp Link LACP Duplex Speed
------------------------------------------------------------------------
fxg-1-0 10002 51840 Up Up Full 10000 Mbs
fxg-1-1 10002 51840 Up Up Full 10000 Mbs
fxg-2-1 10002 51840 Up Up Full 10000 Mbs
fxg-2-0 10002 51840 Up Up Full 10000 Mbs
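For reference, a sketch of how such an LACP trunk is typically created on a Data Mover (the trunk and device names are illustrative; verify the -option syntax against your DART release):
server_sysconfig server_2 -virtual -name LACP_NAS -create trk -option "device=fxg-1-0,fxg-1-1 protocol=lacp"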

Check Code Levels:
List the datamovers: nas_server -list
Check the DART code installed on the Data Movers: server_version ALL
Check the NAS code installed on the Control Station: nas_version

View Network Configuration:
To display parameters of all interfaces on a Data Mover, type:
Control Station: /sbin/ifconfig (eth3 is the mgmt interface)
Data Movers: server_ifconfig server_2 -all
cat /etc/sysconfig/network-scripts/ifcfg-eth3
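To display a single Data Mover interface rather than all of them, pass its interface name (the name below is illustrative):
server_ifconfig server_2 cge0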

View VNX SP IP Addresses from the CS console:
grep SP /etc/hosts | grep A_
grep SP /etc/hosts | grep B_

Verify Control Station Comms:
/nas/sbin/setup_enclosure -checkSystem

Confirm the unified FLAG is set:
/nas/sbin/nas_hw_upgrade -fc_option -enable

Date & Time:
Control Station: date
Data Movers: server_date ALL

Check IP & DNS info on the CS/DM:
nas_cs -info
server_dns ALL

Log Files:
Log file location: /var/log/messages
Example of NAS services starting successfully:
grep -A10 "Starting NAS services" /var/log/messages*
Output:
Dec 8 19:07:27 emcnas_i0 S95nas: Starting NAS services
Dec 8 19:07:46 emcnas_i0 EMCServer: nas_mcd: MCD will monitor CS IPMI connection.
Dec 8 19:08:46 emcnas_i0 EMCServer: nas_mcd: slot 0 missed 10 heartbeats from slot 1.
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Install Manager is running on slot 0, skipping slot 1 reboot
Dec 8 19:08:50 emcnas_i0 EMCServer: nas_mcd: Slot 0 becomes primary due to timeout
Dec 8 19:08:52 emcnas_i0 mcd_helper: All NBS devices are up
Dec 8 19:09:08 emcnas_i0 kernel: kjournald starting. Commit interval 5 seconds
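A similar sweep for trouble indicators in the same logs (the grep pattern is only a starting point):
grep -iE "panic|error|fail" /var/log/messages* | tail -20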

Check the Data Mover Logs:
server_log server_2

Failing over a Control Station:
Failover:
/nas/sbin/cs_standby -failover
Takeover:
/nasmcd/sbin/cs_standby -takeover
Or reboot:
nas_cs -reboot
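After a failover or takeover, getreason (see above) confirms which slot is now the primary Control Station:
/nas/sbin/getreason | grep "primary control station"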

Shutdown control station:
/sbin/shutdown -h now

Power off CS1 from CS0:
/nas/sbin/t2reset pwroff -s 1

List on which VMAX3 directors each CS and DM are located:
nas_inventory -list

List Data Mover parameters:
/nas/bin/server_param server_2 -info
/nas/bin/server_param server_3 -info
/nas/bin/server_param server_2 -facility -all -list
/nas/bin/server_param server_3 -facility -all -list
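To drill into a single facility instead of all of them (the facility name "cifs" is just an example):
/nas/bin/server_param server_2 -facility cifs -list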

Determine the failover status of the Blades (Datamovers):
/nas/bin/nas_server -info -all

Initiate a manual failover of server_2 to the standby Datamover:
server_standby server_2 -activate mover

List the status of the Datamovers:
nas_server -list

Review the information for server_2:
nas_server -info server_2
All DMs: nas_server -info ALL

Shutdown Datamover (Xblade):
/nas/bin/server_cpu server_2 -halt now

Power on the Datamover (Xblade):
/nasmcd/sbin/t2reset pwron -s 2

Restore the original primary Datamover:
server_standby server_2 -restore mover

To monitor an immediate cold restart of server_2:
server_cpu server_2 -reboot cold -monitor now
A cold reboot or a hardware reset shuts down the Data Mover completely before restarting, including a Power on Self Test (POST).

To monitor an immediate warm restart of server_2:
server_cpu server_2 -reboot -monitor now
A warm reboot or a software reset performs a partial shutdown of the Data Mover and skips the POST on restart. A software reset is faster than a hardware reset.

Clean Shutdown:
Shut down the Control Stations and Data Movers:
/nasmcd/sbin/nas_halt -f now
The halt is finished when the message "exited on signal" appears on the console.

Power down the entire VNX, including the Storage Processors:
nas_halt -f -sp now

Check if Product Serial Number is Correct:
/nasmcd/sbin/serial -db_check
Remove inconsistency between the db file and the enclosures:
/nasmcd/sbin/serial -repair

List All Hardware components by location:
nas_inventory -list -location
nas_inventory -list -location | grep "DME 0 Data Mover 2 IO Module"

Use location address to view specific component details:
nas_inventory -info "system:VNX5600:CKM001510001932007|enclosure:xpe:0|mover:VNX5600:2|iomodule::1"

List of Reason Codes:
0 – Reset (or unknown state)
1 – DOS boot phase, BIOS check, boot sequence
2 – SIB POST failures (that is, hardware failures)
3 – DART is loaded on Data Mover, DOS boot and execution of boot.bat, boot.cfg.
4 – DART is ready on Data Mover, running, and MAC threads started.
5 – DART is in contact with Control Station box monitor.
6 – Control Station is ready, but is not running NAS service.
7 – DART is in panic state.
9 – DART reboot is pending or in halted state.
10 – Primary Control Station reason code
11 – Secondary Control Station reason code
13 – DART panicked and completed memory dump (single Data Mover configurations only, same as code 7, but done with dump)
14 – This reason code can be set for the Blade for any of the following:
• Data Mover enclosure-ID was not found at boot time
• Data Mover’s local network interface MAC address is different from MAC address in configuration file
• Data Mover’s serial number is different from serial number in configuration file
• Data Mover was PXE booted with install configuration
• SLIC IO Module configuration mismatch (Foxglove systems)
15 – Data Mover is flashing firmware. DART is flashing BIOS and/or POST firmware. Data Mover cannot be reset.
17 – Data Mover Hardware fault detected
18 – DM Memory Test Failure. BIOS detected memory error
19 – DM POST Test Failure. General POST error
20 – DM POST NVRAM test failure. Invalid NVRAM content error (checksum, WWN, etc.)
21 – DM POST invalid peer Data Mover type
22 – DM POST invalid Data Mover part number
23 – DM POST Fibre Channel test failure. Error in blade Fibre connection (controller, Fibre discovery, etc.)
24 – DM POST network test failure. Error in Ethernet controller
25 – DM T2NET Error. Unable to get blade reason code due to management switch problems.
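As a convenience, a minimal shell sketch (assuming the standard /nas/sbin/getreason path and the output format shown at the top of this post) that polls the reason codes and flags anything outside the normal 5/10/11 set:

#!/bin/bash
# Poll getreason every 30s; codes 5 (DM contacted), 10 (primary CS)
# and 11 (secondary CS) are treated as healthy, anything else is flagged.
while true; do
    /nas/sbin/getreason | while read -r line; do
        code=${line%% *}              # leading reason code on each line
        case "$code" in
            5|10|11) echo "OK:    $line" ;;
            *)       echo "CHECK: $line" ;;
        esac
    done
    sleep 30
done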

EMC VNX – Second Generation (Rockies)

“32 Cores firing MCx Technology”

“Leverage cores and scale them – core scaling to achieve the million IOPS”

“Amdahl’s law 80% scaling”

“Up to 1.1 million IOPS (I/Os per second) performance, 30-GBps bandwidth, and 6 PB of capacity”

“Million IO’s a second with sub ms response time”

“7600 all flash offering up to 1/2 million IOPS”

“Dedupe lun by lun not the entire array”

“6x the number of VMs on a common system”

“VDM: Containerise your filesystem and move seamlessly”

“Improved FAST VP granularity to 256 MB”

“Second-generation VNX will be at the center of a new preconfigured Vblock converged infrastructure solution from VCE”

These were just some of the highlights from the Second Generation VNX Launch.

In this post we will take a look at some of the changes in hardware, focusing on the DPE, the SPE, and the new CS.

The full range of VNX models has been re-architected with new hardware and software. VNX OE releases are codenamed after mountain ranges, with this latest release codenamed “Rockies”; more on VNX code naming schemes can be found in Chad’s blog post: EMC VNX Inyo: “Hello World!”.

The main components of Rockies:
• New “Megatron” Storage Processor Enclosure (SPE) – (VNX 8000)
• New “Jetfire” Disk Processor Enclosure (DPE) – (VNX 5400-7600)
• New “Dobie” Control Station (CS)
• Existing “Argonaut” Data Mover Enclosures (DME)
• Existing 15, 25, 60 drive Disk Array Enclosures (DAE)

Meet the new range of VNX models, which are available in unified (block and file), block-only, or file-only configurations:

[Image: VNX model range]

“Not a refresh, a re-architecture”

New Disk and Storage Processor Enclosures:

[Image: VNX model CPU and memory comparison]

There is a new 3U Disk Processor Enclosure (DPE) containing two SPs along with 25×2.5″ drives. This DPE (Jetfire) is used in the 5400 to 7600 systems. As can be seen above, processing capability differs by model. The enclosure has two 1100 W power supplies (one per CPU module) and four fan packs (two per CPU module), with a torque-limiting screw (TLS) used for removal/insertion.

[Images: VNX7600 front and rear views]

For the 8000 model there is a new 4U Storage Processor Enclosure (SPE) that holds the two SPs. Each SP consists of two 8-core 2.7 GHz Intel Sandy Bridge CPUs (16 cores total), 128 GB of 1600 MHz DDR3 memory, and 11 PCI-E Gen3 I/O slots, with 9 slots supporting FC, FCoE, and iSCSI and the other 2 reserved for back-end connectivity. An update to the SP operating system allowed for these larger memory configurations. Across the two SPs that gives a total of 32 cores, 22 I/O slots, and 256 GB of memory; compare that to a VNX7500, which has 12 cores, 10 PCI-E Gen2 I/O slots, and 96 GB of memory, and you have a serious increase in performance. Two 2U dual Li-Ion SPSs are also included, one dedicated to the Megatron SPE and the other used for the vault DAE.

[Images: VNX8000 front and rear views]

New Control Station (Dobie)
The new 1U Control Station, codenamed “Dobie”, is an upgrade from the Maynard CS, providing enhanced Unisphere performance using a multi-core processor: a 4-core 3.1 GHz Xeon CPU with 8 GB of memory. The same Dobie CS is used across all models. File-side connectivity continues to be provided by the Argonaut X-Blades.

[Image: Dobie Control Station]

POOL Configurations
[Image: VNX2 maximum pool configurations]

Control Station Cabling
The new CS has the same level of connectivity as the Maynard CS, the only change being that the locations of the IPMI and CS A ports are switched:
[Images: Control Station rear port layout and cabling]