Cisco UCS – Determining ESXi FNIC & ENIC via PowerCLI

The following script allows the user to retrieve a listing of the network (ENIC) and storage (FNIC) driver versions installed on Cisco UCS blades at a per-vSphere-cluster level. You may download the 'Cisco_FNIC_ENIC.ps1' script here: Cisco_FNIC_ENIC.ps1 (remove the .doc extension).

The script begins by prompting you to enter the vCenter IP address, username and password. A list of all the clusters residing in that vCenter is then returned, followed by a prompt to enter the vSphere cluster name; for the cluster specified, the script retrieves a per-ESXi-host listing of ENIC and FNIC driver versions. The script will first prompt the user to enable SSH on all the hosts in the cluster:

[Screenshots: script prompts and per-host ENIC/FNIC driver output]

Once you have completed whatever tasks required SSH access on the hosts, return to the running script and enter 'y' to disable SSH again on all the hosts in the specified cluster:

[Screenshot: disabling SSH on the cluster hosts]

PowerCLI Script:

#######################################
# Confirm CISCO FNIC & ENIC Drivers
# Date: 2016-07-01
# Created by: David Ring
#######################################

###### vCenter Connectivity Details ######

Write-Host "Please enter the vCenter Host IP Address:" -ForegroundColor Yellow -NoNewline

$VMHost = Read-Host

Write-Host "Please enter the vCenter Username:" -ForegroundColor Yellow -NoNewline

$User = Read-Host

Write-Host "Please enter the vCenter Password:" -ForegroundColor Yellow -NoNewline

$Pass = Read-Host

Connect-VIServer -Server $VMHost -User $User -Password $Pass

###### Please enter the Cluster to check CISCO Versions #######

Write-Host "Clusters Associated with this vCenter:" -ForegroundColor Green

$VMcluster = '*'

ForEach ($VMcluster in (Get-Cluster -name $VMcluster)| sort)

{
Write-Host $VMcluster
}

Write-Host "Please enter the Cluster to lookup CISCO FNIC & ENIC Drivers:" -ForegroundColor Yellow -NoNewline

$VMcluster = Read-Host

###### Enabling SSH ######

Write-Host "Do you need to Enable SSH on the Cluster ESXi Hosts? " -ForegroundColor Yellow -NoNewline

Write-Host " Y/N:" -ForegroundColor Red -NoNewline

$SSHEnable = Read-Host

if ($SSHEnable -eq "y") {

Write-Host "Enabling SSH on all hosts in your specified cluster:" -ForegroundColor Green

Get-Cluster $VMcluster | Get-VMHost | ForEach {Start-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"})}

}

###### Confirm Driver Versions ######

Write-Host "Confirm CISCO FNIC & ENIC Drivers" -ForegroundColor Green

$hosts = Get-Cluster $VMcluster | Get-VMHost

forEach ($vihost in $hosts)

{

Write-Host -ForegroundColor Magenta "Gathering Driver versions on" $vihost

$esxcli = Get-VMHost $vihost | Get-EsxCli

$esxcli.software.vib.list() | Where { $_.Name -like "net-enic"} | Select @{N="VMHost";E={$ESXCLI.VMHost}}, Name, Version

$esxcli.software.vib.list() | Where { $_.Name -like "scsi-fnic"} | Select @{N="VMHost";E={$ESXCLI.VMHost}}, Name, Version

}

###### Disabling SSH ######

Write-Host "Ready to Disable SSH? " -ForegroundColor Yellow -NoNewline

Write-Host " Y/N:" -ForegroundColor Red -NoNewline

$SSHDisable = Read-Host

if ($SSHDisable -eq "y") {

Write-Host "Disabling SSH" -ForegroundColor Green

Get-Cluster $VMcluster | Get-VMHost | ForEach {Stop-VMHostService -HostService ($_ | Get-VMHostService | Where {$_.Key -eq "TSM-SSH"}) -Confirm:$FALSE}

}
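As an alternative for newer PowerCLI releases, the same information can be gathered non-interactively using the V2 esxcli interface. This is a minimal sketch only; the vCenter address and cluster name below are placeholder values, and note that the Cisco VIB names vary by driver generation (net-enic/scsi-fnic for the older drivers, nenic/nfnic for the newer native drivers):

Connect-VIServer -Server vcsa.lab.local

Get-Cluster "UCS-Cluster01" | Get-VMHost | ForEach-Object {
    $hostName = $_.Name
    $esxcli = Get-EsxCli -VMHost $_ -V2
    # Report only the Cisco ENIC/FNIC driver VIBs for this host
    $esxcli.software.vib.list.Invoke() |
        Where-Object { $_.Name -in "net-enic","scsi-fnic","nenic","nfnic" } |
        Select-Object @{N="VMHost";E={$hostName}}, Name, Version
}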

 

Useful References:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1027206

http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html

EMC XtremIO – Smart Zoning Example

The example provided is based on the following design:

  • Dual X-Brick Cluster
  • Cisco MDS Switches – Dual Fabric
  • VMware ESXi 4x Host Environment
  • 4 Smart Zoned Paths per ESXi Host (2 paths per Fabric)

ESXi in this scenario may represent either a standalone host or a four-host cluster (the scripts provided are based on four hosts). Balancing the ESXi hosts across the XtremIO Storage Controllers is key to the design, as it distributes the workload across all the available Storage Controller target ports. The following example depicts a four-host configuration; in the event a fifth host is required, it is advised to use a round-robin methodology (ESXi05 reuses the ESXi01 zoning logic, and so on).

The Smart Zoning feature is available with MDS 9000 series switches from NX-OS 5.2(6).

Some of the key benefits of using Smart Zoning (an example zone configuration follows the list below):

  • Reduced configuration, simplifying the zoning process.
  • Simplified addition of new ESXi hosts – add the new ESXi host as a zone member and reactivate the zoneset.
  • Eliminates the need for single-initiator to single-target zones.
  • Reduced zoneset size – multiple initiators and multiple targets zoned together.
  • Reduced number of Access Control Entries (ACEs).
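As an illustration, below is a minimal sketch of a Smart Zone on a Fabric-A MDS switch. The VSAN number, device-alias names and zone/zoneset names are placeholder values for this example; the 'init' and 'target' keywords are what allow multiple initiators and targets to safely share a single zone:

zone smart-zoning enable vsan 10

zone name ESX_CLUSTER1_XIO_FAB_A vsan 10
member device-alias ESXi01-HBA0 init
member device-alias ESXi02-HBA0 init
member device-alias ESXi03-HBA0 init
member device-alias ESXi04-HBA0 init
member device-alias XIO-X1-SC1-FC1 target
member device-alias XIO-X2-SC1-FC1 target

zoneset name FABRIC_A_ZS vsan 10
member ESX_CLUSTER1_XIO_FAB_A
zoneset activate name FABRIC_A_ZS vsan 10

Adding a fifth host is then simply a matter of adding its HBA as another 'init' member and reactivating the zoneset.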


CISCO MDS 9148 – PortChannel Maximums

An issue I encountered recently relates to the number of FLOGIs achievable on a single Cisco MDS 9148 PortChannel running NX-OS Release 5.2(8e):

Cisco MDS NX-OS Release 5.2(x) maximum configuration limit: FLOGIs per port channel = 114

In this example we had a UCS environment with more than 114 blades (128 to be exact). FLOGI was completing successfully for 126 of the 128 UCS B-Series blades (surprising, given the stated limit of 114).

[Diagram: UCS Fabric Interconnect to MDS PortChannel topology]

Example command outputs while running code version 5.2(8e) and exceeding the stated maximum limit:

MDS-9K-A# show logging logfile | include FLOGI
2015 Nov 9 00:57:05.182 MDS-9K-A %FLOGI-1-MSG_FLOGI_REJECT_FCID_ERROR: %$V
SAN 10%$ [VSAN 10, Interface port-channel 10: mode[F]] (pwwn: 20:00:00:25:b5:05:
XX:XX) FLOGI rejected – FCID allocation failed with error 0x401b0000.

MDS-9K-A# show flogi internal info | i key|Interface | i key p 1
FLOGI rejected – FCID allocation failed with error 0x401b0000

MDS-9K-A# show flogi database interface port-channel 10
Total number of flogi = 127

There were two options available to remedy this problem. The first was to split the port-channel: create two port-channels on each SAN fabric switch (four port-channels in total) and split the blade HBAs between the two PortChannels (64 FLOGIs per PortChannel), giving us the ability to cater for the required 128 FLOGIs per fabric/switch, or 256 in total across both fabrics. In fact, this option would theoretically allow for 228 FLOGIs per switch (114 per PortChannel), or a theoretical maximum of 456 FLOGIs across both fabrics (Switch A & B).
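For reference, a rough sketch of the first option on one MDS switch is shown below; the new port-channel number and fc interface range are placeholder values and would need to match the uplinks actually moved from the existing PortChannel, with a matching port-channel also created on the UCS Fabric Interconnect side:

interface port-channel 20
channel mode active
switchport mode F
switchport trunk mode on

interface fc1/9-16
channel-group 20 force
no shutdown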

The second option, and my preferred route, was to perform a code upgrade of the switches:

Cisco MDS NX-OS Release 6.2(x) maximum configuration limit: FLOGIs per port channel = 256

Output after upgrading to 6.2:
MDS-9K-A# show flogi database interface port-channel 10
Total number of flogi = 129
Success – all 128 HBAs are successfully logging into the MDS-A switch via a single PortChannel.

MDS-9K-A# show flogi database interface port-channel 11
Total number of flogi = 129
Success – all 128 HBAs are successfully logging into the MDS-B switch via a single PortChannel.

Also worth noting:

NX-OS 5.2.x
PortChannels and member ports in PortChannels = 16 PortChannels, with a combined total of 16 member ports across all PortChannels. For example, you could have 16 PortChannels each with one member, two PortChannels with eight members each, or one PortChannel with 16 members.
NX-OS 5.2 Limits

NX-OS 6.2.x
PortChannels and member ports in PortChannels = 48 PortChannels with a maximum of 16 members in each PortChannel.
NX-OS 6.2 Limits
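Before splitting or adding port-channels, it is worth checking what is already defined on the switch; the standard NX-OS show commands below list the configured port-channels, their member interfaces and which port-channel numbers are still free:

show port-channel summary
show port-channel database
show port-channel usage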

EMC ViPR – Cisco IVR Cross-Connect Zoning (VPLEX)

Known ViPR & VPLEX Storage Provisioning Issue:
The following error may be encountered while provisioning a shared VPLEX distributed volume to an ESXi cluster using ViPR v2.x – 2.3:

[Screenshot: ViPR cluster provisioning error]

This issue occurs during a ViPR storage provisioning task with VPLEX because ViPR incorrectly attempts to apply two simultaneous updates to the Cisco MDS IVR database. As expected, the MDS database is locked by the first task, so the second task times out, resulting in a failed ViPR provisioning process. The tasks should be executed sequentially, allowing each task to complete and commit its changes to the IVR database, thus removing the lock it held once the commit is successful. Only once the database lock is removed can the subsequent task execute against the database.
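If a failed order does leave the IVR database locked, the lock and any pending IVR changes can be checked directly on the MDS before retrying the order; this is a generic MDS troubleshooting sketch rather than a ViPR-documented step:

show cfs lock
show ivr pending-diff

From configuration mode, 'ivr commit' applies a valid pending session, while 'ivr abort' discards the pending changes and releases the lock.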

Workaround:
Executing an exclusive storage provisioning order from the ViPR catalog for a single ESXi host works perfectly, including automatically creating the required cross-connect zoning, because the single workflow performs its MDS IVR database updates sequentially. During a single-host exclusive storage provisioning task, ViPR creates the necessary initiators, storage views and IVR zones (both local and cross-connect) for that host. However, a shared storage provisioning task to an ESXi cluster fails in a single catalog order, as will two exclusive storage provisioning orders executed at the same time.

In summary, the workaround is to execute an exclusive storage provisioning order for each host in the cluster individually, one at a time. Once this is complete, each host has a volume presented and VPLEX has the correct initiators and storage views created by ViPR; you may then create a new distributed LUN for the whole ESXi cluster. ViPR simply adds the new distributed volumes to the existing storage views in VPLEX (no zoning takes place when you run the distributed-device creation, and therefore no locking). Once you have a working distributed volume for all of the hosts, you may remove the exclusive volumes and everything should function accordingly. Ensure that all the required zoning (including IVR zones) is configured correctly on all switches and that the ESXi hosts can see all associated paths.

NOTE: ViPR engineering plans to enhance the zoning workflow with an additional step to obtain/monitor any IVR database locks before proceeding with the IVR zoning operations. This is targeted for the next ViPR release; I will provide updates to this post in due course.

Solution Example:
The below diagram depicts the connectivity requirements in order to implement a ViPR storage provisioning solution with a VPLEX Metro configuration using Cross-Connect Zoning:

[Diagram: ViPR / VPLEX Metro cross-connect zoning topology]

From the above diagram you can see that an ISL is in place for site-to-site connectivity. In this example configuration the ISL carries VPLEX-FC-WAN-Replication traffic over VSAN30 (Fabric-A) and VSAN31 (Fabric-B), the VPLEX FC WAN COM ports. VSAN30 is stretched between the Fabric-A switches on both sites and VSAN31 is stretched between the Fabric-B switches on both sites. VSAN30 and VSAN31 are used as transit VSANs for this example IVR configuration.

In order for ViPR v2.x to successfully execute the task of automatically creating the required cross-connect zoning the following configuration needs to be in place (as per example diagram above):

Site1:
Fabric-A, VSAN10: associated interfaces|PC (even ESX HBAs of Site1, VPLEX FE & BE and PC30) added as members to VSAN10.
Fabric-B, VSAN11: associated interfaces|PC (odd ESX HBAs of Site1, VPLEX FE & BE and PC31) added as members to VSAN11.
Site2:
Fabric-A, VSAN20: associated interfaces|PC (even ESX HBAs of Site2, VPLEX FE & BE and PC30) added as members to VSAN20.
Fabric-B, VSAN21: associated interfaces|PC (odd ESX HBAs of Site2, VPLEX FE & BE and PC31) added as members to VSAN21.

Site1 – Site2:
Fabric-A: VSAN30 used as a transit vsan over Port-channel 30.
Fabric-B: VSAN31 used as a transit vsan over Port-channel 31.

One prerequisite is required in order for ViPR to successfully create the cross-connect zoning automatically as part of the provisioning workflow: manually create an IVR zone on Fabric-A connecting VSAN10 and VSAN20, and an IVR zone on Fabric-B connecting VSAN11 and VSAN21 (example IVR zones are provided below).

In the case of ViPR v2.2 an additional prerequisite task is required: the VSANs must be stretched between the sites. As per this example, VSAN20 is added to Switch-A on Site1 and, vice versa, VSAN10 is added to Switch-A on Site2; the same is repeated for the Fabric-B switches. No local interfaces are assigned to these dummy VSANs; essentially, a VSAN20 is created without any members on Switch-A Site1, and so on. This is done for all respective VSANs, as can be seen in the example configuration provided below. As part of the VSAN stretch, ensure the allowed VSANs are added to the respective port-channels:

Port-Channel 30 Allowed VSAN 10,20,30
Port-Channel 31 Allowed VSAN 11,21,31

Once the VSANs are stretched across the sites as per the ViPR v2.2 prerequisite, ViPR will then automatically create the required IVR zones as part of the provisioning workflow.
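For reference, a minimal sketch of this VSAN-stretch prerequisite on the Site1 Fabric-A switch is shown below; VSAN20 is created with no local members and is simply allowed over the cross-site port-channel (numbering follows the example above):

vsan database
vsan 20 name "VSAN20"

interface port-channel 30
switchport trunk allowed vsan add 20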

Note: The vArray should be set for Automatic Zoning for all this to occur.

Example MDS Configuration
These are example configuration steps to be completed on the MDS switches at both sites in order to enable Cisco Inter-VSAN Routing (IVR is the standard approach for cross-connect zoning with VPLEX Metro) and to enable automatic cross-connect zoning with ViPR:

FABRIC ‘A’ Switches

feature ivr
ivr nat
ivr distribute
ivr commit

system default zone distribute full
system default zone mode enhanced
ivr vsan-topology auto
zone mode enhanced vsan 10
zone mode enhanced vsan 20
zone mode enhanced vsan 30

vsan database
vsan 10 name “VSAN10”
vsan 20 name “VSAN20”
vsan 30 name “vplex1_wan_repl_vsan30”

interface port-channel 30
channel mode active
switchport mode E
switchport trunk allowed vsan 10
switchport trunk allowed vsan add 20
switchport trunk allowed vsan add 30
switchport description CROSS-SITE-LINK
switchport speed 8000
switchport rate-mode dedicated

Configuring the FABRIC A switches' fcdomain priorities:

Site1:
fcdomain priority 2 vsan 10
fcdomain domain 10 static vsan 10
fcdomain priority 100 vsan 20
fcdomain domain 22 static vsan 20
fcdomain priority 2 vsan 30
fcdomain domain 30 static vsan 30

Site2:
fcdomain priority 100 vsan 10
fcdomain domain 12 static vsan 10
fcdomain priority 2 vsan 20
fcdomain domain 20 static vsan 20
fcdomain priority 100 vsan 30
fcdomain domain 32 static vsan 30

Example: configuring Inter-VSAN routing (IVR) Zones connecting an ESXi host HBA0 over VSANs 10 and 20 from site1->site2 and vice versa site2->site1 utilising the transit VSAN30:

device-alias database
device-alias name VPLEXSITE1-E1_A0_FC02 pwwn 50:00:14:42:A0:xx:xx:02
device-alias name VPLEXSITE1-E1_B0_FC02 pwwn 50:00:14:42:B0:xx:xx:02
device-alias name VPLEXSITE2-E1_A0_FC02 pwwn 50:00:14:42:A0:xx:xx:02
device-alias name VPLEXSITE2-E1_B0_FC02 pwwn 50:00:14:42:B0:xx:xx:02
device-alias name ESXi1SITE1-VHBA0 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias name ESXi1SITE2-VHBA0 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias commit
device-alias distribute

ivr zone name ESXi1SITE1-VHBA0_VPLEXSITE2-E1_A0_FC02
member device-alias ESXi1SITE1-VHBA0 vsan 10
member device-alias VPLEXSITE2-E1_A0_FC02 vsan 20
ivr zone name ESXi1SITE1-VHBA0_VPLEXSITE2-E1_B0_FC02
member device-alias ESXi1SITE1-VHBA0 vsan 10
member device-alias VPLEXSITE2-E1_B0_FC02 vsan 20

ivr zone name ESXi1SITE2-VHBA0_VPLEXSITE1-E1_A0_FC02
member device-alias ESXi1SITE2-VHBA0 vsan 20
member device-alias VPLEXSITE1-E1_A0_FC02 vsan 10
ivr zone name ESXi1SITE2-VHBA0_VPLEXSITE1-E1_B0_FC02
member device-alias ESXi1SITE2-VHBA0 vsan 20
member device-alias VPLEXSITE1-E1_B0_FC02 vsan 10

ivr zoneset name IVR_vplex_hosts_XC_A
member ESXi1SITE1-VHBA0_VPLEXSITE2-E1_A0_FC02
member ESXi1SITE1-VHBA0_VPLEXSITE2-E1_B0_FC02

member ESXi1SITE2-VHBA0_VPLEXSITE1-E1_A0_FC02
member ESXi1SITE2-VHBA0_VPLEXSITE1-E1_B0_FC02

ivr zoneset activate name IVR_vplex_hosts_XC_A
ivr commit

FABRIC ‘B’ Switches

feature ivr
ivr nat
ivr distribute
ivr commit

system default zone distribute full
system default zone mode enhanced
ivr vsan-topology auto
zone mode enhanced vsan 11
zone mode enhanced vsan 21
zone mode enhanced vsan 31

vsan database
vsan 11 name “VSAN11”
vsan 21 name “VSAN21”
vsan 31 name “vplex1_wan_repl_vsan31”

interface port-channel 31
channel mode active
switchport mode E
switchport trunk allowed vsan 11
switchport trunk allowed vsan add 21
switchport trunk allowed vsan add 31
switchport description CROSS-SITE-LINK
switchport speed 8000
switchport rate-mode dedicated

Configuring the FABRIC B switches' fcdomain priorities:

Site1:
fcdomain priority 2 vsan 11
fcdomain domain 11 static vsan 11
fcdomain priority 100 vsan 21
fcdomain domain 23 static vsan 21
fcdomain priority 2 vsan 31
fcdomain domain 31 static vsan 31

Site2:
fcdomain priority 100 vsan 11
fcdomain domain 13 static vsan 11
fcdomain priority 2 vsan 21
fcdomain domain 21 static vsan 21
fcdomain priority 100 vsan 31
fcdomain domain 33 static vsan 31

Example configuring Inter-VSAN routing (IVR) zones connecting an ESXi host HBA1 over VSANs 11 and 21 from site1->site2 and vice versa site2->site1 utilising the transit VSAN31:

device-alias database
device-alias name VPLEXSITE1-E1_A0_FC03 pwwn 50:00:14:42:A0:xx:xx:03
device-alias name VPLEXSITE1-E1_B0_FC03 pwwn 50:00:14:42:B0:xx:xx:03
device-alias name VPLEXSITE2-E1_A0_FC03 pwwn 50:00:14:42:A0:xx:xx:03
device-alias name VPLEXSITE2-E1_B0_FC03 pwwn 50:00:14:42:B0:xx:xx:03
device-alias name ESXi1SITE1-VHBA1 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias name ESXi1SITE2-VHBA1 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias commit
device-alias distribute

ivr zone name ESXi1SITE1-VHBA1_VPLEXSITE2-E1_A0_FC03
member device-alias ESXi1SITE1-VHBA1 vsan 11
member device-alias VPLEXSITE2-E1_A0_FC03 vsan 21
ivr zone name ESXi1SITE1-VHBA1_VPLEXSITE2-E1_B0_FC03
member device-alias ESXi1SITE1-VHBA1 vsan 11
member device-alias VPLEXSITE2-E1_B0_FC03 vsan 21

ivr zone name ESXi1SITE2-VHBA1_VPLEXSITE1-E1_A0_FC03
member device-alias ESXi1SITE2-VHBA1 vsan 21
member device-alias VPLEXSITE1-E1_A0_FC03 vsan 11
ivr zone name ESXi1SITE2-VHBA1_VPLEXSITE1-E1_B0_FC03
member device-alias ESXi1SITE2-VHBA1 vsan 21
member device-alias VPLEXSITE1-E1_B0_FC03 vsan 11

ivr zoneset name IVR_vplex_hosts_XC_B
member ESXi1SITE1-VHBA1_VPLEXSITE2-E1_A0_FC03
member ESXi1SITE1-VHBA1_VPLEXSITE2-E1_B0_FC03

member ESXi1SITE2-VHBA1_VPLEXSITE1-E1_A0_FC03
member ESXi1SITE2-VHBA1_VPLEXSITE1-E1_B0_FC03

ivr zoneset activate name IVR_vplex_hosts_XC_B
ivr commit

Verification commands to check status of configuration:
show fcdomain domain-list
Verifies unique domain ID assignment. If a domain overlap exists, edit and verify the allowed-domains list or manually configure static, non-overlapping domains for each participating switch and VSAN.

show interface brief
Verifies if the ports are operational, VSAN membership, and other configuration settings covered previously.

show fcns database
Verifies the name server registration for all devices participating in the IVR.

show zoneset active
Displays zones in the active zone set; this should include the configured IVR zones.

show zone active vsan X | grep -i ivr
Filters the active zones of a given VSAN for IVR entries.

show ivr fcdomain
Displays the IVR persistent fcdomain database.

show ivr internal
Shows the IVR internal troubleshooting information.

show ivr pending-diff
Shows the difference between the pending and configured IVR databases.

show ivr service-group
Shows the IVR service group configuration.

show ivr tech-support
Shows information that is used by your customer support representative to troubleshoot IVR issues.

show ivr virtual-domains
Shows IVR virtual domains for all local VSANs.

show ivr virtual-fcdomain-add-status
Shows IVR virtual fcdomain status.

show ivr vsan-topology
Verifies the configured IVR topology.

show ivr zoneset
Verifies the IVR zone set configuration.

show ivr zone
Verifies the IVR zone configuration.

clear ivr zone database
Clears all configured IVR zone information.
Note: Clearing a zone set erases only the configured zone database, not the active zone database.

Useful CISCO Docs:
Cisco IVR Troubleshooting
IVR Zones and Zonesets

Inter-VSAN Routing (IVR) definitions: An IVR zone is a set of end devices that are allowed to communicate across VSANs within their interconnected SAN fabric. An IVR path is a set of switches and Inter-Switch Links (ISLs) through which a frame from an end device in one VSAN can reach an end device in another VSAN; multiple paths can exist between two such end devices. A Transit VSAN is a VSAN that exists along an IVR path from the source edge VSAN of that path to the destination edge VSAN of that path; in the example solution diagram above, VSAN30 and VSAN31 are the transit VSANs.

Distributing the IVR configuration using CFS: The IVR feature uses the Cisco Fabric Services (CFS) infrastructure to enable efficient configuration management and to provide a single point of configuration for the entire fabric in the VSAN.

Thanks to @HeagaSteve, Joni, Hans, @dclauvel & Sarav for providing valuable input.

CISCO MDS – Useful ‘Show’ Commands

CONFIG:
show startup-config
show running-config
show running-config diff
show run |include host|gateway|ntp|fcdomain|http|telnet|zoneset|vsan
show start |include host|gateway|ntp|fcdomain|http|telnet|zoneset|vsan
show install all status
show switchname
show wwn switch
show switch summary
show version
show cdp neighbors
show boot
show system internal flash
show snmp host
show ntp peers
show ssh server
show telnet server


Cisco Nexus 3064 – Configuring Jumbo Frames

The default MTU size on a Nexus 3064 switch is 1500 bytes. The following details how to reconfigure the switch for a system-wide MTU value of 9216 (jumbo).

——————————————————————————–
Note: The Cisco Nexus 3000 Series switch does not fragment frames. As a result, the switch cannot have two ports in the same Layer 2 domain with different maximum transmission units (MTUs). A per-physical Ethernet interface MTU is not supported. Instead, the MTU is set according to the QoS classes. You modify the MTU by setting Class and Policy maps.
As per Cisco Documentation
——————————————————————————–

Configuring Jumbo Frames:
Begin by creating a policy-map of type network-qos, named 'JumboFrames' in this example, and apply it as the system QoS service policy (as stated above, the MTU cannot be configured at the individual interface level).


n3k-sw# config t
n3k-sw(config)# policy-map type network-qos ?
WORD Policy-map name (Max Size 40)
n3k-sw(config)# policy-map type network-qos JumboFrames
n3k-sw(config-pmap-nq)# class type network-qos class-default
n3k-sw(config-pmap-nq-c)# mtu 9216
n3k-sw(config-pmap-nq-c)# system qos
n3k-sw(config-sys-qos)# service-policy type network-qos ?
WORD Policy-map name (Max Size 40)
n3k-sw(config-sys-qos)# service-policy type network-qos JumboFrames
n3k-sw(config-sys-qos)# show policy-map type network-qos
Type network-qos policy-maps
===============================
policy-map type network-qos JumboFrames
class type network-qos class-default
mtu 9216

The change can be verified by running the 'show queuing interface' command on one of the switch interfaces (interface Ethernet 1/2 in this example):

n3k-sw# show queuing interface ethernet 1/2

Ethernet1/2 queuing information:
qos-group sched-type oper-bandwidth
0 WRR 100
qos-group 0
HW MTU: 9216 (9216 configured)
drop-type: drop, xon: 0, xoff: 0

If you run the 'show interface' command, a value of 1500 will still be displayed despite the system-wide change that was made:
n3k-sw# show interface ethernet 1/2
Ethernet1/2 is up
Dedicated Interface
Hardware: 100/1000/10000 Ethernet
Description: 6296 2A Mgmt0
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec
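To confirm jumbo frames are passing end to end, a quick don't-fragment ping can be run from an attached ESXi host's shell; the target IP below is a placeholder and the 8972-byte payload allows for the 28 bytes of IP/ICMP headers on a 9000-byte MTU vmkernel port:

~ # vmkping -d -s 8972 192.168.10.20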

CISCO UCS – Rebooting Fabric Interconnect(s)

Begin by connecting to the cluster IP over SSH and checking which FI is Primary/Subordinate:

FI-A# show cluster state
A: UP, PRIMARY
B: UP, SUBORDINATE

Note: show cluster extended-state will provide more detailed information.

Having confirmed that the 'B' Fabric Interconnect is the subordinate, connect to the FI-B local management CLI:

FI-A# connect local-mgmt B

From the FI-B local-mgmt interface, issue the reboot command:

FI-B(local-mgmt)# reboot
Before rebooting, please take a configuration backup.
Do you still want to reboot? (yes/no):yes

Run the 'show cluster state' command again to check the status of FI-B:

FI-A(local-mgmt)# show cluster state
A: UP, PRIMARY
B: DOWN, INAPPLICABLE
HA NOT READY
Peer Fabric Interconnect is down

Once the cluster returns to an HA READY state, make FI-B the primary switch in order to reboot FI-A:

FI-A(local-mgmt)# cluster lead b

Note: After initiating the failover, the SSH session will disconnect; re-connect to the cluster IP and confirm the cluster state.

Connect to local mgmt ‘a’ and reboot FI-A:

FI-B# connect local-mgmt a
FI-A(local-mgmt)# reboot
Before rebooting, please take a configuration backup.
Do you still want to reboot? (yes/no):yes

Confirm HA READY status before setting FI-A back to PRIMARY:

FI-B# show cluster state
B: UP, PRIMARY
A: UP, SUBORDINATE
HA READY

Set FI-A back to PRIMARY; this must be issued from the current primary (FI-B) local-mgmt interface:

FI-B# connect local-mgmt b
FI-B(local-mgmt)# cluster lead a