This ‘Error 12000’ may be encountered while exporting a VMAX 3/AFA LUN from ViPR Controller as a shared datastore to a specific vSphere ESXi cluster (a ViPR shared export mask). The export fails because ViPR attempts either to add the new shared LUN to the hosts' independent exclusive ESXi masking views or to add it to a manually created shared cluster masking view:
|Error 12000: Operation failed due to the following error: Smis job failed: string ErrorDescription = “A device cannot belong to more than one storage group in use by FAST”;|
This issue arises where, for example, the ESXi hosts already have independent masking views created without the NO_VIPR suffix in the masking view name, and/or an ESXi cluster masking view (a Tenant Pod in EHC terms) has been created outside of ViPR control.
In the case of VMAX, ensure only one shared cluster Masking View (MV) exists for the tenant cluster (utilizing cascaded initiator groups) and that it is under ViPR management control. If the cluster MV was created manually (for example, in the VxBlock factory), create a small volume for this manually created MV directly from Unisphere/SYMCLI and then perform a ViPR ingestion of this newly created volume; this will bring the MV under ViPR management.
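The workaround above can be sketched with SYMCLI. This is a minimal, hedged example only: the array SID (1234), storage group name (Tenant_Cluster_SG) and device ID (0ABC) are hypothetical placeholders, not values from this environment, and the exact `symconfigure` size syntax may vary by Solutions Enabler version:

```shell
# Hypothetical SID, storage-group name and device ID -- substitute your own.
SID=1234
SG=Tenant_Cluster_SG

# Create one small thin device (TDEV) on the array.
symconfigure -sid "$SID" -cmd "create dev count=1, size=1 GB, emulation=FBA, config=TDEV;" commit

# Add the new device to the storage group behind the manually created
# cluster masking view, then run a ViPR ingestion of this volume so the
# masking view comes under ViPR management.
symaccess -sid "$SID" -type storage -name "$SG" add devs 0ABC
```

These commands run against the array via Solutions Enabler and are shown here only as a configuration sketch of the sequence: create a throwaway volume, place it in the cluster MV's storage group, then ingest it from ViPR.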
In the case of a VxBlock (including Cisco UCS blades), all hosts in the cluster must have exclusive masking views for their respective boot volumes, and these exclusive masking views MUST have a NO_VIPR suffix.
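As a quick sanity check, the masking view names (in practice parsed from `symaccess -sid <SID> list view` output; the names below are hypothetical samples) can be filtered for exclusive views missing the NO_VIPR suffix:

```shell
# Hypothetical masking-view names; in practice feed in the names parsed
# from 'symaccess -sid <SID> list view' output.
views="esx01_boot_MV_NO_VIPR
esx02_boot_MV_NO_VIPR
esx03_boot_MV"

# Print any view name lacking the NO_VIPR suffix -- these are the
# exclusive masks ViPR would try to reuse, triggering Error 12000.
printf '%s\n' "$views" | grep -v 'NO_VIPR$'
```

Here only `esx03_boot_MV` is printed, flagging a host whose exclusive boot mask still needs to be renamed with the NO_VIPR suffix.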
Why does each host have its own dedicated masking view? Think Vblock/VxBlock with UCS, where each UCS ESXi blade server boots from a SAN-attached boot volume presented from the VMAX array (Vblock/VxBlock 700 series = VMAX). Further detail on how specific functioning masking views are configured on a Vblock/VxBlock can be found here:
Key point: dedicated exclusive masking views are required for VMware ESXi boot volumes and MUST have a NO_VIPR suffix; in addition, cluster masking views for shared VMFS datastores must be under ViPR control. Please reference the following post for guidance on exclusive masking views for boot volumes and how to ingest these into ViPR:
In this scenario it is best to ingest the boot volumes as per the guidance above and then perform the export of the shared volume. ViPR will skip over the exclusive masking views (those with _NO_VIPR appended to their mask names) and will either create a ViPR-controlled shared cluster masking view or utilize an existing one (in the case of an existing ViPR export mask).
Note: if you have circumvented this error by manually creating the shared cluster masking view (through Unisphere/SYMCLI) in advance of the first cluster-wide ViPR export, please ingest this masking view to bring it under ViPR control as per the above guidance; otherwise you will experience issues later (for example, when adding new ESXi hosts to the cluster).
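Before and after the ingestion it is worth confirming the cluster masking view's structure (cascaded initiator group, storage group contents). A hedged SYMCLI sketch, with the SID (1234) and view name (Tenant_Cluster_MV) as hypothetical placeholders:

```shell
# Hypothetical SID and masking-view name -- substitute your own.
SID=1234
MV=Tenant_Cluster_MV

# List all masking views on the array, then show the full detail of the
# cluster view: its (cascaded) initiator group, port group and storage group.
symaccess -sid "$SID" list view
symaccess -sid "$SID" show view "$MV"
```

This is a read-only configuration check against the array via Solutions Enabler; the `show view` output confirms whether the initiator group is cascaded per host, as the shared cluster MV requires.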