
EMC VNX 7500 – 48GB Memory Configuration
Offered with the 05.32 (INYO) release is an expanded memory option on the VNX7500 model. This larger memory option allows for a memory upgrade from 24 GB to 48 GB per Storage Processor. […]
Virtualization & Storage
What is Vaulting? Symmetrix VMAX systems are configured with vault drives on back-end Fibre Channel loops. Vaulting is the mechanism used to protect your VMAX data when: 1. powering down […]
In this example I will show how to complete the zoning for a two-host ESX cluster (ESX01 & ESX02), using a dual-fabric design connecting to an EMC VMAX 10K […]
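As a rough illustration of the single-initiator/single-target zones involved, assuming Cisco MDS switches on Fabric A (the switch platform, VSAN number and WWPNs below are placeholders, not taken from the post):

  zone name ESX01_HBA0_VMAX_FA7E0 vsan 10
    member pwwn 21:00:00:24:ff:aa:bb:01
    member pwwn 50:00:09:75:00:aa:bb:01
  zoneset name FabricA_ZS vsan 10
    member ESX01_HBA0_VMAX_FA7E0
  zoneset activate name FabricA_ZS vsan 10

The same pattern would then be repeated for ESX02 and mirrored on Fabric B against the second VMAX director port.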
“32 Cores firing MCx Technology” “Leverage cores and scale them – core scaling to achieve the million IOPS” “Amdahl’s law 80% scaling” “Up to 1.1 million IOPS (I/Os per second) […]
Here I have compiled a list of the most commonly used VNX terms and a brief description of each.
Term: Definition
SP: Storage Processor – provides access to all external hosts and the file side of VNX.
SP A: Storage Processor A – generic term for the first storage processor in VNX for block.
SP B: Storage Processor B – […]
The VNXe can send e-mail alerts for system events to a specified SMTP server address when it encounters alerts or error conditions. View and Configure SMTP Server Settings: In order to […]
The following tables detail the minimum required Solutions Enabler & Unisphere code levels per major Enginuity release and the EMC-recommended target codes to ensure a stable environment. Note: Check […]
In this post I will give a guideline on how to calculate the required drive count for a VNX Pool based on Throughput performance (IOPS). This is only a Rough-Order-of […]
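To show the shape of such a ROM calculation (the workload figures and per-drive IOPS below are illustrative rule-of-thumb values, not figures from the post): assume a host workload of 5,000 IOPS at a 70/30 read/write ratio landing on a RAID 5 pool (write penalty of 4). Back-end IOPS = (5,000 x 0.7) + (5,000 x 0.3 x 4) = 3,500 + 6,000 = 9,500. At roughly 180 IOPS per 15k SAS drive, that works out at 9,500 / 180 ≈ 53 drives, before hot spares and capacity requirements are considered.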
In this post I will detail how to configure EMC FAST VP (Fully Automated Storage Tiering for Virtual Pools) on a VMAX storage array via SYMCLI; you can also use the “EMC Unisphere for VMAX” […]
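The typical SYMCLI workflow is to define the tiers, create a FAST policy, add the tiers to the policy, and then associate storage groups with it. The sketch below is from memory only; the SID, tier, policy and storage-group names are placeholders and the exact symtier/symfast switches should be verified against the Solutions Enabler documentation:

  symtier -sid 1234 create -name EFD_Tier -technology EFD -vp -tgt_raid5
  symfast -sid 1234 -fp create -name Gold_FP
  symfast -sid 1234 -fp_name Gold_FP add -tier_name EFD_Tier -max_sg_percent 20
  symfast -sid 1234 associate -sg Prod_SG -fp_name Gold_FP -priority 2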
When building a Vblock 720 I use the following information to assist in Zoning and Masking of UCS Blades. In order to gather the WWNs of all the VMAX front-end director […]
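As a starting point, the front-end (FA) directors on the array can be listed as shown below; the SID is a placeholder, and the additional switches that expose the individual port WWNs used in the zoning should be taken from the Solutions Enabler documentation:

  symcfg -sid 1234 list -fa all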
In this post I will detail some considerations for RecoverPoint Journal and Replica Volumes from a Performance Perspective. This will give you some insight into the workings and designs that […]
This post is a result of having to migrate Cisco UCS Blade boot LUNs between RAID Groups. Both the source and destination were Classic LUNs – RAID Group LUNs (FLARE […]
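For reference, the native LUN Migration feature is driven from naviseccli along these lines (the SP address, LUN numbers and rate are placeholders; this is a minimal sketch rather than the full procedure from the post):

  naviseccli -h <SPA_IP> migrate -start -source 25 -dest 125 -rate asap
  naviseccli -h <SPA_IP> migrate -list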
A very important consideration is the capacity sizing of your RecoverPoint journals, which is done on a per-Consistency Group basis. This will dictate how far back you can recover (the protection window) for each CG configuration. The journal volumes (JVs) must have the correct performance characteristics in order to handle the total write performance of the LUN(s) being protected (next blog). They […]
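As a simple illustration of the sizing logic (the figures are assumptions for the example, not guidance taken from the post): if the protected LUNs sustain an average of 20 MB/s of new writes and the business requires a 24-hour protection window, the journal must hold roughly 20 MB/s x 86,400 s ≈ 1.7 TB of change data, plus whatever headroom EMC recommends for journal internals and the image-access reserve.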
This is my first blog on RecoverPoint; in this initial post I will detail some of the basic concepts and terminology around RecoverPoint and the GEN 5 hardware appliance specification. […]
This is the second part in the series on configuring the VNXe via the command line. Here I will detail the steps involved in creating both NFS and iSCSI datastores. The configuration steps outlined in Part 2 will be the following:
LACP Configuration
Create the Network Interface for NFS
NFS Shared Folder Server Configuration
Create NFS datastores
Creating iSCSI Interfaces/Nodes/Datastores
LACP Configuration
Link […]
This is the first in a series of blog posts on configuring the VNXe using the command line. All the configurations here will be performed using “uemcli”, which can be downloaded here. If you prefer to use the GUI then Henri has a very good series of blog posts here. The scripts defined here are very useful if […]
This script is a result of having to create quite a large number of dedicated Masking Views for VMware ESX 5.x server boot volumes and Masking Views for shared VMFS datastore clusters. In this example I will create two dedicated ESX server MVs and one cluster Masking View consisting of the two ESX hosts sharing a VMFS datastore. Each VMware […]
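The Auto-provisioning building blocks behind each view are an initiator group, a port group and a storage group tied together in a masking view. Below is a minimal sketch of one dedicated boot MV; the SID, WWN, director ports and device ID are placeholders, and the exact symaccess switch ordering should be confirmed against the Solutions Enabler documentation:

  symaccess -sid 1234 -name ESX01_ig -type initiator create
  symaccess -sid 1234 -name ESX01_ig -type initiator add -wwn 21000024ffaabb01
  symaccess -sid 1234 -name ESX_pg -type port -dirport 7e:0,8e:0 create
  symaccess -sid 1234 -name ESX01_boot_sg -type storage create devs 0A1B
  symaccess -sid 1234 create view -name ESX01_mv -sg ESX01_boot_sg -pg ESX_pg -ig ESX01_ig

The cluster Masking View for the shared VMFS datastore would follow the same pattern, with a cascaded or combined initiator group covering both hosts.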