Replacing a disk in a VNX is quite a simple task if the correct procedure is followed. Begin by discovering the failed drive's Bus/Enclosure/Disk (B_E_D) location with naviseccli -h SP_IP faults -list. In this post I will guide you through the process and provide a quick insight into what happens under the hood. Firstly I insert the replacement disk in a free slot and run the getdisk command to ensure no data resides on the disk, and also to verify the firmware level on the disk:
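As a rough illustration of what you are looking for in that output, the two key fields can be pulled out programmatically. The sample text below is fabricated for the example (the field labels follow the usual naviseccli layout, but the values are made up):

```python
# Parse key fields from saved "naviseccli getdisk" output.
# NOTE: sample_output is a fabricated illustration; real output
# from "naviseccli -h SP_IP getdisk 0_3_8" contains many more fields.
sample_output = """\
Bus 0 Enclosure 3  Disk 8
State:                   Unbound
Product Revision:        CS19
"""

def parse_fields(text, wanted):
    """Return a dict of the requested 'Key: value' fields."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            if key.strip() in wanted:
                fields[key.strip()] = value.strip()
    return fields

info = parse_fields(sample_output, {"State", "Product Revision"})
print(info)  # {'State': 'Unbound', 'Product Revision': 'CS19'}
```

An Unbound state and the expected firmware revision are what you want to see before proceeding.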
Next I run the zerodisk command with the -getzeromark option. It returns an 8-digit number that is the address on the disk above which all sectors are zeroed. If this matches the expected initial zeromark for the disk, no user data resides on it:
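The check itself amounts to a simple comparison, sketched below. The zeromark values here are invented placeholders, not real CLI output:

```python
# If the zeromark returned by "zerodisk -getzeromark" equals the
# expected initial zeromark for this disk, all sectors above that
# address are zeroed and no user data resides on the disk.
def disk_is_clean(zeromark, expected_initial_zeromark):
    return zeromark == expected_initial_zeromark

# Hypothetical values for illustration only.
print(disk_is_clean(0x00010000, 0x00010000))  # True
```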
Unisphere Service Manager will be used to replace the drive and update the firmware.
1. Start USM
2. From the System screen, select Hardware > Hardware Replacement > Replace Faulted Disk.
3. Follow the instructions that appear.
From the output in the image below you can get a clearer picture of the RAID Group 'RG241' (RAID 10) that the failed disk is part of, the disks that make up the RG, and how the internal LUNs are carved up. Here we have 16 private LUNs (8108-8123) which make up the base structure of a private RAID group (4+4 R10), which would be part of a larger storage pool:
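The arithmetic behind that layout can be sanity-checked in a short sketch. The per-disk LUN count is my inference from the numbers above, not an EMC-documented constant:

```python
# A 4+4 RAID 10 private RAID group spans 8 disks; the output shows
# 16 private LUNs (8108-8123) carved across it.
data_disks, mirror_disks = 4, 4
disks_in_rg = data_disks + mirror_disks
private_luns = list(range(8108, 8124))  # 8108..8123 inclusive

print(len(private_luns))                 # 16
print(len(private_luns) // disks_in_rg)  # 2 private LUNs per disk
```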
From the SP logs you can check which hot spare was invoked and also view the rebuild status. Running naviseccli getdisk -hs is another method of determining the hot spare in use.
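If you have saved the SP logs to a file, a simple keyword filter will surface the relevant entries. The log lines below are invented placeholders, not real VNX log text:

```python
# Filter a saved SP log for hot-spare and rebuild messages.
# sample_log is a fabricated stand-in for real SP event text.
sample_log = """\
(unrelated event)
Hot Spare 0_0_14 invoked for disk 0_3_8
Rebuild of 0_0_14 is 42 percent complete
(another unrelated event)
"""

keywords = ("Hot Spare", "Rebuild")
matches = [line for line in sample_log.splitlines()
           if any(k in line for k in keywords)]
for line in matches:
    print(line)
```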
You may also use PowerShell to check the disk firmware revision:
naviseccli -h SP_IP getdisk 0_3_8 | Select-String "Product Revision:"
Or use the NaviCLI firmware option:
naviseccli -h SP_IP -user username -password password -scope 0 firmware filename -d B_E_D,B_E_D,……
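If several disks need the update, the comma-separated -d argument can be assembled from a list of B_E_D identifiers. The identifiers below are placeholders for illustration:

```python
# Build the comma-separated B_E_D list for the firmware -d flag.
disks = ["0_3_8", "0_3_9"]  # placeholder Bus_Enclosure_Disk IDs
d_arg = ",".join(disks)
cmd = ["naviseccli", "-h", "SP_IP", "-user", "username",
       "-password", "password", "-scope", "0",
       "firmware", "filename", "-d", d_arg]
print(" ".join(cmd))
```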