Failed HDD Replacement Instructions
- From the Nexenta Management Console (NMC), check the status of your pools
- $ zpool status
- Find the faulted HDD and record the World Wide Name (WWN) of the HDD and the name of the pool it resides in. You will use the WWN to identify this HDD later
- An example WWN is: c0t5000C500409B091Bd0
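On a large pool, picking the faulted device out of the `zpool status` output by eye is error-prone. A minimal sketch of extracting it with awk; the sample status text below is illustrative, not from a real system — in practice pipe `zpool status` directly:

```shell
# Illustrative zpool status output; replace with: zpool status | awk ...
status_output='  NAME                       STATE     READ WRITE CKSUM
  data1                      DEGRADED      0     0     0
    raidz2-0                 DEGRADED      0     0     0
      c0t5000C500409B090Ad0  ONLINE        0     0     0
      c0t5000C500409B091Bd0  FAULTED       3    12     0'

# Print the device name (WWN) of any line whose STATE column reads FAULTED
wwn=$(printf '%s\n' "$status_output" | awk '$2 == "FAULTED" {print $1}')
echo "$wwn"   # c0t5000C500409B091Bd0
```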
- Offline the HDD using the WWN recorded above
- $ setup volume <POOL> offline-lun <WWN>
- Clear the pool errors
- $ setup volume <POOL> clear-errors
- The faulted HDD should now show as OFFLINE
- If the HDD shows as UNAVAIL, SSH as root to the second head node and use that node for the next two steps (finding the slot and blinking the HDD)
- Find the HDD by its WWN and record the JBOD and slot it is in
- $ show lun slotmap | grep -i <WWN>
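The grep in the slotmap step simply filters the mapping table down to the one line for your drive. A small illustration using made-up slotmap text (the real output and column layout come from `show lun slotmap` and may differ):

```shell
# Made-up slotmap lines for illustration; real output comes from
# `show lun slotmap` in NMC and its exact columns may differ.
slotmap='jbod1  slot-04  c0t5000C500409B090Ad0
jbod1  slot-05  c0t5000C500409B091Bd0
jbod2  slot-01  c0t5000C500409B092Cd0'

wwn='c0t5000C500409B091Bd0'
# -i makes the match case-insensitive, since WWN casing can vary
location=$(printf '%s\n' "$slotmap" | grep -i "$wwn")
echo "$location"   # jbod1  slot-05  c0t5000C500409B091Bd0
```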
- Blink the HDD to prepare it for removal
- $ show lun <WWN> blink -y
- The HDD should now be blinking red in the JBOD
- Physically remove the blinking (red) HDD from the JBOD
- Physically insert the replacement HDD into the same slot
- Press CTRL-C to interrupt blinking of the HDD
- Rescan the HDDs in the system
- $ lunsync -r -y
- For StorCore 104 HA Clusters, run this on both head nodes
- Replace the faulted HDD with the replacement via NMC
- $ setup volume <POOL> replace-lun -o <WWN>
- For StorCore 104 HA Clusters, run this on the active head node
- Select the HDD (LUN) to use as a replacement
- You will be prompted with available HDDs not currently assigned to a pool, cache, or hotspare
- Verify the disk is resilvering
- $ show volume <POOL> status
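Rather than re-running the status command by hand, resilver progress can be watched from a shell loop. A sketch, assuming the standard ZFS "resilver in progress" wording in the status output; the sample text here is illustrative:

```shell
# Succeeds (exit 0) while the status text still reports an active resilver.
resilvering() {
    grep -q 'resilver in progress'
}

# Illustrative status line; in practice pipe the live status output
# (e.g. `zpool status <POOL>`) into the function inside a sleep loop.
sample='  scan: resilver in progress since Tue Jan  7 10:00:00 2025'
if printf '%s\n' "$sample" | resilvering; then
    state=resilvering
else
    state=done
fi
echo "$state"   # resilvering
```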