Standard Upgrade Procedure (Nexenta)

Check the Release Notes for any special requirements for the version you are upgrading to.

Verify access to the package repository:
nmc$ setup appliance upgrade -t
This is a dry run; it does not make any changes.
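If the dry run reports repository errors, a basic reachability check from the bash shell can help isolate network or DNS problems. The hostname below is a placeholder; substitute the repository host configured on your appliance:
# ping repo.example.com
# nslookup repo.example.com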

Clean up the device links on both nodes:
nmc$ lunsync -r
# devfsadm -Cv
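Here devfsadm -C removes dangling links for devices that no longer exist and -v prints each link it changes. To confirm the cleaned-up device list looks sane, you can list the disks the OS sees (piping echo makes format exit instead of prompting):
# echo | format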

Make sure all appliance faults have been cleared.
nmc$ show faults
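Anything reported here should be resolved before continuing. As a cross-check at the OS level, the Solaris fault manager lists outstanding diagnosed faults; an empty list is what you want:
# fmadm faulty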
Verify node-to-node communications:
nmc$ show network ssh-bindings
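To confirm the binding actually works, run a harmless command on the peer over ssh; NODE-B is a placeholder for the other node's hostname. It should print the peer's hostname without asking for a password:
# ssh NODE-B uname -n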

Make a backup of the appliance configuration (both nodes)
nmc$ setup appliance configuration store

Transfer the backup file off the system. The file is saved in:
/volumes/.configuration
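For example, the backup can be copied to another machine with scp; the user, host, and destination path below are placeholders:
# scp -r /volumes/.configuration admin@backup-host:/backups/node-a/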

If you are using iSCSI, also back up the mapping configuration.

On the node with active iSCSI connections:
nmc$ setup iscsi config saved
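As an additional OS-level record of the COMSTAR configuration being protected, you can capture the logical unit and target lists before the upgrade and compare them afterwards:
# stmfadm list-lu -v
# itadm list-target -v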

Disable all auto-sync and auto-snap jobs
nmc$ show auto-sync
nmc$ show auto-snap
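The show commands above only list the jobs. The disable syntax varies slightly between NexentaStor releases; a typical NMC invocation looks like the following, with <job-name> taken from the show output:
nmc$ setup auto-sync <job-name> disable
nmc$ setup auto-snap <job-name> disable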

Remove cluster cache files
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old
# mv /opt/HAC/RSF-1/etc/volume-cache /opt/HAC/RSF-1/etc/volume-cache.old
# mv /opt/HAC/RSF-1/etc/volume-cache-live /opt/HAC/RSF-1/etc/volume-cache-live.old
# mv /opt/HAC/RSF-1/etc/.MHDC.cache /opt/HAC/RSF-1/etc/.MHDC.cache.old
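These caches are rebuilt as needed (zpool.cache, for instance, is regenerated the next time a pool is imported). To confirm the files were moved aside:
# ls -l /etc/zfs/ /opt/HAC/RSF-1/etc/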
If you are using Coraid, check with Coraid to make sure that you have the current driver for the revision of Nexenta OS that you will be upgrading to.


Start the upgrade
1) Place all services into Manual failover mode on both nodes of the cluster.
2) Verify access to all shares from the clients (a client-side spot check is sketched below).
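For the client-side check in step 2, one simple spot check from an NFS client is to list the exports (the appliance hostname is a placeholder), then mount a share and read a file:
$ showmount -e appliance-host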

On the SECONDARY node:

Verify no volumes are imported:
nmc$ show volume-cache
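As a cross-check from the bash shell, list the imported pools; on a healthy standby node only syspool should appear:
# zpool list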

Refresh the repository:
nmc$ setup appliance repository
Type :wq to save, and answer Yes to re-read the configuration (make sure you can reach the repository for the upgrade).

3) Perform the upgrade
nmc$ setup appliance upgrade

This will create a checkpoint automatically.
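To review the checkpoint that was just created (and any older ones), the checkpoints can be listed from NMC; the exact output format varies by release:
nmc$ show appliance checkpoint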

4) Follow the interactive questions during the upgrade process; the final question will ask you to reboot.
5) After the reboot, answer 'Yes' to enable the active checkpoint.
6) Verify you are running the expected version
nmc$ show appliance version
7) Verify RSF cluster status on the newly upgraded node:
NMV -> Settings -> HA Cluster -> Status
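If you prefer the command line, the RSF-1 CLI bundled with the HA plugin can report the same cluster state; this assumes the standard installation path:
# /opt/HAC/RSF-1/bin/rsfcli status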

On the PRIMARY node:
Set failover to 'Automatic' on the SECONDARY node. If it is still in 'Manual' mode, the volume will not import.

1) Fail over the volumes from the PRIMARY node to the newly upgraded SECONDARY node. Verify connectivity to the shares from the clients.

Verify no volumes are imported on the PRIMARY node:
nmc$ show volume

Refresh the repository:
nmc$ setup appliance repository
Type :wq to save, and answer Yes to re-read the configuration.
2) Perform the upgrade
nmc$ setup appliance upgrade

This will create a checkpoint automatically.
3) Follow the interactive questions during the upgrade process; the final question will ask you to reboot.
4) After the reboot, answer 'Yes' to enable the active checkpoint.
5) Verify you are running the expected version
nmc$ show appliance version

6) Verify RSF cluster status on the newly upgraded node:
NMV -> Settings -> HA Cluster -> Status
