The procedure below can be used to upgrade a Bright 6.0, 6.1, 7.0, or 7.1 installation, including installations with failover, to Bright 7.2.
Supported Linux distributions
An upgrade to Bright 7.2 is supported for Bright 6.0, 6.1, 7.0, or 7.1 clusters that are running the following Linux distributions:
- RedHat Enterprise Linux 6.5 (RHEL6u5)
- RedHat Enterprise Linux 6.6 (RHEL6u6)
- RedHat Enterprise Linux 6.7 (RHEL6u7)
- RedHat Enterprise Linux 7.0 (RHEL7u0)
- RedHat Enterprise Linux 7.1 (RHEL7u1)
- RedHat Enterprise Linux 7.2 (RHEL7u2)
- CentOS Linux 6.5 (CENTOS6u5)
- CentOS Linux 6.6 (CENTOS6u6)
- CentOS Linux 6.7 (CENTOS6u7)
- CentOS Linux 7.0 (CENTOS7u0)
- CentOS Linux 7.1 (CENTOS7u1)
- CentOS Linux 7.2 (CENTOS7u2)
- Scientific Linux 6.6 (SL6u6)
- Scientific Linux 6.7 (SL6u7)
- Scientific Linux 7.1 (SL7u1)
- Scientific Linux 7.2 (SL7u2)
- SuSE Linux Enterprise Server 11 Service Pack 3 (SLES11sp3)
- SuSE Linux Enterprise Server 11 Service Pack 4 (SLES11sp4)
- SuSE Linux Enterprise Server 12 (SLES12)
Prerequisites
- Extra base distribution RPMs will be installed by yum/zypper in order to resolve dependencies that may arise as a result of the upgrade. The base distribution repositories must therefore be reachable. This means that clusters running the Enterprise Linux distributions (RHEL and SLES) must be subscribed to the appropriate software channels.
- Packages in /cm/shared are upgraded, but the administrator should be aware of the following:
- If /cm/shared resides on the local partition, then the packages in it are upgraded. This may not be desirable for users who wish to retain the old behavior.
- If /cm/shared is mounted from a separate partition, then unmounting it prevents upgrades to the mounted partition, while still allowing new packages to be installed under /cm/shared on the local partition. This may be desirable for the administrator, who can later copy the updates from the local /cm/shared to the remote /cm/shared manually, according to site-specific requirements.
Since a mounted /cm/shared is unmounted by default, a local /cm/shared will have the files of any packages installed there upgraded. According to the yum database the system is then upgraded, even though the upgraded files are actually located on the local partition. The newer packages can therefore only be expected to work properly once their associated files have been copied over from the local partition to the remote partition.
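For administrators who keep the remote /cm/shared unmounted during the upgrade, the later manual copy could be scripted along these lines. This is only a sketch: the helper name and the default mount point for the remote share are illustrative assumptions and must be adapted to the site.

```shell
# Hypothetical helper (not a Bright tool): copy upgraded files from the
# local /cm/shared to the remote /cm/shared after the upgrade is done.
sync_cm_shared() {
    src="${1:-/cm/shared}"               # local partition with upgraded files
    dst="${2:-/mnt/remote-cm-shared}"    # remote share, remounted elsewhere (assumed path)
    # -a preserves permissions and ownership, which /cm/shared packages rely on
    cp -a "${src}/." "${dst}/"
}
```

Passing both paths explicitly (for example `sync_cm_shared /cm/shared /mnt/remote-cm-shared`) avoids relying on the illustrative defaults.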
IMPORTANT: If upgrading from Bright 7.0 or 7.1
- Bright Hadoop deployments must be removed (using cm-hadoop-setup). Upgrades are not supported for Hadoop.
- Bright OpenStack deployments must be removed (using cm-openstack-setup). Upgrades are not supported for OpenStack.
Important Note
The upgrade process will not only upgrade CMDaemon and its dependencies, but will also upgrade other packages. This means that old packages will not be available from the repositories of the latest version of Bright (in this case, the 7.2 repositories). In some cases, user applications will need to be recompiled to use the upgraded versions of the compilers and libraries. Also, the configurations of the old packages are not copied automatically to the new packages, which means that the administrator will have to manually adjust the configuration from the old packages to suit the new packages.
Enable the upgrade repo and install the upgrade RPM
Install the Bright Cluster Manager upgrade RPM on the Bright head node(s) as shown below:
1. Add and enable the upgrade repo
Create a repo file with the following contents:
[cm-upgrade-72]
name=Bright 7.2 Upgrade Repository
baseurl=http://support.brightcomputing.com/upgrade/7.2/<DIST>/updates
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cm
Note: Please replace <DIST> with one of: rhel/7, rhel/6, sles/12, sles/11
On RHEL based distributions, save file to /etc/yum.repos.d/
On SLES based distributions, save file to /etc/zypp/repos.d/
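The repo file above can also be written with a small script. This is just a convenience sketch: the helper name is hypothetical, the destination directory must be passed explicitly, and the contents are exactly those shown above, with <DIST> left as a placeholder that still needs to be replaced.

```shell
# write_upgrade_repo (illustrative helper): write the Bright 7.2 upgrade
# repo file into the given repository directory
# (/etc/yum.repos.d on RHEL, /etc/zypp/repos.d on SLES).
# Remember to replace <DIST> with rhel/7, rhel/6, sles/12, or sles/11.
write_upgrade_repo() {
    repo_dir="$1"
    cat > "${repo_dir}/cm-upgrade-72.repo" <<'EOF'
[cm-upgrade-72]
name=Bright 7.2 Upgrade Repository
baseurl=http://support.brightcomputing.com/upgrade/7.2/<DIST>/updates
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-cm
EOF
}
# e.g. on a RHEL-based head node:
# write_upgrade_repo /etc/yum.repos.d
```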
2. Install the upgrade RPM
On RHEL based distributions:
yum install cm-upgrade-7.2
On SLES based distributions:
zypper install cm-upgrade-7.2
3. Make the cm-upgrade command available in the default PATH
module load cm-upgrade/7.2
Upgrade procedure
The recommended order for upgrade is:
- Power off regular nodes, cloud nodes and cloud directors
- Apply existing updates to Bright 6.0/6.1/7.0/7.1 on head node and in the software images.
- Update head node:
RHEL derivatives:
yum update
SLES derivatives:
zypper up
- Update the software images. For each software image, run:
RHEL derivatives:
yum --installroot /cm/images/<software image> update
SLES derivatives:
zypper --root /cm/images/<software image> up
Note: If the software image repositories differ from the repositories that the head node uses, then you should chroot into the software image before running "yum update" or "zypper up". This is because the --installroot or --root switch does not make yum/zypper use the repositories defined inside the software image.
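The chroot variant of the image update can be sketched as follows. update_image is a hypothetical helper, not a Bright command; it must be run as root, and the image name is a placeholder to be replaced.

```shell
# update_image (illustrative helper): run a command inside a software image
# via chroot, so that yum/zypper picks up the repository files defined
# inside the image itself rather than those of the head node.
update_image() {
    image_root="$1"
    shift
    chroot "$image_root" "$@"
}
# RHEL derivatives:
# update_image /cm/images/<software image> yum update
# SLES derivatives:
# update_image /cm/images/<software image> zypper up
```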
- Upgrade head nodes to Bright 7.2:
cm-upgrade -u 7.2
Important: this must be run on both head nodes in a high availability setup
Recommended: Upgrade active head node first and then the passive head node
- In a HA setup, after upgrading both the head nodes, resync the databases. Run the following from the active head node (it is very important to complete this step before moving to the next one):
cmha dbreclone <secondary>
- Upgrade the software image(s) to Bright 7.2
cm-upgrade -u 7.2 -i all
Important: this must be run only on the active head node
- Power on the regular nodes, cloud nodes and cloud directors
Usage and help
For more detailed information on usage, examples, and a full description:
- cm-upgrade
without any arguments prints the usage and several examples on how to use the script.
- cm-upgrade --help
prints the complete help and description
Upgrading using a Bright DVD/ISO
When using a Bright DVD/ISO to perform the upgrade, it is important to use a DVD/ISO that is not older than 7.2-14. The DVD/ISO version can be found (assuming that the DVD/ISO is mounted under /mnt/cdrom) with a find command such as:
# find /mnt/cdrom -type d -name '7.2-*'
/mnt/cdrom/data/cm-rpms/7.2-14
FAQs and Troubleshooting
Q: Why are my SGE or Torque jobs not running after upgrading to Bright 7.2?
A: This is mostly because there is an obsolete broken prolog symlink
/cm/local/apps/sge/var/prologs/10-prolog-healthchecker
or
/cm/local/apps/torque/var/prologs/10-prolog-healthchecker
Solution: Remove the broken symlink on the nodes and re-submit the jobs.
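The dangling symlinks can be located and removed on each node with a short sketch like the following (remove_broken_prologs is an illustrative helper; GNU find's -xtype l matches symlinks whose target no longer exists):

```shell
# remove_broken_prologs (illustrative helper): delete dangling symlinks
# in the given prolog directories; directories that do not exist are skipped.
remove_broken_prologs() {
    for d in "$@"; do
        [ -d "$d" ] || continue
        find "$d" -maxdepth 1 -xtype l -delete
    done
}
# On the nodes:
# remove_broken_prologs /cm/local/apps/sge/var/prologs \
#                       /cm/local/apps/torque/var/prologs
```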
Q: Why is the Bright package perl-Config-IniFiles on my SLES11 cluster not upgraded to 7.2?
A: This happens when zypper cannot find the dependency package perl-List-MoreUtils, and therefore skips updating perl-Config-IniFiles.
Solution: Enable a repository that contains the perl-List-MoreUtils rpm and then run:
zypper update perl-Config-IniFiles
Q: Why did cm-upgrade fail at the stage 'Installing distribution packages' or 'Upgrading packages to Bright 7.2'?
A: This happens when some distribution package dependencies could not be met. Look in /var/log/cm-upgrade.log for detailed information about which packages are missing.
Solution: Enable required additional base distribution repositories and re-run cm-upgrade with the -f option.
Example: cm-upgrade -u 7.2 -f
Q: After upgrading from Bright 6.0 to Bright 7.2, why is the MySQL healthcheck failing because the cmdaemon monitoring database engine is not MyISAM?
A: This is because Bright versions before 6.1 use InnoDB as the MySQL engine. Starting with Bright 6.1, MyISAM is the default monitoring database engine.
Solution: Change the engine type of the tables in the cmdaemon_mon database to MyISAM.
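One way to do this, sketched below under the assumption that mysql can connect with the appropriate credentials, is to generate an ALTER TABLE statement per table from information_schema and feed the result back into mysql. The database name is taken from the answer above.

```shell
# Build one "ALTER TABLE ... ENGINE=MyISAM" statement for every non-MyISAM
# table in the cmdaemon_mon database. The pipe into mysql is left commented
# out; review the generated statements before running them.
GEN_SQL="SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=MyISAM;')
FROM information_schema.tables
WHERE table_schema='cmdaemon_mon' AND engine <> 'MyISAM';"
# mysql -N -e "$GEN_SQL" | mysql cmdaemon_mon
```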