Oracle 12.1.0.2 is out; after lots of announcements the product has finally been released. I had just extended my 12.1.0.1.3 cluster to three nodes and was about to apply the July PSU when I saw the news. So why not try to upgrade to the brand-new release?
What struck me at first was the list of new features… Oracle’s patching strategy has really changed over time. I remember the days when Oracle didn’t usually add features to point releases. Have a look at the new 12.1.0.2 features and it could arguably qualify as 12c Release 2…
In summary, the upgrade process is remarkably simple and hasn’t changed much since earlier versions of the software. Here are the steps in chronological order.
I don’t know how often I have typed ./ruinInstaller instead of ./runInstaller, but here you go. This is the first wizard screen after the splash screen has disappeared.
Naturally I went for the upgrade of my cluster. Before launching the installer, though, I made sure that everything was in working order by means of cluvfy. On to the next screen:
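A pre-upgrade cluvfy run can be started straight from the unzipped 12.1.0.2 installation media. This is a sketch only; the staging directory and home paths are assumptions based on my lab setup.

```shell
# Pre-upgrade validation, run as the Grid Infrastructure owner from the
# unzipped 12.1.0.2 media (staging path is hypothetical).
cd /u01/stage/grid
./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
    -src_crshome  /u01/app/12.1.0.1/grid \
    -dest_crshome /u01/app/12.1.0.2/grid \
    -dest_version 12.1.0.2.0 \
    -fixup -verbose
```

The -fixup flag generates a fixup script for anything it can correct automatically, which saves a round trip through the installer's prerequisite screen.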
I always install English only. Troubleshooting Oracle in a different language (especially if I don’t speak or understand) is really hard so I avoid it in the first place.
Over to the screen that follows and, oops: my SYSDG disk group (containing OCR and voting files) is too small. Bugger. In the end I added three new 10 GB LUNs and dropped the old ones, but it took me a couple of hours to do so. Worse: it wasn’t even needed, although it proved to be a good learning exercise. The requirement to have that much free space is most likely caused by the management repository and related infrastructure.
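The LUN swap itself boils down to an online add-then-drop against the ASM instance. A minimal sketch, assuming hypothetical device and disk names; run it as the GI owner with the ASM environment set:

```shell
# Grow SYSDG by adding the new 10 GB LUNs, then drop the old disks.
# Device paths and disk names below are made up for illustration.
sqlplus -S / as sysasm <<'EOF'
alter diskgroup SYSDG add disk
  '/dev/mapper/sysdg_new1',
  '/dev/mapper/sysdg_new2',
  '/dev/mapper/sysdg_new3';
-- dropping the old disks triggers a rebalance that moves the data off
-- them before they are released
alter diskgroup SYSDG drop disk SYSDG_0000, SYSDG_0001, SYSDG_0002;
-- monitor the rebalance; the drop is only complete once this is empty
select operation, state, est_minutes from v$asm_operation;
EOF
```

Only detach the old LUNs from the hosts once v$asm_operation no longer shows a running rebalance for the disk group.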
Back on this screen everything is in order; the screenshot was taken just before moving on to the next one. Note the button to skip the updates on unreachable nodes. I’m not sure I would want to do that, though.
I haven’t got OEM agents on the servers (yet) so I’m skipping the registration for now. You can always do that later.
This screen is familiar; I am keeping my choices from the initial installation. Grid Infrastructure is owned by the oracle account despite the ASMDBA and ASMADMIN groups, by the way.
On the screen below you define where on the file system you want to install Grid Infrastructure. Remember that for clustered deployments the ORACLE_HOME cannot be in the path of the ORACLE_BASE. For this to work you have to jump to the command line and create the directory on all servers and grant ownership to the GI owner account (oracle in this case, could be grid as well).
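Creating the directory on all servers is quickly scripted. A sketch, assuming my lab's node names and the paths used throughout this post:

```shell
# Create the new Grid Infrastructure home on every node and hand
# ownership to the GI owner (oracle:oinstall here); run as root or
# a user with root ssh access. Node names are from my lab.
for node in rac12node1 rac12node2 rac12node3; do
  ssh root@${node} "mkdir -p /u01/app/12.1.0.2/grid && \
                    chown oracle:oinstall /u01/app/12.1.0.2/grid"
done
```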
Since I like to be in control, I don’t allow Oracle to run the root scripts automatically. I didn’t in 12.1.0.1 either:
On this screen you see the familiar prerequisite checks.
In my case only a few new ones came up, shown here. This is a lab server so I don’t plan on using swap, but the check for the kernel parameter “panic_on_oops” is new. I also hadn’t set up reverse path filtering, which I corrected before continuing. Interestingly the installer points out that there is a change to the asm_diskstring, with its implications.
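For reference, the two settings I had to touch can be applied on the fly with sysctl (and persisted in /etc/sysctl.conf). The values below are what worked on my lab nodes; treat them as a sketch and check the installer output for the values it expects in your environment:

```shell
# Run as root on every node; add the same lines to /etc/sysctl.conf
# so they survive a reboot.
sysctl -w kernel.panic_on_oops=1
# loose reverse path filtering for the interconnect interfaces
sysctl -w net.ipv4.conf.all.rp_filter=2
sysctl -w net.ipv4.conf.default.rp_filter=2
```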
One thing I haven’t recorded here (because I am using Oracle Linux 6.5 with UEK3) is the requirement for using a 2.6.39 kernel – that sounds like UEK2 to me.
Update: my system is Oracle Linux 6.5, not Red Hat. See Sigrid’s comments below: for Red Hat Linux there doesn’t seem to be a similar requirement to use UEK 2, which matches the documentation (Installation guide for Grid Infrastructure/Linux).
Another interesting case was that the kernel core_pattern wasn’t identical on all nodes. It turned out that two nodes had the abrt package installed and the others didn’t. Once the package was installed on all nodes, the warning went away.
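Tracking that sort of drift down is a one-liner per node. A sketch, again with my lab's node names assumed:

```shell
# Compare abrt installation state and the resulting core pattern
# across the cluster (abrt rewrites kernel.core_pattern when present).
for node in rac12node1 rac12node2 rac12node3; do
  echo "=== ${node} ==="
  ssh ${node} "rpm -q abrt || true; sysctl kernel.core_pattern"
done
```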
Unfortunately I didn’t take a screenshot of the summary, in case you wonder where that is. I went straight into the installation phase:
At the end of it you are prompted to run the upgrade scripts. Remember to run them in a screen session, and pay attention to the order in which you run them.
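The sequence looks like this in practice; the node names are from my lab, and the key point is that the script runs serially, with the last node left until every other node has finished:

```shell
# As root on each node, inside screen so a dropped ssh session
# doesn't kill the upgrade mid-flight.
screen -S gi_upgrade
/u01/app/12.1.0.2/grid/rootupgrade.sh    # run on rac12node1 first
# wait for it to complete, then repeat on rac12node2;
# only run it on rac12node3 (the last node) once all others are done,
# because the last node sets the cluster-wide active version
```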
The output from the last node is shown here:
[root@rac12node3 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2014/07/26 16:15:51 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 16:19:58 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 16:20:02 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/07/26 16:20:51 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/07/26 16:20:51 CLSRSC-363: User ignored prerequisites during installation

ASM configuration upgraded in local node successfully.

2014/07/26 16:21:16 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/07/26 16:22:51 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2014/07/26 16:26:53 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/07/26 16:34:34 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/07/26 16:35:55 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2014/07/26 16:35:55 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl set crs activeversion'
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2014/07/26 16:38:51 CLSRSC-479: Successfully set Oracle Clusterware active version
2014/07/26 16:39:13 CLSRSC-476: Finishing upgrade of resource types
2014/07/26 16:39:26 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'
2014/07/26 16:39:26 CLSRSC-477: Successfully completed upgrade of resource types
2014/07/26 16:40:17 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Did you notice that TFA has been added? Trace File Analyzer is another of those cool things to play with; it shipped with 11.2.0.4 and was available as an add-on for 12.1.0.1.
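A quick way to confirm the collector is running after the upgrade is tfactl. On my system it was callable from the Grid home's bin directory; if it isn't there in your version, look under the TFA home inside the Grid home:

```shell
# Show the TFA daemon status on all cluster nodes
# (path assumes the Grid home used throughout this post).
/u01/app/12.1.0.2/grid/bin/tfactl print status
```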
Back to OUI to complete the upgrade. After that, cluvfy performs a final check and I’m done. To prove it worked:
[oracle@rac12node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jul 26 17:13:02 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL>
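The clusterware side can be verified the same way; crsctl reports both the active version (cluster-wide) and the software version (per node):

```shell
# Both commands work as root or the GI owner; the path assumes the
# Grid home used throughout this post.
/u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion
/u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion
# both should now report 12.1.0.2.0
```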
In another post I’ll detail the upgrade of my databases. I am particularly interested in the unplug/plug way of migrating…