Oracle 12.1.0.2 is out; after lots of announcements the product has finally been released. I had just extended my 12.1.0.1.3 cluster to 3 nodes and was about to apply the July PSU when I saw the news. So why not try and upgrade to the brand new thing?
What struck me at first was the list of new features… Oracle’s patching strategy has really changed over time. I remember the days when Oracle didn’t usually add features to point releases. Have a look at the new 12.1.0.2 features and you could argue it would qualify as 12c Release 2…
In summary the upgrade process is actually remarkably simple, and hasn’t changed much since earlier versions of the software. Here are the steps in chronological order.
./runInstaller
I don’t know how often I have typed ./ruinInstaller instead of runInstaller, but here you go. This is the first wizard screen after the splash screen has disappeared.
Naturally I went for the upgrade of my cluster. Before launching the installer, though, I made sure that everything was in working order by means of cluvfy.
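For the record, the pre-upgrade check can be run as the GI owner from the unzipped 12.1.0.2 media; a sketch, where the 12.1.0.1 source home path is an assumption for my lab:

./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
    -src_crshome /u01/app/12.1.0.1/grid \
    -dest_crshome /u01/app/12.1.0.2/grid \
    -dest_version 12.1.0.2.0 -fixup -verbose

On to the next screen: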
I always install English only. Troubleshooting Oracle in a different language (especially one I don’t speak or understand) is really hard, so I avoid it in the first place.
Over to the screen that follows and oops: my SYSDG disk group (containing the OCR and voting files) is too small. Bugger. In the end I added 3 new 10GB LUNs and dropped the old ones, but it took me a couple of hours to do so. Worse: it wasn’t even needed, though it proved to be a good learning exercise. The requirement for that much free space is most likely caused by the management repository and related infrastructure.
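If you have to do the same LUN swap, it is plain ASM housekeeping. A sketch, run as SYSASM against the ASM instance; the disk paths and old disk names are made up for illustration:

ALTER DISKGROUP SYSDG
  ADD DISK '/dev/asm-sysdg-new1', '/dev/asm-sysdg-new2', '/dev/asm-sysdg-new3'
  DROP DISK SYSDG_0000, SYSDG_0001, SYSDG_0002
  REBALANCE POWER 8;

-- wait for the rebalance to finish before unmapping the old LUNs
SELECT operation, state, est_minutes FROM v$asm_operation;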
Back on this screen everything is in order; the screenshot was taken just before moving on to the next. Note the button to skip the updates on unreachable nodes. I’m not sure I’d want to do that though.
I haven’t got OEM agents on the servers (yet) so I’m skipping the registration for now. You can always do that later.
This screen is familiar; I am keeping my choices from the initial installation. Grid Infrastructure is owned by the oracle account by the way, despite the ASMDBA and ASMADMIN groups.
On the screen below you define where on the file system you want to install Grid Infrastructure. Remember that for clustered deployments the ORACLE_HOME must not be located under the ORACLE_BASE. For this to work you have to jump to the command line, create the directory on all servers, and grant ownership to the GI owner account (oracle in this case; it could be grid as well).
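In other words, something like this as root on every node; the path is the one used throughout this post, while the oinstall group is an assumption (use your oraInventory group):

mkdir -p /u01/app/12.1.0.2/grid
chown oracle:oinstall /u01/app/12.1.0.2/grid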
Since I like to be in control I don’t allow Oracle to run the root scripts. I didn’t in 12.1.0.1 either:
On that screen you see the familiar prerequisite checks.
In my case only a few new checks showed up. This is a lab server so I don’t plan on using swap, but the check for the kernel parameter “panic_on_oops” is new. I also hadn’t set up reverse path filtering, which I corrected before continuing. Interestingly, the installer also points out the change to the asm_diskstring and its implications.
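For reference, the fixes boil down to a couple of /etc/sysctl.conf entries along these lines, loaded with sysctl -p; the interface names are placeholders for whatever your private interconnect uses:

# required by the 12.1.0.2 installer
kernel.panic_on_oops = 1
# reverse path filtering in "loose" mode on the interconnect NICs
net.ipv4.conf.eth1.rp_filter = 2
net.ipv4.conf.eth2.rp_filter = 2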
One thing I haven’t recorded here (because I am using Oracle Linux 6.5 with UEK3) is the requirement for using a 2.6.39 kernel – that sounds like UEK2 to me.
Update: my system is Oracle Linux 6.5, not Red Hat. See Sigrid’s comments below: for Red Hat Linux there doesn’t seem to be a similar requirement to use UEK 2, which matches the documentation (Installation guide for Grid Infrastructure/Linux).
Another interesting case was that the kernel core_pattern wasn’t equal on all nodes. It turned out that 2 nodes had the abrt package installed and the other two didn’t. Once the package was installed on all nodes, the warning went away.
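A quick way to spot that kind of discrepancy: abrt rewrites kernel.core_pattern, so compare the value (and the package) across the cluster. A sketch with hypothetical node names:

for h in rac12node1 rac12node2 rac12node3; do
  ssh $h 'hostname; cat /proc/sys/kernel/core_pattern; rpm -q abrt'
done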
Unfortunately I didn’t take a screenshot of the summary, in case you wonder where that is. I went straight into the installation phase:
At the end of it you are prompted to run the upgrade scripts. Remember to run them in a screen session and pay attention to the order in which you run them.
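Something along these lines on each node in turn, starting with the first and finishing on the last:

screen -S rootupgrade                  # survives a dropped SSH session
/u01/app/12.1.0.2/grid/rootupgrade.sh  # as root
# detach with ctrl-a d, re-attach with: screen -r rootupgrade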
The output from the last node is shown here:
[root@rac12node3 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2014/07/26 16:15:51 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 16:19:58 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2014/07/26 16:20:02 CLSRSC-464: Starting retrieval of the cluster configuration data
2014/07/26 16:20:51 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2014/07/26 16:20:51 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2014/07/26 16:21:16 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2014/07/26 16:22:51 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2014/07/26 16:26:53 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2014/07/26 16:34:34 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/07/26 16:35:55 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2014/07/26 16:35:55 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl set crs activeversion'
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2014/07/26 16:38:51 CLSRSC-479: Successfully set Oracle Clusterware active version
2014/07/26 16:39:13 CLSRSC-476: Finishing upgrade of resource types
2014/07/26 16:39:26 CLSRSC-482: Running command: 'upgrade model -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'
2014/07/26 16:39:26 CLSRSC-477: Successfully completed upgrade of resource types
2014/07/26 16:40:17 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Did you notice that TFA has been added? Trace File Analyzer is another of those cool things to play with; it shipped with 11.2.0.4 and was available as an add-on to 12.1.0.1.
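If you want a first look at it after the upgrade, tfactl in the new Grid home is the entry point; a minimal example, run as root:

/u01/app/12.1.0.2/grid/bin/tfactl print status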
Result!
Back to OUI to complete the upgrade, after which cluvfy performs a final check and I’m done. Proof that it worked:
[oracle@rac12node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jul 26 17:13:02 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE    12.1.0.2.0      Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

SQL>
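Clusterware tells the same story; crsctl output along these lines confirms the new active version:

[oracle@rac12node1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]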
In another post I’ll detail the upgrade of my databases. I am particularly interested in the unplug/plug way of migrating…
Hi Martin
Thanks for the great posting. Had a question about your comment on “One thing I haven’t recorded here (because I am using Oracle Linux 6.5 with UEK3) is the requirement for using a 2.6.39 kernel – that sounds like UEK2 to me. I wonder what that means for Red Hat customers. I needed the RHEL compatible kernel because Oracle hasn’t provided virtio SCSI in UEK 2 (but does in UEK3).”
We upgraded from GI 11.2.0.3 to 12.1.0.1 on this platform and are getting ready to upgrade to 12.1.0.2, but now I am not clear whether it’s supported on the RHEL 6.5 2.6.32 kernel.
Please clarify it.
Thx
-Ashish
Ashish,
I used Oracle Linux 6.5 for this test and didn’t use Red Hat 6.x. If you want to find out about your configuration, please run cluvfy from the 12.1.0.2 installation package against your test system to verify that its configuration is supported.
Hope this helps,
Martin
Thanks Martin. We were able to successfully upgrade to 12.1.0.2 on RHEL6.5.
Glad it worked for you. Out of curiosity, did you use UEK2 or was your 2.6.32.x kernel supported?
Hi Martin,
Where did you see that a 2.6.39 kernel was required? I just ran cluvfy on RHEL 6.5, 2.6.32-431.5.1, and it didn’t complain… the installation guide (http://docs.oracle.com/database/121/CWLIN/prelinux.htm#CEGCECCC) says:
Red Hat Enterprise Linux 6
Supported distributions:
• Red Hat Enterprise Linux 6: 2.6.32-71.el6.x86_64 or later
• Red Hat Enterprise Linux 6 with the Unbreakable Enterprise Kernel: 2.6.32-100.28.5.el6.x86_64 or later
However, what I really find annoying is that the space requirements for the voting disks keep growing and growing, and you don’t know where the upper bound might be ;-)
I mean, in the lab it’s fine, but in the workplace it would be nice to be able to give some reliable information to one’s colleagues regarding LUN / NFS export / whatever requirements… ;-)
For the sandbox db at work I’d already asked for 8G LUNs when I installed 12.1.0.1; now that’s just a LITTLE bit too small, and there I am again, asking for new LUNs. (Or else, as this is an upgrade rather than a fresh install, I’ll perhaps just move the mgmtdb away to another disk group, but of course I’m not sure how the upgrade will run after that modification…)
Hi Sigrid,
On my Oracle Linux 6.5 system the installer explicitly checked for UEK2. This is an excerpt from the installactions.log
INFO: *********************************************
INFO: OS Kernel Version: This is a prerequisite condition to test whether the system kernel version is at least "2.6.39".
INFO: Severity:CRITICAL
INFO: OverallStatus:SUCCESSFUL
INFO: -----------------------------------------------
INFO: Verification Result for Node:rac12node3
INFO: Expected Value:2.6.39
INFO: Actual Value:3.8.13-16.2.1.el6uek.x86_64
INFO: -----------------------------------------------
So I suppose that the checks on Red Hat 6.x are different. And I found the same reference in the documentation as you. But in my experience documentation is not always an accurate description of reality :)
I agree about the LUN sizes for the OCR/voting disks. In my second build I just added more LUNs to the disk group, without swapping the smaller ones out.
Martin