Now that the first patch for Oracle 12c has been made available, I was of course keen to see how to apply it. For a first test I opted to use my 3-node RAC cluster running on Oracle Linux 6.4 with UEK2. This post is not terribly well polished; it's more of a log of what I did…
UPDATE: You might want to review this blog post for more information about a known defect with datapatch for PSU 1 and PSU 2.
The cluster makes use of some of the new 12c RAC features such as Flex ASM, but it is not a Flex Cluster:
[oracle@rac12node3 ~]$ srvctl config asm
ASM home: /u01/app/12.1.0.1/grid
Password file: +OCR/orapwASM
ASM listener: LISTENER
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM
[oracle@rac12node3 ~]$
My database is a standard 3-node, administrator-managed build:
[oracle@rac12node3 ~]$ srvctl config database -d rcdb1
Database unique name: RCDB1
Database name: RCDB1
Oracle home: /u01/app/oracle/product/12.1.0.1/dbhome_2
Oracle user: oracle
Spfile: +DATA/RCDB1/spfileRCDB1.ora
Password file: +DATA/RCDB1/orapwrcdb1
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RCDB1
Database instances: RCDB11,RCDB12,RCDB13
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed
Patching
Now about the patch: the patch number to use at the time of writing is 17272829, which I usually stage on an NFS mount before applying it. But as always, you need a new version of OPatch (sigh). I applied the patch with this OPatch version:
[oracle@rac12node3 dbhome_2]$ /u01/app/oracle/product/12.1.0.1/dbhome_2/OPatch/opatch version
OPatch Version: 12.1.0.1.2

OPatch succeeded.
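If your homes still ship an older OPatch, the usual routine is to replace the OPatch directory in the GI and RDBMS homes on every node before starting. A minimal sketch, assuming the latest OPatch (patch 6880880) has been downloaded to an NFS staging area whose path and file name are only examples:

# replace OPatch in the Grid home; repeat for the RDBMS home and on every node
cd /u01/app/12.1.0.1/grid
mv OPatch OPatch.orig
unzip -q /media/nfs/staging/p6880880_121010_Linux-x86-64.zip    # example staging path and file name
OPatch/opatch version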
The next requirement is the OCM.rsp (Oracle Configuration Manager) response file, which you can create with the $ORACLE_HOME/OPatch/ocm/bin/emocmrsp executable. The resulting file is created in the current working directory and should then be distributed to all RAC nodes in your cluster.
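Creating and distributing the response file only takes a minute. A minimal sketch as the grid user, assuming /tmp as the target and the node names from this cluster; emocmrsp prompts for an email address, which can be left empty:

cd /tmp
/u01/app/12.1.0.1/grid/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp

# copy the file to the remaining nodes
scp /tmp/ocm.rsp rac12node2:/tmp/
scp /tmp/ocm.rsp rac12node3:/tmp/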
What to patch
In my case, with a pure 12.1.0.1.0 environment, the decision about what to patch is quickly made: you need opatchauto. Refer to the readme for further configuration information. My case is made a little more interesting by an ADVM volume plus an ACFS file system containing a (different) shared Oracle home, and I wondered how that was going to be handled. The relevant section in the readme is named “Case 2: GI Home is not shared, Database Home is shared, ACFS may be used”, which is the case covered here. Refer to the readme for instructions on how to patch a 12.1.0.1.0 system without ACFS or shared software homes.
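To see which readme case applies to you, it helps to check up front whether any ACFS file systems are registered in the cluster and mounted locally. A quick sketch using standard commands; the output will obviously differ per environment:

# ACFS file systems registered as Clusterware resources
srvctl status filesystem

# ACFS file systems currently mounted on this node
/sbin/acfsutil info fs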
Patch application
I am starting with rac12node1, the first cluster node. It has an ASM instance:
[grid@rac12node1 temp]$ srvctl status asm
ASM is running on rac12node1,rac12node2
I am stopping all Oracle instances for the Oracle home in question using srvctl stop instance -d <db_unique_name> -i <instance_name> (a short example follows further down). This leaves the following auxiliary instances on my system:
[grid@rac12node1 temp]$ ps -ef | grep pmon
grid 2540 1 0 09:50 ? 00:00:00 asm_pmon_+ASM1
grid 3026 1 0 09:51 ? 00:00:00 mdb_pmon_-MGMTDB
grid 6058 1 0 09:52 ? 00:00:00 apx_pmon_+APX1
grid 10754 9505 0 10:41 pts/3 00:00:00 grep pmon
[grid@rac12node1 temp]$
Except for +ASM1, all of the others are new in 12.1; I'll cover them in more detail in another post.
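For reference, the instance stop mentioned above looks like this on the first node, with the database and instance names from this environment; repeat with the matching instance name on each node as you work through the cluster:

srvctl stop instance -d RCDB1 -i RCDB11

# double-check nothing from that home is still running on this node
srvctl status database -d RCDB1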
I then unmounted the ACFS file system on the first node before trying to use the opatchauto command as root to apply the patch. DO NOT DO THIS AS IT DOESN'T WORK: read on.
[root@rac12node1 ~]# /u01/app/12.1.0.1/grid/OPatch/opatchauto apply /u01/app/grid/temp/17272829 \
> -oh /u01/app/12.1.0.1/grid -ocmrf ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation. All rights reserved.

OPatchauto version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Running from : /u01/app/12.1.0.1/grid

opatchauto log file: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-11-13_10-45-33_deploy.log

Parameter Validation: Successful

System Configuration Collection failed: oracle.osysmodel.driver.crs.productdriver.ProductDriverException: Unable to determine if "/u01/app/12.1.0.1/grid" is a shared oracle home.

opatchauto failed with error code 2
Huh? It turns out this is a bug for which a fix is still under development, so the patch has to be applied the old-fashioned way, just as I described it in Pro Oracle RAC 11g on Linux for 11.2.0.1 and earlier.
Manually patching the GRID Home
Here are the steps. Apologies for the verbose output; it is normally nicely hidden away in $GRID_HOME/cfgtoollogs. First the GRID home has to be unlocked, as it is normally owned by the root account. Do this as root.
[root@rac12node1 ~]# /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl -prepatch Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params Oracle Clusterware active version on the cluster is [12.1.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0]. CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12node1' CRS-2673: Attempting to stop 'ora.crsd' on 'rac12node1' CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac12node1' CRS-2673: Attempting to stop 'ora.DATA.ORAHOMEVOL.advm' on 'rac12node1' CRS-2673: Attempting to stop 'ora.oc4j' on 'rac12node1' CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac12node1' CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac12node1' CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac12node1' CRS-2673: Attempting to stop 'ora.cvu' on 'rac12node1' CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac12node1' CRS-2677: Stop of 'ora.cvu' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.cvu' on 'rac12node3' CRS-2676: Start of 'ora.cvu' on 'rac12node3' succeeded CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac12node1' CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.rac12node1.vip' on 'rac12node1' CRS-2677: Stop of 'ora.scan3.vip' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac12node2' CRS-2677: Stop of 'ora.rac12node1.vip' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.rac12node1.vip' on 'rac12node3' CRS-2676: Start of 'ora.scan3.vip' on 'rac12node2' succeeded CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac12node2' CRS-2677: Stop of 'ora.DATA.dg' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac12node1' CRS-2676: Start of 'ora.rac12node1.vip' on 'rac12node3' succeeded CRS-2677: Stop of 'ora.OCR.dg' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.asm' on 'rac12node1' CRS-2677: Stop of 'ora.asm' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12node1' CRS-2677: Stop of 'ora.oc4j' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.oc4j' on 'rac12node2' CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac12node2' succeeded CRS-2677: Stop of 'ora.mgmtdb' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac12node1' CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac12node3' CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.asm' on 'rac12node3' CRS-2676: Start of 'ora.MGMTLSNR' on 'rac12node3' succeeded CRS-2672: Attempting to start 'ora.mgmtdb' on 'rac12node3' CRS-2676: Start of 'ora.asm' on 'rac12node3' succeeded CRS-2676: Start of 'ora.oc4j' on 'rac12node2' succeeded CRS-2676: Start of 'ora.mgmtdb' on 'rac12node3' succeeded CRS-2677: Stop of 'ora.DATA.ORAHOMEVOL.advm' on 'rac12node1' succeeded CRS-2679: Attempting to clean 'ora.DATA.ORAHOMEVOL.advm' on 'rac12node1' CRS-2681: Clean of 'ora.DATA.ORAHOMEVOL.advm' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.proxy_advm' on 'rac12node1' CRS-2677: Stop of 'ora.proxy_advm' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.ons' on 'rac12node1' CRS-2677: Stop of 'ora.ons' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.net1.network' on 
'rac12node1' CRS-2677: Stop of 'ora.net1.network' on 'rac12node1' succeeded CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac12node1' has completed CRS-2677: Stop of 'ora.crsd' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.evmd' on 'rac12node1' CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac12node1' CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac12node1' CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12node1' CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12node1' succeeded CRS-2677: Stop of 'ora.mdnsd' on 'rac12node1' succeeded CRS-2677: Stop of 'ora.gpnpd' on 'rac12node1' succeeded CRS-2677: Stop of 'ora.evmd' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.ctssd' on 'rac12node1' CRS-2673: Attempting to stop 'ora.storage' on 'rac12node1' CRS-2677: Stop of 'ora.storage' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.asm' on 'rac12node1' CRS-2677: Stop of 'ora.asm' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac12node1' CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac12node1' succeeded CRS-2677: Stop of 'ora.ctssd' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.cssd' on 'rac12node1' CRS-2677: Stop of 'ora.cssd' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.crf' on 'rac12node1' CRS-2677: Stop of 'ora.crf' on 'rac12node1' succeeded CRS-2673: Attempting to stop 'ora.gipcd' on 'rac12node1' CRS-2677: Stop of 'ora.gipcd' on 'rac12node1' succeeded CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12node1' has completed CRS-4133: Oracle High Availability Services has been stopped. 2013/11/13 10:54:05 CLSRSC-347: Successfully unlock /u01/app/12.1.0.1/grid
Watch out for the last line: it must report "CLSRSC-347: Successfully unlock" followed by the path to your Grid home.
Now switch to the GRID owner account! As I am using separation of duties in my environment, that is the grid user:
[grid@rac12node1 temp]$ opatch apply -oh /u01/app/12.1.0.1/grid -local 17272829/17027533 -silent \
-ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/12.1.0.1/grid
Central Inventory : /u01/app/oraInventory
from : /u01/app/12.1.0.1/grid/oraInst.loc
OPatch version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Log file location : /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_10-58-09AM_1.log

Applying interim patch '17027533' to OH '/u01/app/12.1.0.1/grid'
Verifying environment and performing prerequisite checks...
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/12.1.0.1/grid')

Is the local system ready for patching? [y|n]
y
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.rdbms, 12.1.0.1.0...
Patching component oracle.rdbms.dbscripts, 12.1.0.1.0...
Patching component oracle.rdbms.rsf, 12.1.0.1.0...
Patching component oracle.ldap.rsf, 12.1.0.1.0...
Patching component oracle.ldap.rsf.ic, 12.1.0.1.0...
Verifying the update...
Patch 17027533 successfully applied
Log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_10-58-09AM_1.log

OPatch succeeded.
Followed by the next patch:
[grid@rac12node1 temp]$ opatch apply -oh /u01/app/12.1.0.1/grid -local 17272829/17303297 -silent \
> -ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/12.1.0.1/grid
Central Inventory : /u01/app/oraInventory
from : /u01/app/12.1.0.1/grid/oraInst.loc
OPatch version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Log file location : /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_11-06-37AM_1.log

Applying interim patch '17303297' to OH '/u01/app/12.1.0.1/grid'
Verifying environment and performing prerequisite checks...
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/12.1.0.1/grid')

Is the local system ready for patching? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.usm, 12.1.0.1.0...
Verifying the update...
Patch 17303297 successfully applied
Log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_11-06-37AM_1.log

OPatch succeeded.
And finally the last one for the GI home:
[grid@rac12node1 temp]$ opatch apply -oh /u01/app/12.1.0.1/grid -local 17272829/17077442 -silent \
> -ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/12.1.0.1/grid
Central Inventory : /u01/app/oraInventory
from : /u01/app/12.1.0.1/grid/oraInst.loc
OPatch version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Log file location : /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_11-26-23AM_1.log

Applying interim patch '17077442' to OH '/u01/app/12.1.0.1/grid'
Verifying environment and performing prerequisite checks...
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/12.1.0.1/grid')

Is the local system ready for patching? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.crs, 12.1.0.1.0...
Patching component oracle.has.db, 12.1.0.1.0...
Patching component oracle.has.common, 12.1.0.1.0...
Verifying the update...
Patch 17077442 successfully applied
Log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_11-26-23AM_1.log

OPatch succeeded.
That's one Oracle home patched; now on to the RDBMS home.
Patching the RDBMS home
As the RDBMS software owner, patch the home. The first step is to run the prepatch script:
[oracle@rac12node1 ~]$ /u01/app/grid/temp/17272829/17077442/custom/scripts/prepatch.sh \
> -dbhome /u01/app/oracle/product/12.1.0.1/dbhome_2
/u01/app/grid/temp/17272829/17077442/custom/scripts/prepatch.sh completed successfully.
Then apply the subset of the patches just applied to the GI home that is relevant to the RDBMS home, namely 17027533 and 17077442:
[oracle@rac12node1 ~]$ opatch apply -oh /u01/app/oracle/product/12.1.0.1/dbhome_2 \
> -local /u01/app/grid/temp/17272829/17027533 -silent -ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/oracle/product/12.1.0.1/dbhome_2
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/12.1.0.1/dbhome_2/oraInst.loc
OPatch version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Log file location : /u01/app/oracle/product/12.1.0.1/dbhome_2/cfgtoollogs/opatch/opatch2013-11-13_11-34-24AM_1.log

Applying interim patch '17027533' to OH '/u01/app/oracle/product/12.1.0.1/dbhome_2'
Verifying environment and performing prerequisite checks...
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.1/dbhome_2')

Is the local system ready for patching? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.rdbms, 12.1.0.1.0...
Patching component oracle.rdbms.dbscripts, 12.1.0.1.0...
Patching component oracle.rdbms.rsf, 12.1.0.1.0...
Patching component oracle.ldap.rsf, 12.1.0.1.0...
Patching component oracle.ldap.rsf.ic, 12.1.0.1.0...
Verifying the update...
Patch 17027533 successfully applied
Log file location: /u01/app/oracle/product/12.1.0.1/dbhome_2/cfgtoollogs/opatch/opatch2013-11-13_11-34-24AM_1.log

OPatch succeeded.

[oracle@rac12node1 dbhome_2]$ opatch apply -oh /u01/app/oracle/product/12.1.0.1/dbhome_2 \
> -local /u01/app/grid/temp/17272829/17077442 -silent -ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/oracle/product/12.1.0.1/dbhome_2
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/12.1.0.1/dbhome_2/oraInst.loc
OPatch version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Log file location : /u01/app/oracle/product/12.1.0.1/dbhome_2/cfgtoollogs/opatch/opatch2013-11-13_11-03-20AM_1.log

Applying interim patch '17077442' to OH '/u01/app/oracle/product/12.1.0.1/dbhome_2'
Verifying environment and performing prerequisite checks...
Patch 17077442: Optional component(s) missing : [ oracle.crs, 12.1.0.1.0 ]
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.1/dbhome_2')

Is the local system ready for patching? [y|n]
y
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.has.db, 12.1.0.1.0...
Patching component oracle.has.common, 12.1.0.1.0...
Verifying the update...
Patch 17077442 successfully applied
Log file location: /u01/app/oracle/product/12.1.0.1/dbhome_2/cfgtoollogs/opatch/opatch2013-11-13_11-03-20AM_1.log

OPatch succeeded.
The ACFS-based Oracle home cannot be patched at this time as the file system is not mounted. I'll do that later; this has already taken longer than I expected. Now it is time to clean up and prepare to start the stack:
[oracle@rac12node1 ~]$ /u01/app/grid/temp/17272829/17077442/custom/scripts/postpatch.sh \
> -dbhome /u01/app/oracle/product/12.1.0.1/dbhome_2
Reading /u01/app/oracle/product/12.1.0.1/dbhome_2/install/params.ora..
Reading /u01/app/oracle/product/12.1.0.1/dbhome_2/install/params.ora..
Parsing file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvctl
Parsing file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvconfig
Parsing file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/cluvfy
Verifying file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/racgwrap
Skipping the missing file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/racgwrap
Verifying file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvctl
Verifying file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvconfig
Verifying file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/cluvfy
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvctl
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvconfig
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/cluvfy
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/diskmon.bin
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/lsnodes
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/osdbagrp
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/rawutl
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/srvm/admin/ractrans
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/srvm/admin/getcrshome
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/gnsd
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/crsdiag.pl
Wrap up
As root, you need to run a few more steps to lock the homes and set ownership/permissions properly:
/u01/app/12.1.0.1/grid/rdbms/install/rootadd_rdbms.sh [root@rac12node1 install]# /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl -postpatch Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params CRS-4123: Starting Oracle High Availability Services-managed resources CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12node1' CRS-2672: Attempting to start 'ora.evmd' on 'rac12node1' CRS-2676: Start of 'ora.evmd' on 'rac12node1' succeeded CRS-2676: Start of 'ora.mdnsd' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12node1' CRS-2676: Start of 'ora.gpnpd' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.gipcd' on 'rac12node1' CRS-2676: Start of 'ora.gipcd' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12node1' CRS-2676: Start of 'ora.cssdmonitor' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.cssd' on 'rac12node1' CRS-2672: Attempting to start 'ora.diskmon' on 'rac12node1' CRS-2676: Start of 'ora.diskmon' on 'rac12node1' succeeded CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac12node1' CRS-2676: Start of 'ora.cssd' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12node1' CRS-2672: Attempting to start 'ora.ctssd' on 'rac12node1' CRS-2676: Start of 'ora.ctssd' on 'rac12node1' succeeded CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.asm' on 'rac12node1' CRS-2676: Start of 'ora.asm' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.storage' on 'rac12node1' CRS-2676: Start of 'ora.storage' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.crf' on 'rac12node1' CRS-2676: Start of 'ora.crf' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.crsd' on 'rac12node1' CRS-2676: Start of 'ora.crsd' on 'rac12node1' succeeded CRS-6023: Starting Oracle Cluster Ready Services-managed resources CRS-6017: Processing resource auto-start for servers: rac12node1 CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac12node1' CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac12node1' CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac12node1' CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac12node1' CRS-2672: Attempting to start 'ora.rac12node1.vip' on 'rac12node1' CRS-2672: Attempting to start 'ora.ons' on 'rac12node1' CRS-2672: Attempting to start 'ora.rac12node2.vip' on 'rac12node1' CRS-2672: Attempting to start 'ora.oc4j' on 'rac12node1' CRS-2672: Attempting to start 'ora.cvu' on 'rac12node1' CRS-2672: Attempting to start 'ora.rac12node3.vip' on 'rac12node1' CRS-2676: Start of 'ora.cvu' on 'rac12node1' succeeded CRS-2676: Start of 'ora.scan2.vip' on 'rac12node1' succeeded CRS-2676: Start of 'ora.scan3.vip' on 'rac12node1' succeeded CRS-2676: Start of 'ora.scan1.vip' on 'rac12node1' succeeded CRS-2676: Start of 'ora.rac12node2.vip' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac12node1' CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac12node1' CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac12node1' CRS-2676: Start of 'ora.rac12node3.vip' on 'rac12node1' succeeded CRS-2676: Start of 'ora.rac12node1.vip' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac12node1' CRS-2676: Start of 'ora.MGMTLSNR' on 'rac12node1' succeeded CRS-2676: Start of 'ora.ons' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.mgmtdb' on 
'rac12node1' CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac12node1' succeeded CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac12node1' succeeded CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac12node1' succeeded CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac12node1' succeeded CRS-2672: Attempting to start 'ora.rcdb1.db' on 'rac12node1' CRS-2676: Start of 'ora.oc4j' on 'rac12node1' succeeded CRS-2676: Start of 'ora.mgmtdb' on 'rac12node1' succeeded CRS-2676: Start of 'ora.rcdb1.db' on 'rac12node1' succeeded CRS-6016: Resource auto-start has completed for server rac12node1 CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources CRS-4123: Oracle High Availability Services has been started. Oracle Clusterware active version on the cluster is [12.1.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0]. PRCC-1010 : proxy_advm was already enabled PRCR-1002 : Resource ora.proxy_advm is already enabled SQL Patching tool version 12.1.0.1.0 on Wed Nov 13 11:48:18 2013 Copyright (c) 2013, Oracle. All rights reserved. Connecting to database...OK Determining current state...done Nothing to roll back The following patches will be applied: 17027533 Adding patches to installation queue...done Installing patches...done Validating logfiles...done SQL Patching tool complete on Wed Nov 13 11:48:54 2013
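Note that rootcrs.pl -postpatch invoked the SQL Patching tool against the running database at the end of its run. If the SQL changes ever need to be applied (or re-applied) manually, that is done with datapatch from the database home; a minimal sketch, and keep the datapatch defect mentioned at the top of this post in mind for PSU 1 and PSU 2:

# run as the oracle user with the database open
cd /u01/app/oracle/product/12.1.0.1/dbhome_2/OPatch
./datapatch -verbose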
Once the rootcrs.pl script has finished you should be able to patch the ACFS-based RDBMS home, which I have skipped for now.
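For the record, this is my reading of how the shared ACFS home will have to be patched later, per the readme's "Case 2"; it is untested here, and the device and path names below are only placeholders:

# mount the ACFS file system holding the shared home (example ADVM device name)
srvctl start filesystem -device /dev/asm/orahomevol-123

# the home is shared, so prepatch/opatch/postpatch are run only once for all nodes
<patch_dir>/17077442/custom/scripts/prepatch.sh -dbhome <shared_oracle_home>
opatch apply -oh <shared_oracle_home> -local <patch_dir>/17027533 -silent -ocmrf /tmp/ocm.rsp
opatch apply -oh <shared_oracle_home> -local <patch_dir>/17077442 -silent -ocmrf /tmp/ocm.rsp
<patch_dir>/17077442/custom/scripts/postpatch.sh -dbhome <shared_oracle_home>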
You need to repeat the patching on all nodes in the cluster before the operation is complete, and that is how I thought I'd end the post. However, this is where the fun really started!
Remember that CRS is currently started on rac12node1, the node I just patched. To work around space issues on my /u01 mount point I had to extend it on rac12node2 and rac12node3. Before shutting those nodes down I disabled CRS, then added storage and started them again. In other words, only one node was active at this point. Trying to run the prepatch command on rac12node2 and rac12node3 failed:
[root@rac12node2 rac12node2]# /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl -prepatch
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
2013/11/13 12:27:59 CLSRSC-117: Failed to start Oracle Clusterware stack
Died at /u01/app/12.1.0.1/grid/crs/install/crspatch.pm line 646.
I stopped CRS and started it again, but no luck: CRSD and EVMD just wouldn't come up. The OCR service complained about a version mismatch:
2013-11-13 12:25:51.750: [crsd(3079)]CRS-1019:The OCR Service exited on host rac12node2. Details in /u01/app/12.1.0.1/grid/log/rac12node2/crsd/crsd.log
The log file showed information about the OCR master, and this, I believe, is where the problem lies: rac12node1 is probably the master, since nodes 2 and 3 were shut down when CRS started on node 1. There is some evidence for this in the crsd log file:
2013-11-13 12:25:19.182: [ OCRMAS][241129216]th_populate_rank: Rank of OCR:[1]. Rank of ASM Instance:[0]. Rank of CRS Standby:[0]. OCR on ASM:[1]. ASM mode:[2]. My Rank:[1]. Min Rank:[3].
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_ocrid=479152486
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_pubdata: begin dumping pubdatactx->prom_pubdata_prom_con
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_con: promcon->cache_invalidation_port=0
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_con: promcon->remote_listening_port=0
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_con: promcon->local_listening_port=0
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_pubdata: end dumping pubdatactx->prom_pubdata_prom_con
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_software_version=202375424
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_active_version=202375424
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_software_patch=0
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_active_patch=0
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_state=0
2013-11-13 12:25:19.183: [ OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_priv_nodename=gipcha<rac12node2> <dd63-600c-b68f-816f><17ae-619b-7d2b-9c41>
2013-11-13 12:25:19.191: [ OCRMAS][239027968]th_monitor_ocrlocalgrp: Reconfig event is received and there is no change in ASM instance membership. Ignoring this event.
2013-11-13 12:25:19.195: [ OCRMAS][241129216]proath_master:5':reconfig/grpmaster event [1] happened. Incarnation:[194]
2013-11-13 12:25:19.197: [ OCRMAS][241129216]rcfg_con:1: Member [1] joined. Inc [194].
2013-11-13 12:25:19.197: [ OCRMAS][241129216]rcfg_con:1: Member [2] joined. Inc [194].
2013-11-13 12:25:19.197: [ OCRMAS][241129216]proath_master: Master changing. cssctx->master [-1] new master [1] is_new_mastership [1] num_rcfg [1].
2013-11-13 12:25:19.198: [ OCRMAS][241129216]th_hub_verify_master_pubdata: Shutdown CacheLocal. Patch Levels don't match. Local Patch Level [0] != Cache Writer Patch Level [1650217826]
2013-11-13 12:25:19.198: [ OCRAPI][241129216]procr_ctx_set_invalid: ctx is in state [6].
2013-11-13 12:25:19.198: [ OCRAPI][241129216]procr_ctx_set_invalid: ctx set to invalid
2013-11-13 12:25:19.199: [ OCRAPI][241129216]procr_ctx_set_invalid: Aborting...
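The patch level of the local GI software versus the cluster's active patch level can be compared with crsctl; these are the 12.1 commands as I understand them, run from the GI home as root or the grid owner:

# patch level of the locally installed Grid Infrastructure software
crsctl query crs softwarepatch

# active version and patch level of the cluster as a whole
crsctl query crs activeversion -f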
I then stopped CRS on node 1. Remember, this is my lab, so I can easily do that; in a real production environment where you are performing a rolling patch this wouldn't be so easy.
Once CRS was down on node 1, running the prepatch again on nodes 2 and 3 succeeded, and the patch application went smoothly as well. It reminded me of exactly the same problem I hit when patching 10.2.0.4 to 11.2.0.1 years ago: watch out for the OCR master :)
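Once all nodes are done it is worth a quick sanity check that the patches are recorded everywhere; nothing PSU-specific, just the standard inventory and registry queries:

# inventory of the Grid home (repeat with -oh pointing at the RDBMS home)
/u01/app/12.1.0.1/grid/OPatch/opatch lsinventory -oh /u01/app/12.1.0.1/grid

# SQL side of the PSU as recorded in the database
SQL> select patch_id, status, action_time from dba_registry_sqlpatch;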
Hi Martin,
Nice article and it was of help to me.
On my system the error "Unable to determine if /u01/app/12.1.0.1/grid is a shared oracle home" was (strangely) worked around with a permissions change to the patch directory hierarchy, and the opatchauto command finally worked! I had similar issues on the second node, where CRS did not want to stop until CRS on the first (patched) node was shut down.
I blogged about it here: http://malcolmthorns.wordpress.com/2013/11/14/231/
Thanks,
Malcolm
Hi Martin,
I was able to reproduce this on node 3, as demonstrated below:
‘Over’ permissions on parent directory hierarchy
======================================
[grid@ukrac3 patches]$ chmod 777 /home/grid
[grid@ukrac3 patches]$ chmod 777 /home/grid/patches
[grid@ukrac3 patches]$ chmod 777 /home/grid/patches/PSU1
[root@ukrac3 grid]# ls -l /home
drwxrwxrwx. 5 grid oinstall 4096 Nov 21 16:59 grid
drwx------. 5 oracle oinstall 4096 Oct 27 16:29 oracle
[root@ukrac3 grid]# ls -l /home/grid
drwxrwxrwx. 3 grid oinstall 4096 Nov 19 16:06 patches
[root@ukrac3 grid]# ls -l /home/grid/patches
drwxrwxrwx. 3 grid oinstall 4096 Nov 19 16:07 PSU1
[grid@ukrac3 patches]$ ls -l /home/grid/patches/PSU1
drwxrwxrwx. 6 grid oinstall 4096 Oct 14 08:53 17272829
-rw-r--r--. 1 grid oinstall 739187177 Nov 19 16:05 p17272829_121010_Linux-x86-64.zip
APPLY THE PATCH - FAIL
----------------------
[root@ukrac3 grid]# u01/app/12.1.0/grid/OPatch/opatchauto apply /home/grid/patches/PSU1/17272829 -ocmrf /tmp/ocm.rsp
bash: u01/app/12.1.0/grid/OPatch/opatchauto: No such file or directory
[root@ukrac3 grid]# /u01/app/12.1.0/grid/OPatch/opatchauto apply /home/grid/patches/PSU1/17272829 -ocmrf /tmp/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation. All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Running from : /u01/app/12.1.0/grid
opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-11-21_17-26-10_deploy.log
Parameter Validation: Successful
System Configuration Collection failed: oracle.osysmodel.driver.crs.productdriver.ProductDriverException: Unable to determine if "/u01/app/12.1.0/grid" is a shared oracle home.
opatchauto failed with error code 2.
‘Less’ Permissions on parent directory hierarchy
======================================
[root@ukrac3 grid]# chmod 755 /home/grid
[root@ukrac3 grid]# chmod 755 /home/grid/patches
[root@ukrac3 grid]# chmod 755 /home/grid/patches/PSU1
[root@ukrac3 grid]# ls -l /home
drwxr-xr-x. 5 grid oinstall 4096 Nov 21 16:59 grid
drwx------. 5 oracle oinstall 4096 Oct 27 16:29 oracle
[root@ukrac3 grid]# ls -l /home/grid
drwxr-xr-x. 3 grid oinstall 4096 Nov 19 16:06 patches
[root@ukrac3 grid]# ls -l /home/grid/patches
drwxr-xr-x. 3 grid oinstall 4096 Nov 19 16:07 PSU1
[grid@ukrac3 patches]$ ls -l /home/grid/patches/PSU1
drwxrwxrwx. 6 grid oinstall 4096 Oct 14 08:53 17272829
-rw-r--r--. 1 grid oinstall 739187177 Nov 19 16:05 p17272829_121010_Linux-x86-64.zip
APPLY THE PATCH - SUCCESS
-------------------------
[root@ukrac3 grid]# /u01/app/12.1.0/grid/OPatch/opatchauto apply /home/grid/patches/PSU1/17272829 -ocmrf /tmp/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation. All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Running from : /u01/app/12.1.0/grid
opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-11-21_17-28-48_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /home/grid/patches/PSU1/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297
RAC Patch(es): 17027533 17077442
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) … Successful
Applying patch(es) to “/u01/app/oracle/product/12.1.0/dbhome_1” …
Patch “/home/grid/patches/PSU1/17272829/17027533” successfully applied to “/u01/app/oracle/product/12.1.0/dbhome_1”.
Patch “/home/grid/patches/PSU1/17272829/17077442” successfully applied to “/u01/app/oracle/product/12.1.0/dbhome_1”.
Stopping CRS … Successful
Applying patch(es) to “/u01/app/12.1.0/grid” …
Patch “/home/grid/patches/PSU1/17272829/17027533” successfully applied to “/u01/app/12.1.0/grid”.
Patch “/home/grid/patches/PSU1/17272829/17077442” successfully applied to “/u01/app/12.1.0/grid”.
Patch “/home/grid/patches/PSU1/17272829/17303297” successfully applied to “/u01/app/12.1.0/grid”.
Starting CRS … Successful
Starting RAC (/u01/app/oracle/product/12.1.0/dbhome_1) … Successful
[WARNING] SQL changes, if any, could not be applied on the following database(s): RAC … Please refer to the log file for more details.
Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0/grid: 17027533, 17077442, 17303297
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17027533, 17077442
opatchauto succeeded.
Thanks for the follow-up!