Martins Blog

Trying to explain complex things in simple terms


Inside a RAC 12c GNS cluster

Posted by Martin Bach on November 26, 2013

Based on some reader feedback I started looking at GNS again, but this time for RAC 12c. According to the documentation GNS has been enhanced so that you can use it without subdomain delegation. I decided to try the "old-fashioned" way though: DHCP for VIPs and SCAN IPs, subdomain delegation and the like, as it is the most complex setup. I occasionally like complex.
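For reference, this is roughly what the subdomain delegation looks like in the parent zone. A minimal sketch assuming a BIND name server; the names and the GNS VIP address are the ones that appear later in this post, the exact zone file layout is an assumption:

; excerpt from the example.com zone on the corporate DNS server (sketch)
gns-vip.example.com.    IN  A   192.168.100.37
; hand the whole GNS subdomain over to the cluster's GNS VIP
gns.example.com.        IN  NS  gns-vip.example.com.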

The network setup is exactly the same as the one I used before in 11.2 and thankfully didn't require any changes. The cluster I am building is a two-node system on Oracle Linux 6.4 with the Red Hat compatible kernel. I have to use that kernel as the Unbreakable Kernel doesn't know about block devices made available to it via virtio-scsi. I use virtio-scsi for shared block devices very much in the same way I did for Xen.
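For anyone curious how such a shared disk can be presented to the guests, below is a sketch of the relevant bits of a libvirt domain definition. This is not my exact configuration; the controller/disk layout is the standard virtio-scsi pattern and the device path is a made-up placeholder:

<!-- virtio-scsi controller plus one shared raw block device (sketch) -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg_data/asm_disk01'/>   <!-- placeholder path -->
  <target dev='sdb' bus='scsi'/>
  <shareable/>
</disk>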

The installation was rather rough: I had a few niggles around the DNS setup and reverse name resolution. I wish an updated checkip.sh were available for the standard RAC installation; it would make it so much easier to detect problems in advance!
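In the absence of such a script, here is a rough sketch of the kind of pre-check I mean. It simply compares forward and reverse lookups; the host names are from this cluster, and the assumption that the public names live in example.com is mine:

#!/bin/bash
# quick forward/reverse DNS sanity check before a RAC installation (sketch)
for name in rac12node1.example.com rac12node2.example.com gns-vip.example.com; do
    ip=$(dig +short "$name" A | head -1)
    if [ -z "$ip" ]; then
        echo "FAIL: $name does not resolve"
        continue
    fi
    ptr=$(dig +short -x "$ip")
    echo "$name -> $ip -> ${ptr:-<no PTR record>}"
done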

Once the networking problems were sorted out I could confirm that GNS was indeed working after the installation of both nodes:

[oracle@rac12node1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac12node1               STABLE
               ONLINE  ONLINE       rac12node2               STABLE
ora.SYSDG.dg
               ONLINE  ONLINE       rac12node1               STABLE
               ONLINE  ONLINE       rac12node2               STABLE
ora.asm
               ONLINE  ONLINE       rac12node1               Started,STABLE
               ONLINE  ONLINE       rac12node2               Started,STABLE
ora.net1.network
               ONLINE  ONLINE       rac12node1               STABLE
               ONLINE  ONLINE       rac12node2               STABLE
ora.ons
               ONLINE  ONLINE       rac12node1               STABLE
               ONLINE  ONLINE       rac12node2               STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac12node2               STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac12node1               STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac12node1               STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac12node1               169.254.61.234 192.1
                                                             68.101.30 192.168.10
                                                             2.30,STABLE
ora.cvu
      1        ONLINE  ONLINE       rac12node1               STABLE
ora.gns
      1        ONLINE  ONLINE       rac12node1               STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac12node1               STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac12node1               Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       rac12node1               STABLE
ora.rac12node1.vip
      1        ONLINE  ONLINE       rac12node1               STABLE
ora.rac12node2.vip
      1        ONLINE  ONLINE       rac12node2               STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac12node2               STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac12node1               STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac12node1               STABLE
--------------------------------------------------------------------------------

As you can see the GNS resources (GNS and the GNS VIP) are part of the upper stack, and both are available. If that is the case then some IP addresses must have been acquired via DHCP, and sure enough, checking my DHCP server confirms that addresses were leased:

...
Nov 15 14:24:22 aux dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 via eth0
Nov 15 14:24:22 aux dhcpd: Abandoning IP address 192.168.100.32: pinged before offer
Nov 15 14:24:22 aux dhcpd: Wrote 0 class decls to leases file.
Nov 15 14:24:22 aux dhcpd: Wrote 2 leases to leases file.
Nov 15 14:24:27 aux dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 via eth0
Nov 15 14:24:28 aux dhcpd: DHCPOFFER on 192.168.100.33 to 00:00:00:00:00:00 via eth0
Nov 15 14:24:28 aux dhcpd: DHCPREQUEST for 192.168.100.33 (192.168.100.2) from
 00:00:00:00:00:00 via eth0
Nov 15 14:24:28 aux dhcpd: DHCPACK on 192.168.100.33 to 00:00:00:00:00:00 via eth0
Nov 15 14:24:46 aux dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 via eth0
Nov 15 14:24:47 aux dhcpd: DHCPOFFER on 192.168.100.34 to 00:00:00:00:00:00 via eth0
Nov 15 14:24:47 aux dhcpd: DHCPREQUEST for 192.168.100.34 (192.168.100.2) from 00:00:00:00:00:00 via eth0
Nov 15 14:24:47 aux dhcpd: DHCPACK on 192.168.100.34 to 00:00:00:00:00:00 via eth0
Nov 15 14:24:52 aux dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 via eth0
Nov 15 14:24:53 aux dhcpd: DHCPOFFER on 192.168.100.35 to 00:00:00:00:00:00 via eth0
Nov 15 14:24:53 aux dhcpd: DHCPREQUEST for 192.168.100.35 (192.168.100.2) from
00:00:00:00:00:00 via eth0
Nov 15 14:24:53 aux dhcpd: DHCPACK on 192.168.100.35 to 00:00:00:00:00:00 via eth0
Nov 15 14:25:11 aux dhcpd: DHCPDISCOVER from 00:00:00:00:00:00 via eth0
Nov 15 14:25:12 aux dhcpd: DHCPOFFER on 192.168.100.36 to 00:00:00:00:00:00 via eth0
Nov 15 14:25:12 aux dhcpd: DHCPREQUEST for 192.168.100.36 (192.168.100.2) from
00:00:00:00:00:00 via eth0
Nov 15 14:25:12 aux dhcpd: DHCPACK on 192.168.100.36 to 00:00:00:00:00:00 via eth0
...
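If you prefer the lease database over the syslog output, the same information ends up in the leases file on the DHCP server. A sketch, assuming the stock ISC dhcpd layout on Oracle Linux 6:

[root@aux ~]# grep -A 3 "^lease 192.168.100." /var/lib/dhcpd/dhcpd.leases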

Further information about the network address acquisition can be found in the GNS log file in $GRID_HOME/log/$(hostname -s)/gnsd/gnsd.log. After a lot of verbose messaging about its own initialisation you can find some more interesting information.

2013-11-15 14:24:16.892: [     GNS][827389472]main::clsgndGetInstanceInfo: version: 12.1.0.1.0
 address "tcp://192.168.100.37:33912" port: 33912 process ID: "11081" state: "Initializing".
2013-11-15 14:24:16.892: [     GNS][827389472]main::clsgndadvAdvertise: Listening for commands
 on tcp://192.168.100.37:33912 port 33912.

So it’s ready to take commands. And soon thereafter they come in:

2013-11-15 14:24:21.309: [     GNS][690910976]Resolver #0::clsgndnsAddAnswer: name:
 "Oracle-GNS" adding Name: "Oracle-GNS" Type: SRV service: Oracle-GNS instance: rac12gns
  target: Oracle-GNS protocol: tcp service name: gns.example.com sub-subdomain: (NULL)
  port: 33912 weight: 0 priority: 0 Unique: FALSE Flags: ALLOCATED NON-PERSISTENT
  NO-EXPIRATION Creation: Fri Nov 15 14:24:16 2013  Expiration Fri Nov 15 14:24:31 2013
...
2013-11-15 14:24:38.947: [     GNS][680404736]Command #1::clsgndcpHostAdvertise:
 advertising name "rac12node1-vip" address: "192.168.100.33" time to live (seconds): 0
 lease expiration (seconds) 0.
2013-11-15 14:24:43.601: [     GNS][680404736]Command #1::clsgnctrInsertHMAC: Connection
 ID: length: 48 7169...f7d85556125a
2013-11-15 14:24:51.707: [     GNS][678303488]Command #2::clsgnctrCreateReceivePacket:
 connection version: 12.1.0.0.0 (0xc100000)
2013-11-15 14:24:51.707: [     GNS][678303488]Command #2::clsgnctrCreateReceivePacket:
 Connection ID: length: 48 b720f6f1ab0...30f2fe49811677520829bb71b74fbce
2013-11-15 14:24:51.708: [     GNS][678303488]Command #2::clsgndcpWait: Running command
 "advertise" (4933)
2013-11-15 14:24:51.708: [     GNS][678303488]Command #2::clsgndcpHostAdvertise:
 advertising name "rac12gns-scan1-vip" address: "192.168.100.34" time to
 live (seconds): 0 lease expiration (seconds) 0.
2013-11-15 14:24:57.286: [     GNS][678303488]Command #2::clsgnctrInsertHMAC: Connection
 ID: length: 48 77d111eb90a30...4f6da783e4e2fd033529
2013-11-15 14:24:57.288: [     GNS][680404736]Command #1::clsgnctrCreateReceivePacket:
  connection version: 12.1.0.0.0 (0xc100000)
2013-11-15 14:24:57.288: [     GNS][680404736]Command #1::clsgnctrCreateReceivePacket:
 Connection ID: length: 48 bddc...930aa35aef01bcd880ab041c0406681958f1fb3c08d8ab51
2013-11-15 14:24:57.289: [     GNS][680404736]Command #1::clsgndcpWait: Running command
 "advertise" (4933)
2013-11-15 14:24:57.289: [     GNS][680404736]Command #1::clsgndcpHostAdvertise:
 advertising name "rac12scan" address: "192.168.100.34" time to live (seconds): 0
 lease expiration (seconds) 0.
2013-11-15 14:25:11.489: [     GNS][676202240]Command #3::clsgnctrCreateReceivePacket:
 connection version: 12.1.0.0.0 (0xc100000)
2013-11-15 14:25:11.489: [     GNS][676202240]Command #3::clsgnctrCreateReceivePacket:
 Connection ID: length: 48 1898551a83efa80ee41b50dd68ea...7dba2099a097
2013-11-15 14:25:11.489: [     GNS][676202240]Command #3::clsgndcpWait: Running command
 "advertise" (4933)
2013-11-15 14:25:11.489: [     GNS][676202240]Command #3::clsgndcpHostAdvertise:
 advertising name "rac12gns-scan2-vip" address: "192.168.100.35" time to live (seconds):
 0 lease expiration (seconds) 0.
...

But instead of having to go through this file you can get the same information from Clusterware:

[oracle@rac12node1 ~]$ srvctl config scan
SCAN name: rac12scan.gns.example.com, Network: 1
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: -/scan1-vip/192.168.100.34
SCAN name: rac12scan.gns.example.com, Network: 1
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 1 IPv4 VIP: -/scan2-vip/192.168.100.35
SCAN name: rac12scan.gns.example.com, Network: 1
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 2 IPv4 VIP: -/scan3-vip/192.168.100.36

[oracle@rac12node1 ~]$ srvctl config nodeapps
Network 1 exists
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0, dhcp
Subnet IPv6:
VIP exists: network number 1, hosting node rac12node1
VIP IPv4 Address: -/rac12node1-vip/192.168.100.33
VIP IPv6 Address:
VIP exists: network number 1, hosting node rac12node2
VIP IPv4 Address: -/rac12node2-vip/192.168.100.39
VIP IPv6 Address:
ONS exists: Local port 6100, remote port 6200, EM port 2016

[oracle@rac12node1 ~]$ srvctl config gns -list
Oracle-GNS A 192.168.100.37 Unique Flags: 0x15
rac12gns-scan1-vip A 192.168.100.34 Unique Flags: 0x1
rac12gns-scan2-vip A 192.168.100.35 Unique Flags: 0x1
rac12gns-scan3-vip A 192.168.100.36 Unique Flags: 0x1
rac12gns.Oracle-GNS SRV Target: Oracle-GNS Protocol: tcp Port: 33912 Weight: 0 Priority: 0
  Flags: 0x15
rac12gns.Oracle-GNS TXT CLUSTER_NAME="rac12gns", CLUSTER_GUID="16aaef3df3d8efd6bffb1f37101d84e3",
  NODE_ADDRESS="192.168.100.37", SERVER_STATE="RUNNING", VERSION="12.1.0.1.0",
  DOMAIN="gns.example.com" Flags: 0x15
rac12node1-vip A 192.168.100.33 Unique Flags: 0x1
rac12node2-vip A 192.168.100.39 Unique Flags: 0x1
rac12scan A 192.168.100.34 Unique Flags: 0x1
rac12scan A 192.168.100.35 Unique Flags: 0x1
rac12scan A 192.168.100.36 Unique Flags: 0x1

[root@rac12node1 ~]# srvctl config gns -detail
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5,353 to connect to mDNS
GNS status: OK
Domain served by GNS: gns.example.com
GNS version: 12.1.0.1.0
Globally unique identifier of the cluster where GNS is running: 16aaef3df3d8efd6bffb1f37101d84e3
Name of the cluster where GNS is running: rac12gns
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.168.100.37:16934.

[oracle@rac12node1 ~]$ cluvfy comp gns -postcrsinst -verbose

Verifying GNS integrity

Checking GNS integrity...
Checking if the GNS subdomain name is valid...
The GNS subdomain name "gns.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.100.0, 192.168.100.0, 192.168.100.0, 192.168.100.0,
  192.168.100.0" match with the GNS VIP "192.168.100.0, 192.168.100.0, 192.168.100.0,
  192.168.100.0, 192.168.100.0"
Checking if the GNS VIP is a valid address...
GNS VIP "gns-vip.example.com" resolves to a valid IP address
Checking the status of GNS VIP...
Checking if FDQN names for domain "gns.example.com" are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable
Checking status of GNS resource...
  Node          Running?                  Enabled?
  ------------  ------------------------  ------------------------
  rac12node1    yes                       yes
  rac12node2    no                        yes

GNS resource configuration check passed
Checking status of GNS VIP resource...
  Node          Running?                  Enabled?
  ------------  ------------------------  ------------------------
  rac12node1    yes                       yes
  rac12node2    no                        yes

GNS VIP resource configuration check passed.

GNS integrity check passed

Verification of GNS integrity was successful.

And finally you can use nslookup to perform name resolution:

[oracle@rac12node1 ~]$ nslookup rac12scan.gns.example.com 192.168.100.37
Server:         192.168.100.37
Address:        192.168.100.37#53

Name:   rac12scan.gns.example.com
Address: 192.168.100.34
Name:   rac12scan.gns.example.com
Address: 192.168.100.36
Name:   rac12scan.gns.example.com
Address: 192.168.100.35

This is reflected in the GNS log file:

2013-11-15 15:06:21.190: [     GNS][682505984]Resolver #3::clsgndnsProcessNameQuestion:
 Query received for name: "rac12scan.gns.example.com." type: "A"
2013-11-15 15:06:21.190: [     GNS][682505984]Resolver #3::clsgndnsProcessNameQuestion:
 Query received for name: "rac12scan.gns.example.com." type: "A"
2013-11-15 15:06:21.190: [     GNS][682505984]Resolver #3::clsgndnsAddAnswer: name:
 "rac12scan" adding Name: "rac12scan" Type: A 192.168.100.34 Unique: TRUE Flags: ALLOCATED
2013-11-15 15:06:21.190: [     GNS][682505984]Resolver #3::clsgndnsAddAnswer: name:
 "rac12scan" adding Name: "rac12scan" Type: A 192.168.100.36 Unique: TRUE Flags: ALLOCATED
2013-11-15 15:06:21.190: [     GNS][682505984]Resolver #3::clsgndnsAddAnswer: name:
 "rac12scan" adding Name: "rac12scan" Type: A 192.168.100.35 Unique: TRUE Flags: ALLOCATED
2013-11-15 15:06:21.190: [     GNS][682505984]Resolver #3::clsgndnsResolve: name
 "rac12scan.gns.example.com." Type: Addr Resolved: TRUE

So that's Clusterware installed with GNS. A few points to note:

  • The GNS VIP is defined in the parent domain, in this case as gns-vip.example.com
  • The public node names have to be defined in DNS or the hosts file (or both)
  • Only the VIPs (node VIPs and SCAN VIPs) are assigned by DHCP
  • Have a large enough range of addresses in /etc/dhcp/dhcpd.conf (see the sketch after this list)
  • Run cluvfy stage -pre dbinst -n node1,node2 to check whether the prerequisites for the RDBMS installation are met
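A minimal dhcpd.conf sketch for the second-to-last point; the subnet and server address match this lab, but the range and lease times are assumptions you should size for your own number of node VIPs and SCAN VIPs:

# /etc/dhcp/dhcpd.conf excerpt (sketch)
subnet 192.168.100.0 netmask 255.255.255.0 {
    range 192.168.100.30 192.168.100.60;    # room for node VIPs plus three SCAN VIPs
    option subnet-mask 255.255.255.0;
    default-lease-time 21600;
    max-lease-time 43200;
}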

Happy installations!

Posted in 12c Release 1, KVM, Linux

Applying PSU 12.1.0.1.1 in the lab environment

Posted by Martin Bach on November 13, 2013

Since the first patch for Oracle 12c has been made available I was of course keen to see how to apply it. For a first test I opted to use my three-node RAC cluster running on Oracle Linux 6.4 with UEK2. This post is not terribly well polished; it's more of a log of what I did.

UPDATE: You might want to review this blog post for more information about a known defect with datapatch for PSU 1 and PSU 2.

The cluster makes use of some of the new 12c RAC features such as Flex ASM but it is not a Flex Cluster:

[oracle@rac12node3 ~]$ srvctl config asm
ASM home: /u01/app/12.1.0.1/grid
Password file: +OCR/orapwASM
ASM listener: LISTENER
ASM instance count: 2
Cluster ASM listener: ASMNET1LSNR_ASM
[oracle@rac12node3 ~]$

My database is a standard three-node, administrator-managed build:

[oracle@rac12node3 ~]$ srvctl config database -d rcdb1
Database unique name: RCDB1
Database name: RCDB1
Oracle home: /u01/app/oracle/product/12.1.0.1/dbhome_2
Oracle user: oracle
Spfile: +DATA/RCDB1/spfileRCDB1.ora
Password file: +DATA/RCDB1/orapwrcdb1
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RCDB1
Database instances: RCDB11,RCDB12,RCDB13
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed

Patching

Now about the patch: the patch number to use at the time of this writing is 17272829, which I usually stage on an NFS mount before applying it. But as always, you need a new version of OPatch (sigh). I applied the patch with this OPatch version:

[oracle@rac12node3 dbhome_2]$ /u01/app/oracle/product/12.1.0.1/dbhome_2/OPatch/opatch version
OPatch Version: 12.1.0.1.2

OPatch succeeded.

The next requirement is the OCM.rsp (Oracle Configuration Manager) response file, which you can create using the $ORACLE_HOME/OPatch/ocm/bin/emocmrsp executable. The resulting file is created in the current working directory and should then be distributed to all RAC nodes in your cluster.
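Something along these lines does the trick; a sketch run as the software owner, with the other node names taken from this cluster:

[oracle@rac12node1 ~]$ cd /tmp
[oracle@rac12node1 tmp]$ /u01/app/12.1.0.1/grid/OPatch/ocm/bin/emocmrsp
# press Enter when prompted for an email address and confirm with "Y"
[oracle@rac12node1 tmp]$ for node in rac12node2 rac12node3; do scp ocm.rsp ${node}:/tmp/; done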

What to patch

In my case, with a pure 12.1.0.1.0 environment, the decision about what to patch is quickly made: you use opatch auto. Refer to the readme for further configuration information. My case however is made a little more interesting as I have an ADVM volume plus an ACFS file system containing (a different) shared Oracle home, and I wondered how that would be handled. The relevant section in the readme is named "Case 2: GI Home is not shared, Database Home is shared, ACFS may be used", and that is what you can read about here. Refer to the readme for instructions on how to patch a 12.1.0.1.0 system without ACFS or shared software homes.

Patch application

I am starting with rac12node1, the first cluster node. It has an ASM instance:

[grid@rac12node1 temp]$ srvctl status asm
ASM is running on rac12node1,rac12node2

I am stopping all Oracle instances for the Oracle home in question using srvctl stop instance -d <db_unique_name> -i <instance_name>.
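For my administrator-managed database that boils down to the following on node 1; a sketch reusing the database and instance names from the srvctl config output above:

[oracle@rac12node1 ~]$ srvctl stop instance -d RCDB1 -i RCDB11
[oracle@rac12node1 ~]$ srvctl status database -d RCDB1

This leaves the following auxiliary instances on my system: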

[grid@rac12node1 temp]$ ps -ef | grep pmon
grid      2540     1  0 09:50 ?        00:00:00 asm_pmon_+ASM1
grid      3026     1  0 09:51 ?        00:00:00 mdb_pmon_-MGMTDB
grid      6058     1  0 09:52 ?        00:00:00 apx_pmon_+APX1
grid     10754  9505  0 10:41 pts/3    00:00:00 grep pmon
[grid@rac12node1 temp]$

Except for +ASM1 all the others are new to 12.1, and I'll cover them in more detail in another post.

I then unmounted the ACFS file system on the first node so that I could use the opatchauto command as root to apply the patch. DO NOT DO THIS AS IT DOESN'T WORK; read on.

[root@rac12node1 ~]# /u01/app/12.1.0.1/grid/OPatch/opatchauto apply /u01/app/grid/temp/17272829 \
> -oh /u01/app/12.1.0.1/grid -ocmrf ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0.1/grid

opatchauto log file: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-11-13_10-45-33_deploy.log

Parameter Validation: Successful

System Configuration Collection failed: oracle.osysmodel.driver.crs.productdriver.ProductDriverException:
Unable to determine if "/u01/app/12.1.0.1/grid" is a shared oracle home.

opatchauto failed with error code 2

Huh? It turns out this is a bug that is still being worked on, so we need to apply the patch the old-fashioned way. This is how I described it in Pro Oracle RAC 11g on Linux, for 11.2.0.1 and before.

Manually patching the GRID Home

Here are the steps; apologies for the verbose output, which is normally nicely hidden away in $GRID_HOME/cfgtoollogs. First the GRID home has to be unlocked, as it is normally owned by the root account. Do this as root:

[root@rac12node1 ~]# /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl -prepatch
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
Oracle Clusterware active version on the cluster is [12.1.0.1.0]. The cluster upgrade state is [NORMAL].
The cluster active patch level is [0].
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac12node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac12node1'
CRS-2673: Attempting to stop 'ora.DATA.ORAHOMEVOL.advm' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac12node1'
CRS-2677: Stop of 'ora.cvu' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac12node3'
CRS-2676: Start of 'ora.cvu' on 'rac12node3' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac12node1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.rac12node1.vip' on 'rac12node1'
CRS-2677: Stop of 'ora.scan3.vip' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac12node2'
CRS-2677: Stop of 'ora.rac12node1.vip' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.rac12node1.vip' on 'rac12node3'
CRS-2676: Start of 'ora.scan3.vip' on 'rac12node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac12node2'
CRS-2677: Stop of 'ora.DATA.dg' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac12node1'
CRS-2676: Start of 'ora.rac12node1.vip' on 'rac12node3' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac12node1'
CRS-2677: Stop of 'ora.asm' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12node1'
CRS-2677: Stop of 'ora.oc4j' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'rac12node2'
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac12node2' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac12node1'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac12node3'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12node3'
CRS-2676: Start of 'ora.MGMTLSNR' on 'rac12node3' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on 'rac12node3'
CRS-2676: Start of 'ora.asm' on 'rac12node3' succeeded
CRS-2676: Start of 'ora.oc4j' on 'rac12node2' succeeded
CRS-2676: Start of 'ora.mgmtdb' on 'rac12node3' succeeded
CRS-2677: Stop of 'ora.DATA.ORAHOMEVOL.advm' on 'rac12node1' succeeded
CRS-2679: Attempting to clean 'ora.DATA.ORAHOMEVOL.advm' on 'rac12node1'
CRS-2681: Clean of 'ora.DATA.ORAHOMEVOL.advm' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'rac12node1'
CRS-2677: Stop of 'ora.proxy_advm' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac12node1'
CRS-2677: Stop of 'ora.ons' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac12node1'
CRS-2677: Stop of 'ora.net1.network' on 'rac12node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac12node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12node1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac12node1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac12node1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac12node1'
CRS-2673: Attempting to stop 'ora.storage' on 'rac12node1'
CRS-2677: Stop of 'ora.storage' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac12node1'
CRS-2677: Stop of 'ora.asm' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac12node1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac12node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac12node1'
CRS-2677: Stop of 'ora.cssd' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac12node1'
CRS-2677: Stop of 'ora.crf' on 'rac12node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac12node1'
CRS-2677: Stop of 'ora.gipcd' on 'rac12node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2013/11/13 10:54:05 CLSRSC-347: Successfully unlock /u01/app/12.1.0.1/grid

Watch out for the last line; it must read "Successfully unlock $GRID_HOME".

Switch to the GRID owner account now! As I’m using separation of duties in my environment that’s the grid user:

[grid@rac12node1 temp]$ opatch apply -oh /u01/app/12.1.0.1/grid -local 17272829/17027533 -silent \
-ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.1.0.1/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0.1/grid/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_10-58-09AM_1.log

Applying interim patch '17027533' to OH '/u01/app/12.1.0.1/grid'
Verifying environment and performing prerequisite checks...
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/12.1.0.1/grid')

Is the local system ready for patching? [y|n]
y
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.rdbms, 12.1.0.1.0...

Patching component oracle.rdbms.dbscripts, 12.1.0.1.0...

Patching component oracle.rdbms.rsf, 12.1.0.1.0...

Patching component oracle.ldap.rsf, 12.1.0.1.0...

Patching component oracle.ldap.rsf.ic, 12.1.0.1.0...

Verifying the update...
Patch 17027533 successfully applied
Log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_10-58-09AM_1.log

OPatch succeeded.

Followed by the next patch:

[grid@rac12node1 temp]$ opatch apply -oh /u01/app/12.1.0.1/grid -local 17272829/17303297 -silent \
> -ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.1.0.1/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0.1/grid/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_11-06-37AM_1.log

Applying interim patch '17303297' to OH '/u01/app/12.1.0.1/grid'
Verifying environment and performing prerequisite checks...
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/12.1.0.1/grid')

Is the local system ready for patching? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.usm, 12.1.0.1.0...

Verifying the update...
Patch 17303297 successfully applied
Log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_11-06-37AM_1.log

OPatch succeeded.

And finally the last one for the GI home:

[grid@rac12node1 temp]$ opatch apply -oh /u01/app/12.1.0.1/grid -local 17272829/17077442 -silent \
> -ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/12.1.0.1/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0.1/grid/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_11-26-23AM_1.log

Applying interim patch '17077442' to OH '/u01/app/12.1.0.1/grid'
Verifying environment and performing prerequisite checks...
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/12.1.0.1/grid')

Is the local system ready for patching? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.crs, 12.1.0.1.0...

Patching component oracle.has.db, 12.1.0.1.0...

Patching component oracle.has.common, 12.1.0.1.0...

Verifying the update...

Patch 17077442 successfully applied
Log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/opatch/opatch2013-11-13_11-26-23AM_1.log

OPatch succeeded.

That's one Oracle home patched; now over to the RDBMS home.

Patching the RDBMS home

As the RDBMS owner, patch the home. The first step is to run the prepatch script:

[oracle@rac12node1 ~]$ /u01/app/grid/temp/17272829/17077442/custom/scripts/prepatch.sh \
> -dbhome /u01/app/oracle/product/12.1.0.1/dbhome_2
/u01/app/grid/temp/17272829/17077442/custom/scripts/prepatch.sh completed successfully.

Then apply the subset of the patches just applied to the GI home that is relevant to the RDBMS home, namely 17027533 and 17077442:

[oracle@rac12node1 ~]$ opatch apply -oh /u01/app/oracle/product/12.1.0.1/dbhome_2 \
> -local /u01/app/grid/temp/17272829/17027533 -silent -ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0.1/dbhome_2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.1/dbhome_2/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/oracle/product/12.1.0.1/dbhome_2/cfgtoollogs/opatch/opatch2013-11-13_11-34-24AM_1.log

Applying interim patch '17027533' to OH '/u01/app/oracle/product/12.1.0.1/dbhome_2'
Verifying environment and performing prerequisite checks...
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.1/dbhome_2')

Is the local system ready for patching? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.rdbms, 12.1.0.1.0...

Patching component oracle.rdbms.dbscripts, 12.1.0.1.0...

Patching component oracle.rdbms.rsf, 12.1.0.1.0...

Patching component oracle.ldap.rsf, 12.1.0.1.0...

Patching component oracle.ldap.rsf.ic, 12.1.0.1.0...

Verifying the update...
Patch 17027533 successfully applied
Log file location: /u01/app/oracle/product/12.1.0.1/dbhome_2/cfgtoollogs/opatch/opatch2013-11-13_11-34-24AM_1.log

OPatch succeeded.

[oracle@rac12node1 dbhome_2]$ opatch apply -oh /u01/app/oracle/product/12.1.0.1/dbhome_2 \
> -local /u01/app/grid/temp/17272829/17077442 -silent -ocmrf /tmp/ocm.rsp
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/12.1.0.1/dbhome_2
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/12.1.0.1/dbhome_2/oraInst.loc
OPatch version    : 12.1.0.1.2
OUI version       : 12.1.0.1.0
Log file location : /u01/app/oracle/product/12.1.0.1/dbhome_2/cfgtoollogs/opatch/opatch2013-11-13_11-03-20AM_1.log

Applying interim patch '17077442' to OH '/u01/app/oracle/product/12.1.0.1/dbhome_2'
Verifying environment and performing prerequisite checks...
Patch 17077442: Optional component(s) missing : [ oracle.crs, 12.1.0.1.0 ]
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/12.1.0.1/dbhome_2')

Is the local system ready for patching? [y|n]
y
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...

Patching component oracle.has.db, 12.1.0.1.0...
Patching component oracle.has.common, 12.1.0.1.0...

Verifying the update...
Patch 17077442 successfully applied
Log file location: /u01/app/oracle/product/12.1.0.1/dbhome_2/cfgtoollogs/opatch/opatch2013-11-13_11-03-20AM_1.log

OPatch succeeded.

The ACFS-based Oracle home cannot be patched at this time as it's not yet mounted. I'll do that later; this has already taken longer than I expected. Now it is time to clean up and prepare to start the stack:

[oracle@rac12node1 ~]$ /u01/app/grid/temp/17272829/17077442/custom/scripts/postpatch.sh \
> -dbhome /u01/app/oracle/product/12.1.0.1/dbhome_2
Reading /u01/app/oracle/product/12.1.0.1/dbhome_2/install/params.ora..
Reading /u01/app/oracle/product/12.1.0.1/dbhome_2/install/params.ora..
Parsing file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvctl
Parsing file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvconfig
Parsing file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/cluvfy
Verifying file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/racgwrap
Skipping the missing file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/racgwrap
Verifying file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvctl
Verifying file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvconfig
Verifying file /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/cluvfy
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvctl
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvconfig
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/cluvfy
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/diskmon.bin
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/lsnodes
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/osdbagrp
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/rawutl
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/srvm/admin/ractrans
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/srvm/admin/getcrshome
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/gnsd
Reapplying file permissions on /u01/app/oracle/product/12.1.0.1/dbhome_2/bin/crsdiag.pl

Wrap up

As root, you need to run a few more steps to lock the homes and set ownership/permissions properly:

/u01/app/12.1.0.1/grid/rdbms/install/rootadd_rdbms.sh

[root@rac12node1 install]# /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl -postpatch
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12node1'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12node1'
CRS-2676: Start of 'ora.evmd' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12node1'
CRS-2676: Start of 'ora.gpnpd' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12node1'
CRS-2676: Start of 'ora.gipcd' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12node1'
CRS-2676: Start of 'ora.diskmon' on 'rac12node1' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac12node1'
CRS-2676: Start of 'ora.cssd' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12node1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12node1'
CRS-2676: Start of 'ora.ctssd' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12node1'
CRS-2676: Start of 'ora.asm' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12node1'
CRS-2676: Start of 'ora.storage' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac12node1'
CRS-2676: Start of 'ora.crf' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12node1'
CRS-2676: Start of 'ora.crsd' on 'rac12node1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac12node1
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'rac12node1'
CRS-2672: Attempting to start 'ora.scan3.vip' on 'rac12node1'
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac12node1'
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac12node1'
CRS-2672: Attempting to start 'ora.rac12node1.vip' on 'rac12node1'
CRS-2672: Attempting to start 'ora.ons' on 'rac12node1'
CRS-2672: Attempting to start 'ora.rac12node2.vip' on 'rac12node1'
CRS-2672: Attempting to start 'ora.oc4j' on 'rac12node1'
CRS-2672: Attempting to start 'ora.cvu' on 'rac12node1'
CRS-2672: Attempting to start 'ora.rac12node3.vip' on 'rac12node1'
CRS-2676: Start of 'ora.cvu' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.scan3.vip' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.rac12node2.vip' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac12node1'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac12node1'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'rac12node1'
CRS-2676: Start of 'ora.rac12node3.vip' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.rac12node1.vip' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac12node1'
CRS-2676: Start of 'ora.MGMTLSNR' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.ons' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on 'rac12node1'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac12node1' succeeded
CRS-2672: Attempting to start 'ora.rcdb1.db' on 'rac12node1'
CRS-2676: Start of 'ora.oc4j' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.mgmtdb' on 'rac12node1' succeeded
CRS-2676: Start of 'ora.rcdb1.db' on 'rac12node1' succeeded
CRS-6016: Resource auto-start has completed for server rac12node1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
Oracle Clusterware active version on the cluster is [12.1.0.1.0]. The cluster upgrade state is [NORMAL].
The cluster active patch level is [0].
PRCC-1010 : proxy_advm was already enabled
PRCR-1002 : Resource ora.proxy_advm is already enabled

SQL Patching tool version 12.1.0.1.0 on Wed Nov 13 11:48:18 2013
Copyright (c) 2013, Oracle.  All rights reserved.

Connecting to database...OK
Determining current state...done
Nothing to roll back
The following patches will be applied: 17027533
Adding patches to installation queue...done
Installing patches...done
Validating logfiles...done
SQL Patching tool complete on Wed Nov 13 11:48:54 2013
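To double-check what the SQL Patching tool actually registered in the database, a quick query against the SQL patch registry works; a sketch, connected as SYSDBA:

SQL> select patch_id, action, status from dba_registry_sqlpatch;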

Once that rootcrs script has finished you should be able to patch the ACFS-based RDBMS home, which I have skipped for now.

You need to repeat the patching on all nodes in your cluster before the operation is complete, and that's how I thought I'd end the post. However, this is where the fun only really started!

Remember that CRS is currently started on rac12node1, the one that I just patched. To work around space issues on my /u01 mount point I had to extend it on rac12node2 and 3. Before shutting those nodes down I disabled CRS, then added storage and started them again; in other words, only one node was active at the time. The disable/enable sequence is nothing special, as shown in the sketch below.
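This is roughly what I ran on nodes 2 and 3 around the resize; a sketch using the standard crsctl commands as root:

[root@rac12node2 ~]# /u01/app/12.1.0.1/grid/bin/crsctl stop crs
[root@rac12node2 ~]# /u01/app/12.1.0.1/grid/bin/crsctl disable crs
# shut the VM down, extend /u01, boot it again, then:
[root@rac12node2 ~]# /u01/app/12.1.0.1/grid/bin/crsctl enable crs
[root@rac12node2 ~]# /u01/app/12.1.0.1/grid/bin/crsctl start crs

Trying to run the prepatch command on rac12node2 and rac12node3 then failed: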

[root@rac12node2 rac12node2]# /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl -prepatch
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
2013/11/13 12:27:59 CLSRSC-117: Failed to start Oracle Clusterware stack

Died at /u01/app/12.1.0.1/grid/crs/install/crspatch.pm line 646.

I stopped CRS and started it again, but no chance: CRSD and EVMD just wouldn't come up. The OCR service complained about a version mismatch:

2013-11-13 12:25:51.750:
[crsd(3079)]CRS-1019:The OCR Service exited on host rac12node2.
Details in /u01/app/12.1.0.1/grid/log/rac12node2/crsd/crsd.log

The log file showed information about the OCR master, and this, I believe, is where the problem lies: rac12node1 is probably the master, since nodes 2 and 3 were shut down when CRS started on node 1. There is some evidence of this in the crsd log file:

2013-11-13 12:25:19.182: [  OCRMAS][241129216]th_populate_rank: Rank of OCR:[1]. Rank of ASM Instance:[0].
Rank of CRS Standby:[0]. OCR on ASM:[1]. ASM mode:[2]. My Rank:[1]. Min Rank:[3].
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_ocrid=479152486
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_pubdata: begin dumping pubdatactx->prom_pubdata_prom_con
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_con: promcon->cache_invalidation_port=0
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_con: promcon->remote_listening_port=0
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_con: promcon->local_listening_port=0
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_pubdata: end dumping pubdatactx->prom_pubdata_prom_con
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_software_version=202375424
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_active_version=202375424
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_software_patch=0
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_active_patch=0
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_state=0
2013-11-13 12:25:19.183: [  OCRMAS][241129216]prom_dump_pubdata: pubdatactx->prom_pubdata_priv_nodename=gipcha<rac12node2>
<dd63-600c-b68f-816f><17ae-619b-7d2b-9c41>
2013-11-13 12:25:19.191: [  OCRMAS][239027968]th_monitor_ocrlocalgrp: Reconfig event is received and there is no change
in ASM instance membership. Ignoring this event.
2013-11-13 12:25:19.195: [  OCRMAS][241129216]proath_master:5':reconfig/grpmaster event [1] happened. Incarnation:[194]
2013-11-13 12:25:19.197: [  OCRMAS][241129216]rcfg_con:1: Member [1] joined. Inc [194].
2013-11-13 12:25:19.197: [  OCRMAS][241129216]rcfg_con:1: Member [2] joined. Inc [194].
2013-11-13 12:25:19.197: [  OCRMAS][241129216]proath_master: Master changing. cssctx->master [-1] new master [1]
is_new_mastership [1] num_rcfg [1].
2013-11-13 12:25:19.198: [  OCRMAS][241129216]th_hub_verify_master_pubdata: Shutdown CacheLocal. Patch Levels don't match.
Local Patch Level [0] != Cache Writer Patch Level [1650217826]
2013-11-13 12:25:19.198: [  OCRAPI][241129216]procr_ctx_set_invalid: ctx is in state [6].
2013-11-13 12:25:19.198: [  OCRAPI][241129216]procr_ctx_set_invalid: ctx set to invalid
2013-11-13 12:25:19.199: [  OCRAPI][241129216]procr_ctx_set_invalid: Aborting...

I then stopped CRS on node 1 (remember, it is my lab, so I can easily do this). In a real production environment where you are performing a rolling patch this wouldn't be so easy.

Running the prepatch again on nodes 2 and 3 succeeded once CRS was stopped on node 1. The patch application went smoothly as well. It reminded me of exactly the same problem I hit when patching 10.2.0.4 to 11.2.0.1 years ago: watch out for the OCR master :)

Posted in 12c Release 1, Linux, RAC