Martins Blog

Trying to explain complex things in simple terms

11.2.0.2 Bundled Patch 3 for Linux x86-64bit Take 2

Posted by Martin Bach on February 3, 2011

Yesterday I wrote about the application of Bundle Patch 3 to my two-node RAC cluster. Unfortunately I ran into problems when applying the patches to the GRID_HOME. I promised a fix for the situation, and here it is.

At first I wondered whether I could apply the missing patches manually, but decided against it. The opatch inventory lists each interim patch together with a unique patch ID, as shown here:

Interim patches (3) :

Patch  10626132     : applied on Wed Feb 02 16:08:43 GMT 2011
Unique Patch ID:  13350217
   Created on 31 Dec 2010, 00:18:12 hrs PST8PDT
   Bugs fixed:
     10626132

I was not sure whether that patch/unique-patch combination would appear in the inventory if I patched manually, so I played it safe and rolled the bundle patch back altogether before applying it again.

Patch Rollback

This was actually very simple: I opted to roll back all the applied patches from the GRID_HOME. The documentation states that you simply append the “-rollback” flag to the opatch command. I tried it on the node where the application of two patches had failed:

[root@node1 stage]# opatch auto /u01/app/oracle/product/stage/10387939 -oh /u01/app/oragrid/product/11.2.0.2 -rollback
Executing /usr/bin/perl /u01/app/oragrid/product/11.2.0.2/OPatch/crs/patch112.pl -patchdir /u01/app/oracle/product/stage -patchn 10387939 -oh /u01/app/oragrid/product/11.2.0.2 -rollback -paramfile /u01/app/oragrid/product/11.2.0.2/crs/install/crsconfig_params
opatch auto log file location is /u01/app/oragrid/product/11.2.0.2/OPatch/crs/../../cfgtoollogs/opatchauto2011-02-03_09-04-13.log
Detected Oracle Clusterware install
Using configuration parameter file: /u01/app/oragrid/product/11.2.0.2/crs/install/crsconfig_params
OPatch  is bundled with OCM, Enter the absolute OCM response file path:
/u01/app/oracle/product/stage/ocm.rsp
Successfully unlock /u01/app/oragrid/product/11.2.0.2
patch 10387939  rollback successful for home /u01/app/oragrid/product/11.2.0.2
The patch  10157622 does not exist in /u01/app/oragrid/product/11.2.0.2
The patch  10626132 does not exist in /u01/app/oragrid/product/11.2.0.2
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4123: Oracle High Availability Services has been started.

So that was simple enough. Again – you won’t see any progress in your opatch session and you might think that the command has stalled. I usually start a “screen” session on my terminal and open a new window to tail the opatchauto log file in $GRID_HOME/cfgtoollogs/.
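As a minimal sketch of that second screen window (the `latest_log` helper name and the GRID_HOME default are my own; the log directory matches the output above):

```shell
#!/bin/sh
# Hypothetical helper: find the newest "opatch auto" log so it can be followed
# from a second screen window while opatch runs in the first.
# The GRID_HOME default is an assumption matching the paths in this post.
GRID_HOME=${GRID_HOME:-/u01/app/oragrid/product/11.2.0.2}

# Print the most recently modified opatchauto log in a directory, if any.
latest_log() {
  ls -1t "$1"/opatchauto*.log 2>/dev/null | head -1
}

log=$(latest_log "$GRID_HOME/cfgtoollogs")
if [ -n "$log" ]; then
  exec tail -f "$log"    # follow the log while opatch works in the background
else
  echo "no opatchauto log found yet in $GRID_HOME/cfgtoollogs"
fi
```

Picking the newest file matters because opatch auto creates a fresh timestamped log on every run.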

Re-Applying the Patch

The next step is to re-apply the patch. The initial failure was caused by a lack of disk space, as is evident from the log file:

2011-02-02 15:57:45: The apply patch output is Invoking OPatch 11.2.0.1.4

 Oracle Interim Patch Installer version 11.2.0.1.4
 Copyright (c) 2010, Oracle Corporation.  All rights reserved.

 UTIL session

 Oracle Home       : /u01/app/oragrid/product/11.2.0.2
 Central Inventory : /u01/app/oracle/product/oraInventory
 from           : /etc/oraInst.loc
 OPatch version    : 11.2.0.1.4
 OUI version       : 11.2.0.2.0
 OUI location      : /u01/app/oragrid/product/11.2.0.2/oui
 Log file location : /u01/app/oragrid/product/11.2.0.2/cfgtoollogs/opatch/opatch2011-02-02_15-57-35PM.log

 Patch history file: /u01/app/oragrid/product/11.2.0.2/cfgtoollogs/opatch/opatch_history.txt

 Invoking utility "napply"
 Checking conflict among patches...
 Checking if Oracle Home has components required by patches...
 Checking conflicts against Oracle Home...
 OPatch continues with these patches:   10157622

 Do you want to proceed? [y|n]
 Y (auto-answered by -silent)
 User Responded with: Y

 Running prerequisite checks...
 Prerequisite check "CheckSystemSpace" failed.
 The details are:
 Required amount of space(2086171834) is not available.
 UtilSession failed: Prerequisite check "CheckSystemSpace" failed.

 OPatch failed with error code 73

2011-02-02 15:57:45: patch /u01/app/oracle/product/stage/10387939/10157622  apply  failed  for home  /u01/app/oragrid/product/11.2.0.2
2011-02-02 15:57:45: Performing Post patch actions
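A pre-flight check would have caught this before opatch did. A minimal sketch (the `enough_space` helper name and the GRID_HOME default are my own assumptions; the 2.5G figure matches the free space I allowed for below):

```shell
#!/bin/sh
# Sketch of a pre-flight disk-space check before re-running "opatch auto".
# GRID_HOME and the threshold are assumptions, not opatch requirements.
GRID_HOME=${GRID_HOME:-/u01/app/oragrid/product/11.2.0.2}
REQUIRED_KB=$((2500 * 1024))   # roughly 2.5G in 1K blocks

# Return success if the filesystem holding $1 has at least $2 KB available.
enough_space() {
  avail=$(df -Pk "$1" 2>/dev/null | awk 'NR==2 {print $4}')
  [ -n "$avail" ] && [ "$avail" -ge "$2" ]
}

if enough_space "$GRID_HOME" "$REQUIRED_KB"; then
  echo "sufficient free space for the patch"
else
  echo "free up space on the GRID_HOME mount point first"
fi
```

`df -P` is used because its POSIX output format is stable enough to parse with awk.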

So this time around I ensured that I had enough free space (2.5G recommended minimum) available on my GRID_HOME mount point. The procedure is the same as the rollback, minus the “-rollback” flag:

[root@node1 stage]# opatch auto /u01/app/oracle/product/stage/10387939 -oh /u01/app/oragrid/product/11.2.0.2
Executing /usr/bin/perl /u01/app/oragrid/product/11.2.0.2/OPatch/crs/patch112.pl -patchdir /u01/app/oracle/product/stage -patchn 10387939 -oh /u01/app/oragrid/product/11.2.0.2 -paramfile /u01/app/oragrid/product/11.2.0.2/crs/install/crsconfig_params
opatch auto log file location is /u01/app/oragrid/product/11.2.0.2/OPatch/crs/../../cfgtoollogs/opatchauto2011-02-03_09-27-39.log
Detected Oracle Clusterware install
Using configuration parameter file: /u01/app/oragrid/product/11.2.0.2/crs/install/crsconfig_params
OPatch  is bundled with OCM, Enter the absolute OCM response file path:
/u01/app/oracle/product/stage/ocm.rsp
Successfully unlock /u01/app/oragrid/product/11.2.0.2
patch /u01/app/oracle/product/stage/10387939/10387939  apply successful for home  /u01/app/oragrid/product/11.2.0.2
patch /u01/app/oracle/product/stage/10387939/10157622  apply successful for home  /u01/app/oragrid/product/11.2.0.2
patch /u01/app/oracle/product/stage/10387939/10626132  apply successful for home  /u01/app/oragrid/product/11.2.0.2
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4123: Oracle High Availability Services has been started.
[root@node1 stage]#

And this time it worked: all three patches were applied. However, my free space shrank drastically from 2.5G to around 780M, and that was after purging lots of logs from $GRID_HOME/log/`hostname -s`/. Nevertheless, this concludes the patch application.
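The check against the list of expected patches can be scripted. A minimal sketch, assuming `opatch lsinventory` output along the lines of the listing at the top of this post (the `missing_patches` function name is my own; the three IDs are the ones from this bundle):

```shell
#!/bin/sh
# Sketch: compare the patch IDs reported by "opatch lsinventory" against the
# list the bundle should have installed. IDs are from Bundle Patch 3.
EXPECTED="10387939 10157622 10626132"

# Read inventory text on stdin, print every expected ID not found in it.
missing_patches() {
  inventory=$(cat)
  for id in $EXPECTED; do
    case "$inventory" in
      *"$id"*) ;;            # present in the inventory, nothing to report
      *) echo "$id" ;;       # not found: print the missing patch ID
    esac
  done
}

# On a real system one would run (not executed here):
#   $GRID_HOME/OPatch/opatch lsinventory | missing_patches
```

An empty result means all three patches made it into the home; any printed ID is a candidate for a rollback/re-apply cycle like the one above.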

Summary

In summary I am quite impressed with this patch. It looks as if it had been designed to be deployable by OEM: it’s silent, doesn’t require input (except for the ocm.rsp file) and is a rolling patch. However, the user has to check the installed patches against the list of installable targets, either by hand or through a script, to ensure that all patches have been applied. You also have to ensure you have enough free space on your $GRID_HOME mount point.

As an enhancement request I’d like some feedback from the opatch session that it has started doing things: initially I hit CTRL-C thinking the command had stalled, while it was actually busy in the background shutting down my CRS stack. The “workaround” is to tail the log file with the “-f” option.

2 Responses to “11.2.0.2 Bundled Patch 3 for Linux x86-64bit Take 2”

  1. Yogesh Garg said

    hi,

    I am facing an issue while doing RAC installation on Linux environment.

    This is with regards to Oracle RAC installation on standard edition RDBMS.

    Oracle 11.2.0.2 grid infrastructure and RDBMS standard edition for RAC was installed successfully (after applying patch no. 9974223) without any errors.

    While performing high availability tests we faced the following problem.

    When we restarted both machines in parallel, all the services started on both nodes as per their normal behaviour.

    If one of the two nodes went down, the services which were running on the failed node were transferred to the surviving node. Up to this step everything was fine.

    But once the failed node was restarted, the services which had been transferred to the available node didn’t fail back to it. This is the main issue.

    Please suggest if you have any solutions….

    • Martin said

      Hi –

      that’s the expected and documented behaviour. Services don’t fail back automatically to the preferred node when it comes back online. If you want this you’d have to write some code to start/relocate the service.

      HTH,

      Martin
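A minimal sketch of the kind of fail-back code Martin mentions, using `srvctl relocate service` (all database, service and instance names below are placeholders, not anything from this post):

```shell
#!/bin/sh
# Hypothetical fail-back sketch: build the srvctl command that relocates a
# service from the instance it failed over to back to its preferred instance.
# Every name here is a placeholder.
relocate_cmd() {
  # $1=db unique name, $2=service, $3=current instance, $4=preferred instance
  echo "srvctl relocate service -d $1 -s $2 -i $3 -t $4"
}

# On a real cluster one would execute the generated command, e.g. from a
# startup trigger or cron job on the restarted node:
echo "$(relocate_cmd orcl app_svc orcl2 orcl1)"
```

Generating the command as a string keeps the sketch testable; on a live system you would run `srvctl` directly after checking that the preferred instance is up.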
