Monthly Archives: January 2014

Data Guard transport lag in OEM 12c

I have come across this phenomenon a couple of times now so I thought it was worth writing up.

Consider a scenario where you get an alert because your standby database has an apply lag. The alert is generated by OEM, and when you log in and check, there is indeed an apply lag. Even worse, the apply lag increases with every refresh of the page! I initially tagged this as an 11.2 problem, but it is definitely not limited to that version.

Here is a screenshot of this misery:

 Lag in OEM

Now there are of course a number of possible causes:

  • There is a lag
  • You are not using Real Time Apply

The first one is easy to check: look at the redo generation rate on the primary database to see if it is any different from normal. Maybe you are currently loading lots of data? Maybe a batch job has been started that churns through a lot of data… the possibilities are nearly endless.
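One quick way to eyeball the redo generation rate is to sum the archived log volume per hour. This is my sketch, not from the original post, and it assumes local archiving goes to dest_id 1:

```sql
-- Redo generated per hour over the last day, based on archived log sizes.
-- Compare the figures with what the primary normally produces.
select trunc(completion_time, 'HH24') as hour,
       round(sum(blocks * block_size) / 1024 / 1024) as redo_mb
from   v$archived_log
where  dest_id = 1
and    completion_time > sysdate - 1
group  by trunc(completion_time, 'HH24')
order  by 1;
```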

Another, more subtle interpretation could be that you are not using Real Time Apply. How can you check? In the broker command line interface for example:

DGMGRL> show configuration

Configuration - test

  Protection Mode: MaxPerformance
  Databases:
    pri - Primary database
      Warning: ORA-16789: standby redo logs not configured

    sby - Physical standby database
      Warning: ORA-16789: standby redo logs not configured

Fast-Start Failover: DISABLED

Configuration Status:
WARNING

The warnings about missing standby redo logs show that you cannot possibly use Real Time Apply (it needs standby redo logs). The other option is in the database itself:

SQL> select dest_id,status,database_mode,recovery_mode
  2  from v$archive_dest_status
  3  where status <> 'INACTIVE';

   DEST_ID STATUS    DATABASE_MODE   RECOVERY_MODE
---------- --------- --------------- -----------------------
         1 VALID     MOUNTED-STANDBY MANAGED
        32 VALID     UNKNOWN         IDLE

Did you notice the dest_id of 32? That is a bit unusual, since you can only set log_archive_dest_n yourself for n between 1 and 31.

So indeed we have managed recovery active, but not using Real Time Apply. This is also reflected in the database status:

DGMGRL> show database verbose sby

Database - sby

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   28 seconds
  Apply Lag:       28 seconds
  Real Time Query: OFF
  Instance(s):
    sby

A few moments later when you query the database again the lag has increased:

DGMGRL> show database verbose sby

Database - sby

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   3 minutes 22 seconds
  Apply Lag:       3 minutes 22 seconds
  Real Time Query: OFF
  Instance(s):
    sby

This is to be expected: the primary is still happily processing user requests. The cure is to add standby redo logs, as suggested in so many places and described in the Data Guard documentation. After the successful addition of SRLs the lag should disappear. A restart of managed recovery using the broker will show something along these lines on the standby:

2014-01-30 14:35:18.353000 +00:00
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE  THROUGH ALL SWITCHOVER DISCONNECT  USING CURRENT LOGFILE
Attempt to start background Managed Standby Recovery process (sby)
MRP0 started with pid=24, OS id=4854
MRP0: Background Managed Standby Recovery process started (sby)
2014-01-30 14:35:23.406000 +00:00
 started logmerger process
Managed Standby Recovery starting Real Time Apply
...
2014-01-30 14:37:12.595000 +00:00
Media Recovery Waiting for thread 1 sequence 20 (in transit)
2014-01-30 14:37:13.691000 +00:00
Recovery of Online Redo Log: Thread 1 Group 5 Seq 20 Reading mem 0
  Mem# 0: +DATA/sby/onlinelog/group_5.266.838218819

Two important bits of information are shown here: "Managed Standby Recovery starting Real Time Apply", and the fact that recovery is reading from the standby redo log. Sure enough, once the database is in sync with its primary and uses the log, the lag is gone:

DGMGRL> show database verbose sby

Database - sby

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: OFF
  Instance(s):
    sby

And also in the OEM view:

OEM-lag-02
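For reference, the fix itself is short. A hedged sketch of adding the standby redo logs, run on the standby (sizes and counts are illustrative; the usual advice is one more SRL group per thread than there are online redo log groups, sized like the online logs):

```sql
-- Illustrative only: adjust the thread number, number of groups and size
-- to match your online redo logs. With OMF you can omit file names.
alter database add standby logfile thread 1 size 52428800;
alter database add standby logfile thread 1 size 52428800;
alter database add standby logfile thread 1 size 52428800;

-- After restarting managed recovery, the earlier query should now show
-- recovery_mode = MANAGED REAL TIME APPLY
select dest_id, status, database_mode, recovery_mode
from   v$archive_dest_status
where  status <> 'INACTIVE';
```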

Slight Variation

I have also seen a variation of this problem in OEM where the transport lag was near 0 and therefore hardly visible due to the scale of the graph. The apply lag nevertheless grew, because the primary kept working while the current online redo log had not yet shipped to the standby (again, before standby redo logs were in place). In the OEM view you saw the apply lag climb until the next log switch on the primary, when it dropped to 0 for a brief moment before increasing again.

Summary

Real Time Apply is a very useful feature, especially when used together with the maximum availability protection mode. The real risk of not using standby redo logs (and therefore not using Real Time Apply) is data loss: the contents of the current online redo log on the primary have not been copied across. If you need to activate your standby you will be some transactions short of the primary. The larger the online redo log, the larger the potential gap.


Massive tablespace fragmentation on LMT with ASSM

I have been asked to investigate another interesting problem worth writing about. It’s worth mentioning here because it deals with a problem I believed to have long since been solved: tablespace fragmentation. However, in this case it was a locally managed tablespace (LMT) with Automatic Segment Space Management (ASSM) enabled. It should be difficult to have fragmentation on one of these, but as you will see it is not impossible.

So the story started innocently enough with an ORA-01653 while shuffling around subpartitions to another tablespace:

01653, 00000, "unable to extend table %s.%s by %s in tablespace %s"
// *Cause:  Failed to allocate an extent of the required number of blocks for
//          a table segment in the tablespace indicated.
// *Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
//          files to the tablespace indicated.

Well, except that there was plenty of free space available: the sum(bytes) in dba_free_space for the tablespace showed in excess of 20 GB. So it could only be one thing: fragmentation. And indeed, I quickly found out that the largest contiguous free block was 8 MB, which should really have been enough for most use cases… After a little digging around I noticed that all subpartitions on the tablespace had at least 9 extents, some more. The sum of bytes per subpartition ever so slightly exceeded 8 MB, which was the cause of the fragmentation. Here is an example:

SQL> select segment_name,partition_name,bytes
  2  from dba_extents
  3  where segment_name = 'T1_SUBPART'
  4  and partition_name = 'SYS_SUBP26737';

SEGMENT_NAME                   PARTITION_NAME                      BYTES
------------------------------ ------------------------------ ----------
T1_SUBPART                     SYS_SUBP26737                     1048576
T1_SUBPART                     SYS_SUBP26737                     1048576
T1_SUBPART                     SYS_SUBP26737                     1048576
T1_SUBPART                     SYS_SUBP26737                     1048576
T1_SUBPART                     SYS_SUBP26737                     1048576
T1_SUBPART                     SYS_SUBP26737                     1048576
T1_SUBPART                     SYS_SUBP26737                      131072
T1_SUBPART                     SYS_SUBP26737                      131072
T1_SUBPART                     SYS_SUBP26737                      131072
T1_SUBPART                     SYS_SUBP26737                      131072
T1_SUBPART                     SYS_SUBP26737                      196608
T1_SUBPART                     SYS_SUBP26737                     1048576
T1_SUBPART                     SYS_SUBP26737                     1048576
                                                              ----------
sum                                                              9109504

13 rows selected.

Some of the extents are nicely aligned along 1 MB boundaries, but a few are only 128 KB in size. Multiply this by a few thousand and you get the picture. It is not good to have 20,000 entries in DBA_FREE_SPACE for a single tablespace!
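To put a number on the fragmentation, a query along these lines (my sketch, not from the original troubleshooting session) shows the free-extent count and the largest contiguous chunk per tablespace:

```sql
-- Many free extents combined with a small "largest_free_mb" relative to
-- "total_free_mb" is the classic fragmentation signature.
select tablespace_name,
       count(*)                        as free_extents,
       round(max(bytes) / 1024 / 1024) as largest_free_mb,
       round(sum(bytes) / 1024 / 1024) as total_free_mb
from   dba_free_space
group  by tablespace_name
order  by free_extents desc;
```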

The error message was thrown during the execution of an anonymous PL/SQL block that moved lots of subpartitions from one tablespace to another. There were the already mentioned few thousand subpartitions to be moved, plus a few non-partitioned tables. The process aborted halfway through. The following is more or less a transcript of my troubleshooting efforts.

The test case

To reproduce the problem in our lab I created a table with lots of sub partitions, using the following SQL statement. It doesn’t matter that the subpartitions are more or less empty in this case.

CREATE TABLE t1_subpart
  (
    id, t_pad, date_Created, date_completed, state, spcol
  )
  partition BY range (id) interval (1000)
  subpartition BY list (spcol) subpartition template
  (
    subpartition sp1 VALUES ('0'),
    subpartition sp2 VALUES ('1'),
    subpartition sp3 VALUES ('2'),
    subpartition sp4 VALUES ('3'),
    subpartition spdef VALUES (DEFAULT)
  )
  (
    partition p_man VALUES less than (10000)
  ) AS
WITH v1 AS
  (SELECT rownum n FROM dual CONNECT BY level <= 10000
  )
SELECT mod(rownum,1000000) id,
  rpad(rownum,100) t_pad,
  TRUNC(sysdate) - 180 + TRUNC((rownum-1)/3)/86400 date_created,
  TRUNC(sysdate) - 180 + TRUNC((rownum-1)/3)/86400 + dbms_random.value(1800, 86400)/86400 date_completed,
  CASE
    WHEN mod(rownum,100000) = 0
    THEN CAST('RARE' AS VARCHAR2(12))
    WHEN mod(rownum,10000) = 0
    THEN CAST('FAIRLY RARE' AS VARCHAR2(12))
    WHEN mod(rownum,1000) = 0
    THEN CAST('NOT RARE' AS VARCHAR2(12))
    WHEN mod(rownum,100) = 0
    THEN CAST('COMMON' AS   VARCHAR2(12))
    ELSE CAST('THE REST' AS VARCHAR2(12))
  END state,
  TO_CHAR(mod(rownum, 4)) spcol
FROM v1,
  v1
WHERE rownum <= 1e6; 

(If this looks familiar to you then you probably know this presentation by Jonathan Lewis)

A couple of things to remember about this 11.2.0.3 database: Oracle introduced large initial extents for partitions, as explained in “Initial Extent Size of a Partition Changed to 8MB from 64KB After Upgrade to 11.2.0.2 or Later (Doc ID 1295484.1)”. So having lots of empty partitions is now a huge waste of space!
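As the MOS note explains, this behaviour is controlled by the hidden parameter _PARTITION_LARGE_EXTENTS. If the 8 MB initial extents are a problem you can revert to the old behaviour, although as with any underscore parameter you should only do so after consulting Oracle Support; this is a hedged sketch:

```sql
-- Revert to pre-11.2.0.2 initial extent sizing for (sub)partitions.
-- Underscore parameter: check with Oracle Support before changing it.
alter session set "_partition_large_extents" = false;
-- or instance-wide:
alter system set "_partition_large_extents" = false;
```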

But back to the problem: is there anything that would lead you to suspect fragmentation?

SQL> select max(blocks) from dba_segments where segment_name = 'T1_SUBPART';

MAX(BLOCKS)
-----------
       1024

Elapsed: 00:00:00.15

No: all subpartitions are 8 MB in total, most of them as a single 8 MB extent, with fewer than 15 exceptions:

select s.segment_name,s.partition_name, s.blocks, s.extents
from user_segments s
where s.segment_name = 'T1_SUBPART'
and extents <> 1
order by extents desc;

SEGMENT_NAME                   PARTITION_NAME                     BLOCKS    EXTENTS
------------------------------ ------------------------------ ---------- ----------
T1_SUBPART                     SYS_SUBP29210                        1024          8
T1_SUBPART                     SYS_SUBP32032                        1024          5
T1_SUBPART                     SYS_SUBP29180                        1024          5
T1_SUBPART                     SYS_SUBP28738                        1024          4
T1_SUBPART                     SYS_SUBP32624                        1024          4
T1_SUBPART                     SYS_SUBP26953                        1024          4
T1_SUBPART                     SYS_SUBP30400                        1024          4
T1_SUBPART                     SYS_SUBP30996                        1024          4
T1_SUBPART                     SYS_SUBP31591                        1024          4
T1_SUBPART                     SYS_SUBP28143                        1024          4
T1_SUBPART                     SYS_SUBP27548                        1024          4
T1_SUBPART                     SYS_SUBP29805                        1024          4

Most of the subpartitions have a single 8 MB extent, very few have more, but all sum up to 8 MB:

SQL> select count(extents),extents from user_segments s
   where s.segment_name = 'T1_SUBPART'
   group by extents

COUNT(EXTENTS)    EXTENTS
-------------- ----------
          4942          1
             9          4
             2          5
             1          8

Elapsed: 00:00:00.66
SQL>

SQL> select distinct bytes from user_segments where segment_name = 'T1_SUBPART';

     BYTES
----------
   8388608

Next I tried to reproduce the problem as closely as possible. In this case the customer had enabled HCC compression on the table, after which the partitions were moved to the destination tablespace. Note that this problem has nothing to do with HCC: I have since reproduced the case outside of Exadata with the same result.

SQL> alter table t1_subpart compress for query high;

Table altered.

Next I’ll show you DBA_FREE_SPACE for the tablespace:

SQL> select * from dba_free_space
  2* where tablespace_name = 'DESTINATION';

TABLESPACE_NAME                   FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------------------------------ ---------- ---------- ---------- ---------- ------------
DESTINATION                            10        128 1072693248     130944           10
DESTINATION                            11        128 1072693248     130944           11

Elapsed: 00:00:00.01

So now I’m going to move all of the subpartitions. At first I couldn’t reproduce the fragmentation at all when I moved them serially: everything stayed just as it was, and some of the subpartitions that consisted of more than one extent were actually coalesced into a single extent. I hadn’t paid attention to the “parallel” keyword, which I thought was just a means of speeding things up. When I still had no luck reproducing the fragmentation after the third attempt, I tried with parallel, and here is how it went. First the anonymous PL/SQL block:

SQL> !cat movemove.sql
begin
 for i in (select subpartition_name from user_tab_subpartitions
           where table_name = 'T1_SUBPART' and tablespace_name <> 'DESTINATION')
 loop
  begin
   execute immediate 'alter table t1_subpart move subpartition ' || i.subpartition_name || ' tablespace DESTINATION parallel';
  exception
   when others then
    dbms_output.put_line('could not move ' || i.subpartition_name || ' due to ' || sqlerrm(sqlcode));
  end;
 end loop;
end;
/

This takes a long time, and the script is not of production quality! In this case the anonymous block completed without issues, but that was just one table, and not the same volume as on the real system where this happened. Look at the extent distribution now:

SQL> select count(extents),extents from user_segments s
  2  where s.segment_name = 'T1_SUBPART'
  3  group by extents;

COUNT(EXTENTS)    EXTENTS
-------------- ----------
           634          1
          3959         13
           356          5
             4         48
             1          8

Also consider the impact on DBA_FREE_SPACE:

SQL> select count(1) from dba_free_Space where tablespace_name = 'DESTINATION';

  COUNT(1)
----------
     19796

Now looking at a random subpartition shows the dilemma:

SQL> select PARTITION_NAME, SEGMENT_TYPE, TABLESPACE_NAME, EXTENT_ID, BYTES, BLOCKS
  2  from dba_extents where segment_name = 'T1_SUBPART'
  3  and partition_name = 'SYS_SUBP26737'
  4  order by extent_id;

PARTITION_NAME                 SEGMENT_TYPE       TABLESPACE_NAME                 EXTENT_ID      BYTES     BLOCKS
------------------------------ ------------------ ------------------------------ ---------- ---------- ----------
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             0     131072         16
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             1     131072         16
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             2     131072         16
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             3     131072         16
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             4     196608         24
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             5    1048576        128
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             6    1048576        128
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             7    1048576        128
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             8    1048576        128
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                             9    1048576        128
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                            10    1048576        128
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                            11    1048576        128
SYS_SUBP26737                  TABLE SUBPARTITION DESTINATION                            12    1048576        128
                                                                                            ---------- ----------
sum                                                                                            9109504       1112

So there you go. I don’t really know what to say other than “d’oh”. It probably doesn’t hurt you too much, but it is still worth knowing that if you consolidate thousands of segments in a previously empty tablespace you can get into trouble. If I work out why it does this I’ll post a follow-up. For now the workaround is to move segments serially. Yes, I know.
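For completeness, the serial workaround is simply the block from earlier without the parallel clause, so extent allocation stays nicely aligned. Again, this is a sketch and not production quality (same caveats as above):

```sql
-- Move subpartitions one at a time, serially. Errors for individual
-- subpartitions are reported and the loop carries on.
begin
  for i in (select subpartition_name
            from   user_tab_subpartitions
            where  table_name = 'T1_SUBPART'
            and    tablespace_name <> 'DESTINATION')
  loop
    begin
      execute immediate 'alter table t1_subpart move subpartition '
        || i.subpartition_name || ' tablespace DESTINATION';
    exception
      when others then
        dbms_output.put_line('could not move ' || i.subpartition_name
          || ' due to ' || sqlerrm(sqlcode));
    end;
  end loop;
end;
/
```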

Applying GI PSU 12.1.0.1.2 in the lab

In my previous posts about the first RAC Grid Infrastructure patch set I documented a few issues I encountered that were worth noting. But where things work as advertised I am more than happy to document that too. In a way, the January 2014 GI PSU works as you would hope (at least in my lab, on my 2 node cluster). Well, almost: if you have a non-12.1 database in your environment you might encounter this.

UPDATE: You might want to review some additional information with regard to datapatch.

Admittedly the output below is taken from an Oracle Restart (i.e. non-cluster) environment, but I can’t see this not happening in RAC:

[root@server1 stage]# opatchauto apply /u01/stage/12.1.0.1.2/17735306 -ocmrf /u01/stage/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/grid/product/12.1.0.1/grid

opatchauto log file: /u01/app/grid/product/12.1.0.1/grid/cfgtoollogs/opatchauto/17735306/ -
opatch_gi_2014-02-17_20-04-54_deploy.log

Parameter Validation: Successful

System Configuration Collection failed: oracle.osysmodel.driver.crs.productdriver.ProductDriverException:
PRCD-1027 : Failed to retrieve database ora11
PRCD-1229 : An attempt to access configuration of database ora11 was rejected because its
 version 11.2.0.3.0 differs from the program version 12.1.0.1.0. Instead run the program
from /u01/app/oracle/product/11.2.0.3.

opatchauto failed with error code 2.
[root@server1 stage]#

The environment

I have a 2 node RAC cluster that hasn’t previously been patched. It uses GNS for name resolution and is probably one of the more complex setups, so I was keen to test the new GI Patch Set Update. Note that a PSU rolls back previous patches; it is not a delta as you might know from RPM.

The cluster installation using GNS was slightly more challenging than the usual RAC system so I decided not to try separation of duties. In other words, the GRID and RDBMS owner are identical: oracle.

Staging

As always you have to update OPatch before you can apply the latest patch. I did this in all 4 homes: 2 RDBMS and 2 Grid homes. Download and stage the patch to a location you like by unzipping it.

Next you need the ocm response file. I simply copied it from the other cluster.

Patch application

I started the installation of the patch on node 1 of my cluster in a screen session as always. Node 1 was responsible for the GNS resource at the time. Following the instructions and some previous experience I decided to let opatchauto do all of the work. And this time Oracle even included the “apply” keyword in the docs! And it didn’t bail out immediately with obscure error messages either…

[root@rac12node1 temp]# opatchauto apply /u01/temp/17735306 -ocmrf /u01/temp/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/12.1.0.1/grid

opatchauto log file: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/17735306/opatch_gi_2014-01-15_08-54-25_deploy.log

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/12.1.0.1/grid
RAC home(s):
/u01/app/oracle/product/12.1.0.1/dbhome_1

Configuration Validation: Successful

Patch Location: /u01/temp/17735306
Grid Infrastructure Patch(es): 17077442 17303297 17552800
RAC Patch(es): 17077442 17552800

Patch Validation: Successful

Stopping RAC (/u01/app/oracle/product/12.1.0.1/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: rac12c

Applying patch(es) to "/u01/app/oracle/product/12.1.0.1/dbhome_1" ...
Patch "/u01/temp/17735306/17077442" successfully applied to "/u01/app/oracle/product/12.1.0.1/dbhome_1".
Patch "/u01/temp/17735306/17552800" successfully applied to "/u01/app/oracle/product/12.1.0.1/dbhome_1".

Stopping CRS ... Successful

Applying patch(es) to "/u01/app/12.1.0.1/grid" ...
Patch "/u01/temp/17735306/17077442" successfully applied to "/u01/app/12.1.0.1/grid".
Patch "/u01/temp/17735306/17303297" successfully applied to "/u01/app/12.1.0.1/grid".
Patch "/u01/temp/17735306/17552800" successfully applied to "/u01/app/12.1.0.1/grid".

Starting CRS ... Successful

Starting RAC (/u01/app/oracle/product/12.1.0.1/dbhome_1) ... Successful

SQL changes, if any, are applied successfully on the following database(s): RAC12C

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0.1/grid: 17077442, 17303297, 17552800
RAC Home: /u01/app/oracle/product/12.1.0.1/dbhome_1: 17077442, 17552800

opatchauto succeeded.

Wow, I could hardly believe my eyes. I “tail”ed the log file in a different session to see what it was doing, but the output seems less verbose these days than it was when “opatch auto” first came out. See further down in the article for useful log locations.

A few remarks. It took quite a while for the patch to advance past “Starting CRS…”. I remember staring at the screen anxiously in previous patch exercises and witnessing the CRS start sequence time out. In this case there is a valid reason for the slowness, and it’s documented in $GRID_HOME/cfgtoollogs/crsconfig/crspatch*.log: it validates ACFS and applies patches to the -MGMTDB database. In fact you should open a third session to tail the crspatch log file, as it provides a wealth of information about the stopping and starting of Clusterware resources.

With GI PSU 12.1.0.1.1 I ran into problems with CRS, which got confused about who was mastering the OCR. With 12.1.0.1.2 this didn’t happen: CRS started successfully on the last node, and even started the database instance.

Log locations

Information about what’s going on can be found in many locations…

    • $GRID_HOME/cfgtoollogs/opatchauto/17735306/opatch_gi_timestamp_deploy.log for the opatchauto process
    • $GRID_HOME/cfgtoollogs/crsconfig/crspatch_hostname_timestamp.log records stop, unlocking, start and locking of Clusterware

More log information can be found in:

  • $GRID_HOME/cfgtoollogs/opatch contains logs for the opatch commands issued against the GRID home
  • $ORACLE_HOME/cfgtoollogs/opatch/opatchtimestamp.log – records output from the individual opatch commands issued against the RDBMS home.

Note these logs are not written to in that sequence. Start with the opatchauto logs, then refer to the more specific log locations for individual troubleshooting. The complete log file names are referenced at the end of the opatchauto log.

SQLPatch

The post-installation steps require you to load SQL scripts into the database. It seems to have done that, as shown in the opatchauto log file:

2014-01-15_09-32-13 :
Ignoring the dbconsole command.
2014-01-15_09-32-17 :
Executing command:
/bin/bash -c 'ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1 \
ORACLE_SID=$(/u01/app/oracle/product/12.1.0.1/dbhome_1/bin/srvctl status instance -d RAC12C -n rac12node1 | cut -d " " -f 2) \
/u01/app/oracle/product/12.1.0.1/dbhome_1/OPatch/datapatch'
2014-01-15_09-33-45 :
Successfully executed the above command.

SQL changes, if any, are applied successfully on the following database(s): RAC12C

Except that I could not find a log file in the $ORACLE_HOME/sqlpatch/17552800 directory in the RDBMS home on the first node. The -MGMTDB database had been patched; its log is in the $GRID_HOME. The registry$history view didn’t show any sign of a patch either. Only after applying the PSU on the last node did Oracle patch the database.
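To see whether datapatch has actually recorded anything against a database, 12.1 offers a dedicated view alongside registry$history; a quick hedged check:

```sql
-- An empty result here means datapatch has not registered any SQL
-- changes in this database yet.
select patch_id, action, status, action_time
from   dba_registry_sqlpatch
order  by action_time;
```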

Update

It appears that PSUs 1 and 2 have issues when running datapatch. I found this in the output of my patch log:

Executing command:
/bin/bash -c 'ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_2 ORACLE_SID=$(/u01/app/oracle/product/12.1.0.1/dbhome_2/bin/srvctl sta
tus instance -d RCDB1 -n rac12node3 | cut -d " " -f 2) /u01/app/oracle/product/12.1.0.1/dbhome_2/OPatch/datapatch'
2014-03-21_11-25-36 :

COMMAND EXECUTION FAILURE :
SQL Patching tool version 12.1.0.1.0 on Fri Mar 21 11:23:26 2014
Copyright (c) 2013, Oracle.  All rights reserved.

Connecting to database...OK
Determining current state...done
For the following PDBs: CDB$ROOT
  Nothing to roll back
  The following patches will be applied: 17552800
For the following PDBs: PDB$SEED
  Nothing to roll back
  The following patches will be applied: 17552800
For the following PDBs: ACDEMOPDB
  Nothing to roll back
  The following patches will be applied: 17552800
For the following PDBs: DUBLIN
  Nothing to roll back
  The following patches will be applied: 17552800
Adding patches to installation queue...done
Installing patches...done
Validating logfiles...

ERROR:
mkdir: cannot create directory `/u01/app/grid/cfgtoollogs': Permission denied
mkdir: cannot create directory `/u01/app/grid/cfgtoollogs': Permission denied
mkdir: cannot create directory `/u01/app/grid/cfgtoollogs': Permission denied
mkdir: cannot create directory `/u01/app/grid/cfgtoollogs': Permission denied
DBD::Oracle::st execute failed: ORA-03113: end-of-file on communication channel
Process ID: 20551
Session ID: 21 Serial number: 9 (DBD ERROR: OCIStmtExecute) [for Statement "ALTER PLUGGABLE DATABASE pdb$seed
                 CLOSE IMMEDIATE INSTANCES=ALL"] at /u01/app/oracle/product/12.1.0.1/dbhome_2/sqlpatch/sqlpatch.pm line 448, <LOGFILE> line 6129.

The corresponding bug number is 17421502, and a patch exists. I downloaded the patch and applied it on all of my nodes (rolling!). After everything had come back up I re-ran datapatch, and this time it seemed to work without an “end-of-file on communication channel”. See MOS note “Datapatch may fail while patching a RAC+PDB environment (Doc ID 1632818.1)” for more information.


Note that you have to set your ORACLE_SID to the database to be patched for the above command to work.

Summary

Applying the January PSU to my cluster worked flawlessly, definitely a huge improvement over the last PSU which was a bit of a disaster.

Applying PSU 12.1.0.1.1 to Oracle Restart

I was actually hoping I had written enough about applying the PSU for 12.1.0.1.0, but today I tried to apply the patch to my Oracle Restart home on my lab VM, and guess what: it wasn’t as automatic as I thought.

I must have made the classic copy & paste error and applied the patch to the GRID_HOME only. Please don’t do this! The README clearly states you can, but if you do, you will run into the mess below. At the time of writing there was no hit for the error on my favourite search engine, hence this post. If you see this on a real database, and not on your personal lab VM, you should of course consult Oracle Support about what to do. This is for educational purposes only, so to speak.

I initiated the patch after upgrading OPatch in both the RDBMS and GRID homes. The GI PSU (17272829) had been unzipped as the grid owner (“grid”) in /u01/app/grid/stage. I always use separation of duties in my lab VMs to test the edge cases. The PATH included the OPatch directory from the GRID_HOME. The command I executed was:

[root@server1 ~]$ opatchauto apply /u01/app/grid/stage/17272829 \
> -oh /u01/app/grid/product/12.1.0.1/grid -ocmrf /tmp/ocm.rsp

To my surprise this completed fine. After I realised my mistake (note the -oh flag) I wanted to patch the only RDBMS home too, which should have been simple. I unzipped the patch as oracle in /u01/app/oracle/stage and pointed all environment variables to the RDBMS home before executing the patch command:

[root@server1 stage]# opatchauto apply /u01/app/oracle/stage/17272829 \
> -oh /u01/app/oracle/product/12.1.0.1/dbhome_1 -ocmrf /tmp/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/oracle/product/12.1.0.1/dbhome_1

opatchauto log file: /u01/app/grid/product/12.1.0.1/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-11-18_04-01-04_deploy.log

Parameter Validation: Successful

System Configuration Collection failed: oracle.osysmodel.driver.crs.productdriver.ProductDriverException: 
Unknown host name in config:server1.example.com

opatchauto failed with error code 2.

Looks like a DNS error. The log file showed this:

$ cat /u01/app/grid/product/12.1.0.1/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-11-18_03-43-14_deploy.log
===============START of log file================

Initializing command-line Information:
Grid Home                     : /u01/app/grid/product/12.1.0.1/grid
Patch Location                : /u01/app/oracle/stage/17272829
OCM Response File location    : /tmp/ocm.rsp
Nodename                      : server1
Analyze mode                  : false
No-restart                    : false
Patching Home(s) selected     : /u01/app/oracle/product/12.1.0.1/dbhome_1, 
Database Name specified       : false

Initialization Successful.


Parameter Validation: Successful


2013-11-18_03-43-15 :
Collect Patch Data: In Progress...
2013-11-18_03-43-16 :
Collect Patch Data: Completed.
2013-11-18_03-43-16 :
Collect Target Data: In Progress...
2013-11-18_03-43-16 :
GIPatchingHelper::createSystemInstance() called for selected homes
2013-11-18_03-43-19 :
GIPatchingHelper::createSystemInstance() Failed

oracle.osysmodel.driver.crs.AbstractCrsProductDriver.findHostByName(AbstractCrsProductDriver.java:145)
oracle.osysmodel.driver.crs.AbstractCrsProductDriver.findOrMakeHome(AbstractCrsProductDriver.java:415)
oracle.osysmodel.driver.crs.AbstractCrsProductDriver.makeDBHomes(AbstractCrsProductDriver.java:507)
oracle.osysmodel.driver.crs.CrsProductDriver.buildConfigGraph(CrsProductDriver.java:295)
oracle.osysmodel.driver.crs.CrsProductDriver.buildConfigGraph(CrsProductDriver.java:143)
oracle.opatch.gi.GIPatchingHelper.createSystemInstance(GIPatchingHelper.java:251)
oracle.opatch.gi.GIPatchingSessionInfo.createSystemInstance(GIPatchingSessionInfo.java:310)
oracle.opatch.gi.GIPatchingSessionInfo.initializeConfiguration(GIPatchingSessionInfo.java:269)
oracle.opatch.gi.GIPatchingSessionInfo.initializeConfigData(GIPatchingSessionInfo.java:288)
oracle.opatch.gi.GIPatching.process(GIPatching.java:584)
oracle.opatch.GIPatchingSession.process(GIPatchingSession.java:89)
oracle.opatch.OPatchSession.main(OPatchSession.java:2871)
oracle.opatch.OPatch.main(OPatch.java:690)
System Configuration Collection failed: oracle.osysmodel.driver.crs.productdriver.ProductDriverException: 
Unknown host name in config:server1.example.com

I double checked DNS, /etc/resolv.conf, /etc/nsswitch.conf and many others but couldn’t get to the bottom of the problem in the time I allowed myself for troubleshooting. MOS wasn’t too helpful either: the note explaining how to manually apply the GI patch was written for RAC, not Oracle Restart.
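For reference, these are the kinds of resolver checks I mean. This is a generic sketch, not taken from the original session; localhost stands in for the fully qualified name reported in the opatchauto log:

```shell
# Check whether the name resolves via the lookup order in /etc/nsswitch.conf
# (hypothetical target; substitute the host name from the log)
host=localhost   # e.g. server1.example.com

if getent hosts "$host" > /dev/null; then
  echo "$host resolves"
else
  echo "$host does NOT resolve"
fi

# Compare with what the node itself reports
hostname
```

If getent fails while nslookup succeeds, the name is in DNS but the nsswitch.conf lookup order is not picking it up.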

Eventually I decided to repeat the patch from the GRID_HOME just the way I had performed the first attempt, only without the -oh flag, and hurray, it worked:

[root@server1 stage]# opatchauto apply /u01/app/grid/stage/17272829 -ocmrf /tmp/ocm.rsp 
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation.  All rights reserved.

OPatchauto version : 12.1.0.1.2
OUI version        : 12.1.0.1.0
Running from       : /u01/app/grid/product/12.1.0.1/grid

opatchauto log file: /u01/app/grid/product/12.1.0.1/grid/cfgtoollogs/opatchauto/17272829/opatch_gi_2013-11-18_04-48-42_deploy.log

Parameter Validation: Successful

Grid Infrastructure home:
/u01/app/grid/product/12.1.0.1/grid
RAC home(s):
/u01/app/oracle/product/12.1.0.1/dbhome_1

Configuration Validation: Successful

Patch Location: /u01/app/grid/stage/17272829
Grid Infrastructure Patch(es): 17027533 17077442 17303297 
RAC Patch(es): 17027533 17077442 

Patch Validation: Successful

Stopping RAC (/u01/app/oracle/product/12.1.0.1/dbhome_1) ... Successful

Applying patch(es) to "/u01/app/oracle/product/12.1.0.1/dbhome_1" ...
Patch "/u01/app/grid/stage/17272829/17027533" successfully applied to "/u01/app/oracle/product/12.1.0.1/dbhome_1".
Patch "/u01/app/grid/stage/17272829/17077442" successfully applied to "/u01/app/oracle/product/12.1.0.1/dbhome_1".

Applying patch(es) to "/u01/app/grid/product/12.1.0.1/grid" ...
Patch "/u01/app/grid/stage/17272829/17027533" is already installed on "/u01/app/grid/product/12.1.0.1/grid". Please rollback the existing identical patch first.
Patch "/u01/app/grid/stage/17272829/17077442" is already installed on "/u01/app/grid/product/12.1.0.1/grid". Please rollback the existing identical patch first.
Patch "/u01/app/grid/stage/17272829/17303297" is already installed on "/u01/app/grid/product/12.1.0.1/grid". Please rollback the existing identical patch first.

Starting RAC (/u01/app/oracle/product/12.1.0.1/dbhome_1) ... Successful

[WARNING] The local database(s) on "/u01/app/oracle/product/12.1.0.1/dbhome_1" is not running. SQL changes, if any, cannot be applied.

Apply Summary:
Following patch(es) are successfully installed:
RAC Home: /u01/app/oracle/product/12.1.0.1/dbhome_1: 17027533, 17077442

opatchauto ran into some warnings during patch installation (Please see log file for details):
GI Home: /u01/app/grid/product/12.1.0.1/grid: 17027533, 17077442, 17303297

opatchauto succeeded.

It seems to have worked: the patches already in the GRID_HOME were detected as identical and skipped. In a real-life situation you should consult Oracle Support to get their blessing.

Summary

Moral of the story: don’t apply the patch to a single Oracle home only; let opatchauto patch the complete stack. Also ensure you run datapatch to apply the SQL changes to any database running from your RDBMS home.

HCC error on Exadata after partitioning maintenance

Recently I have been asked to investigate the following error on an Exadata system.

ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

Well, that’s simple, I thought! Must be (d)NFS-mounted storage, right? Everyone knows that you can have HCC on Exadata (and a few other Oracle storage products). So I looked at the problem and soon found out that the data files in question all resided on the cells. Here is the sequence of events:

  1. HCC is introduced to a range-list partitioned table, residing on TBS1 (all segments, no exception). The range partitions are in fact interval partitions
  2. Subpartitions are moved to TBS2, this way they are compressed
  3. TBS1 is no longer needed and dropped

Now when we tried to insert a row into the table we got the above error. As with so many Oracle errors, this one is not very helpful in this situation. The error only occurs when a new partition is automatically created, so here is the test case to demonstrate the problem.

create table martin.T1 (id, c, name)
tablespace tbs1
partition by range (id)
interval (1000) store in (tbs1)
subpartition by list (c) subpartition template
(
  subpartition sp1 values ('0'),
  subpartition sp2 values ('1'),
  subpartition sp3 values ('2'),
  subpartition sp4 values ('3'),
  subpartition sp5 values ('4')
)
(
  partition p_man values less than (1000) tablespace tbs1
)
as select object_id, to_char(mod(rownum,4)) c, object_name from dba_objects;

You can see for yourself that everything is neatly stored on TBS1:

MARTIN:mbach1> select count(*),tablespace_name
  2 from dba_segments where segment_name = 'T1'
  3 group by tablespace_name;

  COUNT(*) TABLESPACE_NAME
---------- ------------------------------
       399 TBS1

Personal thoughts

Two comments about the above configuration.

There are 400 entries in user_tab_subpartitions, but only 399 in user_segments. It turned out that P_MAN_SP5 wasn’t created thanks to deferred segment creation, which had been left at its default value of TRUE. Interestingly, all subpartitions were created for the automatically created interval partitions, although I had a bug in my data generation query: mod(rownum,4) will NEVER generate a value of 4.
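A quick way to see the mismatch is to put the two counts side by side. This is a sketch against the same dictionary views used above:

```sql
-- counts differ when deferred segment creation skipped a segment
select 'subpartitions' what, count(*) cnt
  from user_tab_subpartitions
 where table_name = 'T1'
union all
select 'segments', count(*)
  from user_segments
 where segment_name = 'T1';
```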

This leads to point 2: now you are wasting space! Due to the default value of _partition_large_extents in my database (TRUE from 11.2.0.2 onwards), any partition segment will be created 8 MB in size, even if there are only 250 (or 0, as in this case!) records in it…

Enter HCC Compression

Now I enable compression (note this is propagated to partitions and subpartitions):

MARTIN:mbach1> alter table T1 compress for query high;

Table altered.

This doesn’t compress anything; it merely enables compression for rows inserted using direct path operations. Therefore I need to move the subpartitions, which causes them to be compressed. In this scenario I have a dedicated tablespace for the compressed data, TBS2.
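The move commands themselves aren’t shown above; for a single subpartition the syntax looks like this (a sketch — in practice you would generate one statement per row in user_tab_subpartitions):

```sql
-- moving a subpartition rewrites it via direct path, so it picks up
-- the QUERY HIGH compression attribute on the way into TBS2
alter table martin.t1 move subpartition p_man_sp1 tablespace tbs2;
```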

MARTIN:mbach1> select count(*),tablespace_name from dba_segments where segment_name = 'T1' group by tablespace_name;

   COUNT(*) TABLESPACE_NAME
----------- ------------------------------
        399 TBS2

Since all segments for T1 are now on TBS2, I can drop TBS1, which I did.
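The drop itself is a one-liner; a sketch (whether you add the datafiles clause depends on your storage setup):

```sql
-- TBS1 no longer holds any T1 segments, so it can go
drop tablespace tbs1 including contents and datafiles;
```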

Now what has changed? The best answer you can get is from DBMS_METADATA:

MARTIN:mbach1> select dbms_metadata.get_ddl('TABLE','T1') from dual;

  CREATE TABLE "MARTIN"."T1"
   (	"ID" NUMBER,
	"C" VARCHAR2(40),
	"NAME" VARCHAR2(128)
   ) PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
 COMPRESS FOR QUERY HIGH
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS1"
  PARTITION BY RANGE ("ID") INTERVAL (1000) STORE IN ("TBS1")
  SUBPARTITION BY LIST ("C")
  SUBPARTITION TEMPLATE (
    SUBPARTITION "SP1" VALUES ( '0' ),
    SUBPARTITION "SP2" VALUES ( '1' ),
    SUBPARTITION "SP3" VALUES ( '2' ),
    SUBPARTITION "SP4" VALUES ( '3' ),
    SUBPARTITION "SP5" VALUES ( '4' ) )
 (PARTITION "P_MAN"  VALUES LESS THAN (1000)
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS1"
 COMPRESS FOR QUERY HIGH
 ( SUBPARTITION "P_MAN_SP1"  VALUES ('0')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP2"  VALUES ('1')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP3"  VALUES ('2')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP4"  VALUES ('3')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP5"  VALUES ('4')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ) )

As you can see, only the subpartition definitions changed and correctly reflect the move to TBS2. What’s going to hurt me is the reference to TBS1 in the STORE IN clause for the interval partitions. And indeed:

MARTIN:mbach1> insert into T1 values(150000,'1', 'ICH BINS');
insert into T1 values(150000,'1', 'ICH BINS')
            *
ERROR at line 1:
ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

Well, I wonder how you could compress data on a non-existent tablespace…

I tried various ways to change the store-in clause of the interval partition but for some reason didn’t find the correct syntax in the documentation. Thanks to Andy Klock I finally managed to get around the issue; he found the correct syntax in a MOS note. It does in fact exist in the documentation, as the alter_interval_partitioning clause.

Now I can force the new interval partitions to be created on TBS2:

MARTIN@MBACH1:1> alter table martin.t1 set store in (tbs2);

Table altered.

The resulting metadata shows the change:

  CREATE TABLE "MARTIN"."T1"
   (	"ID" NUMBER,
	"C" VARCHAR2(40),
	"NAME" VARCHAR2(128)
   ) PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
 COMPRESS FOR QUERY HIGH
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS1"
  PARTITION BY RANGE ("ID") INTERVAL (1000) STORE IN ("TBS2")
  SUBPARTITION BY LIST ("C")
  SUBPARTITION TEMPLATE (
    SUBPARTITION "SP1" VALUES ( '0' ),
    SUBPARTITION "SP2" VALUES ( '1' ),
    SUBPARTITION "SP3" VALUES ( '2' ),
    SUBPARTITION "SP4" VALUES ( '3' ),
    SUBPARTITION "SP5" VALUES ( '4' ) )
 (PARTITION "P_MAN"  VALUES LESS THAN (1000)
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS1"
 COMPRESS FOR QUERY HIGH
 ( SUBPARTITION "P_MAN_SP1"  VALUES ('0')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP2"  VALUES ('1')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP3"  VALUES ('2')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP4"  VALUES ('3')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP5"  VALUES ('4')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ) )

It now is possible to insert data again:

MARTIN@MBACH1:1> insert into T1 values(150000,'1', 'ICH BINS');

1 row created.

MARTIN@MBACH1:1> insert into T1 values(50,'2', 'ICH BINS');

1 row created.

MARTIN@MBACH1:1> insert into T1 values(2050,'2', 'ICH BINS');

1 row created.

MARTIN@MBACH1:1> commit;

Commit complete.

All is well again. You could now change the default attributes of the table and all partitions to TBS2 using commands similar to these:

MARTIN@MBACH1:1> alter table t1 modify default attributes tablespace tbs2;

Table altered.

MARTIN@MBACH1:1> alter table t1 modify default attributes for partition p_man tablespace tbs2;

Table altered.
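Repeating the second command for each existing partition can be scripted; here is a hedged PL/SQL sketch that reads the partition names from user_tab_partitions:

```sql
-- point the default attributes of every existing partition at TBS2
begin
  for p in (select partition_name
              from user_tab_partitions
             where table_name = 'T1')
  loop
    execute immediate
      'alter table t1 modify default attributes for partition '
      || p.partition_name || ' tablespace tbs2';
  end loop;
end;
/
```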

The last command obviously has to be repeated for each partition that has already been created. The DDL for the table now looks healthier:

MARTIN@MBACH1:1> select dbms_metadata.get_ddl('TABLE','T1') from dual;

DBMS_METADATA.GET_DDL('TABLE','T1')
--------------------------------------------------------------------------------

  CREATE TABLE "MARTIN"."T1"
   (	"ID" NUMBER,
	"C" VARCHAR2(40),
	"NAME" VARCHAR2(128)
   ) PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
 COMPRESS FOR QUERY HIGH
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS2"
  PARTITION BY RANGE ("ID") INTERVAL (1000) STORE IN ("TBS2")
  SUBPARTITION BY LIST ("C")
  SUBPARTITION TEMPLATE (
    SUBPARTITION "SP1" VALUES ( '0' ),
    SUBPARTITION "SP2" VALUES ( '1' ),
    SUBPARTITION "SP3" VALUES ( '2' ),
    SUBPARTITION "SP4" VALUES ( '3' ),
    SUBPARTITION "SP5" VALUES ( '4' ) )
 (PARTITION "P_MAN"  VALUES LESS THAN (1000)
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH )

Problem solved! Not that it should really matter what’s in user_tab_partitions, as these partition segments do not exist (only the subpartitions have segments).

Summary

The partition maintenance operation, followed by dropping the default tablespace for new (interval) partitions, caused Oracle to raise a misleading error message. Instead of ORA-64307 (stating that HCC is not supported on this storage type), a better error message would have let the user know that the tablespace did not exist.

This was on 11.2.0.3, I will try and test on 11.2.0.4 and 12.1.0.1 later and update the post.

Interesting post-install steps for Oct 2013 12.1.0.1.1 PSU

I have already written about RAC/Grid Infrastructure related patching of 12.1.0.1.0 to 12.1.0.1.1, aka the October 2013 PSU for the database.

This post is a follow-up for pure RDBMS-only installations. I initially thought it wasn’t worth blogging about (hence the lag between the posts), but then I came across an interesting post-apply step that is required for the databases: datapatch.

This is a new tool to be run after the Patch Set Update installation against each database (a non-CDB in this example).

UPDATE: there are known problems with the tool for PSU 1 and PSU 2. See MOS for a reference, an explanation and a fix.

Here is the reference output for a non-CDB:

[oracle@server1 17027533]$ . oraenv
ORACLE_SID = [ora11] ? NCDB
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@server1 17027533]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jan 3 23:06:14 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1603411968 bytes
Fixed Size		    2288872 bytes
Variable Size		  905970456 bytes
Database Buffers	  687865856 bytes
Redo Buffers		    7286784 bytes
Database mounted.
Database opened.

SQL> select cdb from v$database;
CDB
---
NO

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@server1 17027533]$ cd /u01/app/oracle/product/12.1.0.1/dbhome_1/OPatch/
[oracle@server1 OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.1.0 on Fri Jan  3 23:06:38 2014
Copyright (c) 2013, Oracle.  All rights reserved.

Connecting to database...OK
Determining current state...
Currently installed SQL Patches:
Currently installed C Patches: 17027533
Nothing to roll back
The following patches will be applied: 17027533
Adding patches to installation queue...
Installing patches...
Validating logfiles...
Patch 17027533 apply: SUCCESS
  logfile: /u01/app/oracle/product/12.1.0.1/dbhome_1/sqlpatch/17027533/
17027533_apply_NCDB_NCDB_2014Jan03_23_06_45.log (no errors)
SQL Patching tool complete on Fri Jan  3 23:06:52 2014
[oracle@server1 OPatch]$

Interesting tool! Even more so if you look at the file, which calls “sqlpatch”, a utility I hadn’t come across yet. It’s also very convenient that it reports “no errors” from the patch application!

[oracle@server1 OPatch]$ tail -n3 datapatch
# Call sqlpatch to do the real work
$ORACLE_HOME/sqlpatch/sqlpatch $@

Tracing down sqlpatch I found the following note on Metalink, which reminded me of a feature I read about but didn’t have time to test: Oracle Database 12.1 : FAQ on Queryable Patch Inventory (Doc ID 1530108.1). With this at my disposal I can finally query the database for installed patches!

SQL> desc DBMS_QOPATCH
FUNCTION ADD_OINV_JOB RETURNS BOOLEAN
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NNAME				VARCHAR2		IN
 INAME				VARCHAR2		IN
PROCEDURE CONFIG_OINV_JOBS
FUNCTION DROP_OINV_JOB RETURNS BOOLEAN
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NNAME				VARCHAR2		IN
 INAME				VARCHAR2		IN
FUNCTION GET_OPATCH_BUGS RETURNS XMLTYPE
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM				VARCHAR2		IN     DEFAULT
FUNCTION GET_OPATCH_COUNT RETURNS XMLTYPE
FUNCTION GET_OPATCH_DATA RETURNS XMLTYPE
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM				VARCHAR2		IN
FUNCTION GET_OPATCH_FILES RETURNS XMLTYPE
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM				VARCHAR2		IN
FUNCTION GET_OPATCH_INSTALL_INFO RETURNS XMLTYPE
FUNCTION GET_OPATCH_LIST RETURNS XMLTYPE
FUNCTION GET_OPATCH_LSINVENTORY RETURNS XMLTYPE
FUNCTION GET_OPATCH_OLAYS RETURNS XMLTYPE
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM				VARCHAR2		IN
FUNCTION GET_OPATCH_PREQS RETURNS XMLTYPE
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM				VARCHAR2		IN
FUNCTION GET_OPATCH_XSLT RETURNS XMLTYPE
FUNCTION GET_PENDING_ACTIVITY RETURNS XMLTYPE
PROCEDURE GET_SQLPATCH_STATUS
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM				VARCHAR2		IN     DEFAULT
FUNCTION IS_PATCH_INSTALLED RETURNS XMLTYPE
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM				VARCHAR2		IN
PROCEDURE OPATCH_INV_REFRESH_JOB
PROCEDURE OPATCH_RUN_JOB
FUNCTION PATCH_CONFLICT_DETECTION RETURNS XMLTYPE
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 FILENAME			VARCHAR2		IN
PROCEDURE REFRESH_OPATCH_DATA
PROCEDURE REPLACE_DIRS_INT
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PF_ID				NUMBER			IN
PROCEDURE REPLACE_LOGSCRPT_DIRS
PROCEDURE SET_CURRENT_OPINST
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NODE_NAME			VARCHAR2		IN     DEFAULT
 INST_NAME			VARCHAR2		IN     DEFAULT
PROCEDURE SET_DEBUG
 Argument Name			Type			In/Out Default?
 ------------------------------ ----------------------- ------ --------
 DEBUG				BOOLEAN 		IN

Now to see which patches are present in the ORACLE_HOME my database is started from I can use this query:

SQL> select xmltransform(dbms_qopatch.get_opatch_list(), dbms_qopatch.GET_OPATCH_XSLT()) from dual ;

Patch Details:

Patch(sqlpatch) 17027533:   applied on 2014-01-03T23:02:36Z
Unique Patch ID: 16677152
  Patch Description: Database Patch Set Update : 12.1.0.1.1 (17027533)
  Created on	 : 27 Sep 2013, 05:30:33 hrs PST8PDT
  Files Touched:

With similar syntax I can do a lot more: I can check whether a given patch is installed, list overlay patches, and so on. Cool stuff we didn’t have before! I should add, though, that not every single feature produced meaningful output; there is still some work to be done before it’s really, really useful.
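For example, checking for the PSU patch uses IS_PATCH_INSTALLED. A sketch based on the package signature shown in the desc output above, combined with the same XSLT transformation as before:

```sql
-- is patch 17027533 (the 12.1.0.1.1 PSU) installed in this home?
select xmltransform(dbms_qopatch.is_patch_installed('17027533'),
                    dbms_qopatch.get_opatch_xslt())
  from dual;
```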

Debugging post-patch script application

For the more curious reader: you can trace sqlpatch using the -debug flag, which I found in the Perl code that is ultimately executed.