Martins Blog

Trying to explain complex things in simple terms

Automatic Deployment of CDBs with non-default database options in 12.1.0.2

Posted by Martin Bach on June 20, 2016

I have written about the importance of automation in previous posts, and this one follows the same line. This time I wanted to test and document my experience with “golden database templates”. Why? Because most DBAs will appreciate that there are more interesting things to do than to run “create database” scripts all day, so why not automate the process? For quite some time DBCA, the Database Configuration Assistant, has offered the option to create templates. One approach you could use would be to

  • Create a “golden master” database
  • Use DBCA to create a template based on this
  • And enjoy the fact that every database is exactly the same

All of this is done in the hope of making life a lot easier for the administrators.

Motivation

In many environments a database is not an island, it’s rather part of a wider ecosystem where monitoring solutions, auditing, and what-have-you need to integrate with each database from the moment it is deployed. A template – at least to my mind – is better suited to this task than executing a number of scripts post database creation. SQL scripts can always run into errors, and their execution might require review of the output by a human (which is what I’d like to avoid). Oh, and it seems to take forever, even on reasonably modern hardware, to create a database from scripts.

Furthermore, it is possible to create a CDB with fewer options in 12.1.0.2, as explained by Mike Dietrich and on MOS. I wanted to create a reference for myself on how to deploy a CDB in exactly the same way, each time, without all the options in it. Please have a look at these references for more background information.

All of them are very good references, and they contain the necessary “ifs” and “buts”. Please make sure you understand them all, especially those in MOS DOC ID 1616554.1, section 1. If the referenced documents resonate with you, and the pros/cons are understood, read on :)

Further to the introductory question of why to do this, there is more to mention: I don’t want any extra components in my database, for two main reasons:

  1. A component that is not installed can’t be used as an attack vector
  2. Components that aren’t installed in the RDBMS can’t be used accidentally

And there are many more … For the purpose of this post I am going to create a database with all the extra components I ever need: none.

In case you found this via an Internet search engine, here are the facts about my environment.

[oracle@server1 OPatch]$ ./opatch lspatches
22674709;Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
22291127;Database Patch Set Update : 12.1.0.2.160419 (22291127)

OPatch succeeded.
[oracle@server1 OPatch]$ cat /etc/oracle-release
Oracle Linux Server release 6.8
[oracle@server1 OPatch]$ uname -r -o
4.1.12-37.4.1.el6uek.x86_64 GNU/Linux

This is neither a RAC nor an Oracle Restart database, just a single RDBMS home with the databases on a traditional file system. This is a VM running on VirtualBox 5.0.20 in my lab.

Create the Golden Image database

Following MOS note 2001512.1 and understanding the comments in each section, I consciously made the decision to remove everything that’s not needed for my purposes (these aren’t necessarily yours!). I used DBCA to create the scripts for a CDB named CDBTEMPL. After exiting DBCA I can find these scripts in /u01/app/oracle/admin/CDBTEMPL/scripts/:

[oracle@server1 scripts]$ ls -l
total 72
-rwxr-xr-x. 1 oracle oinstall  834 Jun 15 08:31 CDBTEMPL.sh
-rwxr-xr-x. 1 oracle oinstall 1260 Jun 15 08:31 CDBTEMPL.sql
-rw-r-----. 1 oracle oinstall  550 Jun 15 08:31 CreateClustDBViews.sql
-rw-r-----. 1 oracle oinstall 1552 Jun 15 08:31 CreateDB.sql
-rw-r-----. 1 oracle oinstall 1668 Jun 15 08:31 CreateDBCatalog.sql
-rw-r-----. 1 oracle oinstall  397 Jun 15 08:31 CreateDBFiles.sql
-rw-r-----. 1 oracle oinstall 1171 Jun 15 08:31 JServer.sql
-rw-r-----. 1 oracle oinstall  386 Jun 15 08:31 apex.sql
-rw-r-----. 1 oracle oinstall  998 Jun 15 08:31 context.sql
-rw-r-----. 1 oracle oinstall  353 Jun 15 08:31 cwmlite.sql
-rw-r-----. 1 oracle oinstall  365 Jun 15 08:31 datavault.sql
-rw-r-----. 1 oracle oinstall 2011 Jun 15 08:31 init.ora
-rw-r-----. 1 oracle oinstall  342 Jun 15 08:31 interMedia.sql
-rw-r-----. 1 oracle oinstall  344 Jun 15 08:31 labelSecurity.sql
-rw-r-----. 1 oracle oinstall 1155 Jun 15 08:31 lockAccount.sql
-rw-r-----. 1 oracle oinstall  360 Jun 15 08:31 ordinst.sql
-rw-r-----. 1 oracle oinstall  857 Jun 15 08:31 postDBCreation.sql
-rw-r-----. 1 oracle oinstall  335 Jun 15 08:31 spatial.sql

I need to update the $ORACLE_SID.sql file, which originally looks like this:

set verify off
ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE
ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE
ACCEPT dvOwnerPassword CHAR PROMPT 'Enter  user password: ' HIDE
ACCEPT dvAccountManagerPassword CHAR PROMPT 'Enter  user password: ' HIDE
host /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/orapwd file=/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/orapwCDBTEMPL force=y format=12
@/u01/app/oracle/admin/CDBTEMPL/scripts/CreateDB.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/CreateDBFiles.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/CreateDBCatalog.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/JServer.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/context.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/ordinst.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/interMedia.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/cwmlite.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/spatial.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/labelSecurity.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/apex.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/datavault.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/CreateClustDBViews.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/lockAccount.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/postDBCreation.sql

Using the very useful dependency matrix in MOS note 2001512.1 I can safely remove all components I don’t need for my particular case (again, yours might be different). The resulting file looks like this for me:

set verify off
ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE
ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE
ACCEPT dvOwnerPassword CHAR PROMPT 'Enter  user password: ' HIDE
ACCEPT dvAccountManagerPassword CHAR PROMPT 'Enter  user password: ' HIDE
host /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/orapwd file=/u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/orapwCDBTEMPL force=y format=12
@/u01/app/oracle/admin/CDBTEMPL/scripts/CreateDB.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/CreateDBFiles.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/CreateDBCatalog.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/CreateClustDBViews.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/lockAccount.sql
@/u01/app/oracle/admin/CDBTEMPL/scripts/postDBCreation.sql

Another VERY important aspect is the use of Oracle’s Perl. You need to make sure that your PATH resolves to Oracle’s Perl, and not the one installed with the operating system. In other words, “which perl” typed at the command prompt should return /u01/app/oracle/product/12.1.0.2/dbhome_1/perl/bin/perl, as shown in this example:

[oracle@server1 ~]$ export PATH=$ORACLE_HOME/perl/bin:$PATH
[oracle@server1 ~]$ which perl
/u01/app/oracle/product/12.1.0.2/dbhome_1/perl/bin/perl

If you are on a recent chip and want to try these steps in VirtualBox like me, you might run into issues with Perl core-dumping on you. Have a look at http://dbaontap.com/2016/01/13/vbox5/ before you start to see if you are susceptible to the same problem.

When you are ready, start the database creation in a screen session.
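If screen isn’t part of your toolkit yet: a screen session keeps the long-running creation alive even if your SSH connection drops. Starting a named session is a one-liner (the session name below is my own choice):

[oracle@server1 ~]$ screen -S cdbtempl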

[oracle@server1 dbs]$ cd /u01/app/oracle/admin/CDBTEMPL/scripts/
[oracle@server1 scripts]$ export PATH=$ORACLE_HOME/perl/bin:$PATH
[oracle@server1 scripts]$ which perl
/u01/app/oracle/product/12.1.0.2/dbhome_1/perl/bin/perl
[oracle@server1 scripts]$ bash CDBTEMPL.sh

Don’t be startled by having to enter a password for “user” twice; this is a slight irregularity in the script. Looking at the source, it appears to be prompting you for Data Vault and Oracle Label Security information:

ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE
ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE
ACCEPT dvOwnerPassword CHAR PROMPT 'Enter  user password: ' HIDE
ACCEPT dvAccountManagerPassword CHAR PROMPT 'Enter  user password: ' HIDE

The last two don’t quite make sense, and going by the extra space character there should have been a username …

Once the script has been started, go and get some coffee. It will take a while to finish. After it has completed, it’s time to see what we have. The alias “sq” is just a shortcut for “sqlplus / as sysdba”.
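For reference, the alias can be defined in your shell profile like so:

alias sq='sqlplus / as sysdba'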

[oracle@server1 ~]$ sq

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jun 15 10:20:10 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO

SQL> select comp_name, version, status from dba_server_registry;

COMP_NAME                                VERSION                        STATUS
---------------------------------------- ------------------------------ -----------
Oracle Workspace Manager                 12.1.0.2.0                     VALID
Oracle XML Database                      12.1.0.2.0                     VALID
Oracle Database Catalog Views            12.1.0.2.0                     VALID
Oracle Database Packages and Types       12.1.0.2.0                     VALID
Oracle Real Application Clusters         12.1.0.2.0                     OPTION OFF

SQL> select cdb, name from v$database;

CDB NAME
--- ---------
YES CDBTEMPL

SQL> select * From v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
SYSDG                          FALSE FALSE FALSE FALSE TRUE  FALSE          1
SYSBACKUP                      FALSE FALSE FALSE TRUE  FALSE FALSE          1
SYSKM                          FALSE FALSE FALSE FALSE FALSE TRUE           1

So this looks like it worked in my favour: the database is created, and only the bare minimum of options is present. Actually, it seems that you could potentially also remove Workspace Manager and the RAC views, but they are OK for me to keep. Refer to MOS for more details about these two.

The database is in fact a CDB, and its name is CDBTEMPL. All the users I might want are in the password file as well.

Customising the Golden Image

First I want a new PDB to use in lieu of PDB$SEED as the basis for new PDBs. Let’s create it:

SQL> !mkdir /u01/oradata/CDBTEMPL/MASTER

SQL> create pluggable database MASTER admin user PDBMASTER identified by PDBMASTERPWD
  2  default tablespace users
  3  datafile '/u01/oradata/CDBTEMPL/MASTER/users01.dbf'
  4  size 250m autoextend on next 250m maxsize 1G
  5  file_name_convert = ('/u01/oradata/CDBTEMPL/pdbseed','/u01/oradata/CDBTEMPL/MASTER')
  6  storage (maxsize 20G);

Pluggable database created.

While I’m still connected to the CDB$ROOT I create a few common user accounts that I need:

SQL> create user c##backupadmin identified by backupadminpwd;

User created.

SQL> grant sysbackup to c##backupadmin;

Grant succeeded.

SQL> create user c##monitor identified by monitorpassword;

User created.

-- grants as needed for the monitoring account

Switching to the MASTER PDB in the next step, I create the components that I might need, such as STATSPACK or SQLT, or anything else your standards require you to put in there. For example:

SQL> alter session set container = MASTER;
Session altered.
SQL> show con_name

CON_NAME
------------------------------
MASTER

SQL> create tablespace statpack_tbs datafile
  2  '/u01/oradata/CDBTEMPL/MASTER/statpack_tbs01.dbf'
  3  size 250m autoextend on next 250m maxsize 5g;

Tablespace created.

SQL> start ?/rdbms/admin/spcreate

Choose the Default tablespace for the PERFSTAT user
---------------------------------------------------
Below is the list of online tablespaces in this database which can
store user data.  Specifying the SYSTEM tablespace for the user's
default tablespace will result in the installation FAILING, as
using SYSTEM for performance data is not supported.

Choose the PERFSTAT users's default tablespace.  This is the tablespace
in which the STATSPACK tables and indexes will be created.

TABLESPACE_NAME                CONTENTS  STATSPACK DEFAULT TABLESPACE
------------------------------ --------- ----------------------------
STATPACK_TBS                   PERMANENT
SYSAUX                         PERMANENT *
USERS                          PERMANENT

Pressing <return> will result in STATSPACK's recommended default
tablespace (identified by *) being used.

Enter value for default_tablespace: STATPACK_TBS

Using tablespace STATPACK_TBS as PERFSTAT default tablespace.

Choose the Temporary tablespace for the PERFSTAT user
-----------------------------------------------------
Below is the list of online tablespaces in this database which can
store temporary data (e.g. for sort workareas).  Specifying the SYSTEM
tablespace for the user's temporary tablespace will result in the
installation FAILING, as using SYSTEM for workareas is not supported.

Choose the PERFSTAT user's Temporary tablespace.

TABLESPACE_NAME                CONTENTS  DB DEFAULT TEMP TABLESPACE
------------------------------ --------- --------------------------
TEMP                           TEMPORARY *

Pressing <return> will result in the database's default Temporary
tablespace (identified by *) being used.

Enter value for temporary_tablespace: TEMP

Using tablespace TEMP as PERFSTAT temporary tablespace.

... Creating PERFSTAT user

Create the Template

With the Golden Image complete, it is time to create a template from it. You might think that all you need to do is execute dbca and create a clone template, but wait! I did this too and ran into an issue. Fast-forwarding a bit (don’t worry, I’ll detail the steps later): when I created a database off the template, dbca errored out with this message:

[oracle@server1 ~]$ dbca -silent -createDatabase -templateName CDB_MASTER_20160615.dbc \
> -gdbName CDB1 -sysPassword sysPassword -systemPassword systemPassword \
> -datafileDestination /u01/oradata -recoveryAreaDestination /u01/fra -totalMemory 4096
Copying database files
1% complete
3% complete
37% complete
Creating and starting Oracle instance
40% complete
45% complete
50% complete
55% complete
DBCA Operation failed.
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/CDB1/CDB1.log" for further details.

[oracle@server1 ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/CDB1/CDB1.log

Unique database identifier check passed.

/u01/ has enough space. Required space is 8660 MB , available space is 14332 MB.
File Validations Successful.
Copying database files
DBCA_PROGRESS : 1%
DBCA_PROGRESS : 3%
DBCA_PROGRESS : 37%
Creating and starting Oracle instance
DBCA_PROGRESS : 40%
DBCA_PROGRESS : 45%
DBCA_PROGRESS : 50%
DBCA_PROGRESS : 55%
Error while executing "/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/admin/dbmssml.sql".
Refer to "/u01/app/oracle/cfgtoollogs/dbca/CDB1/dbmssml0.log" for more details.
Error in Process: /u01/app/oracle/product/12.1.0.2/dbhome_1/perl/bin/perl
DBCA_PROGRESS : DBCA Operation failed.

Hmmm, not too good. Funnily enough there was no content in the log file mentioned in the error message, but there was a similarly named file nearby:

[oracle@server1 ~]$ ls -l /u01/app/oracle/cfgtoollogs/dbca/CDB1/dbmssml*
-rw-r-----. 1 oracle oinstall   0 Jun 15 11:51 /u01/app/oracle/cfgtoollogs/dbca/CDB1/dbmssml0.log
-rw-r-----. 1 oracle oinstall 521 Jun 15 11:51 /u01/app/oracle/cfgtoollogs/dbca/CDB1/dbmssml_catcon_11711.lst
[oracle@server1 ~]$

Looking at dbmssml_catcon_11711.lst I saw this message:

[oracle@server1 ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/CDB1/dbmssml_catcon_11711.lst
catcon: See /u01/app/oracle/cfgtoollogs/dbca/CDB1/dbmssml*.log files for output generated by scripts
catcon: See /u01/app/oracle/cfgtoollogs/dbca/CDB1/dbmssml_*.lst files for spool files, if any
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

        catconInit: start logging catcon output at 2016-06-15 11:51:11

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

validate_con_names: MASTER is not open
catconInit: Unexpected error returned by validate_con_names

So it seems I have to make sure that the clone database opens the MASTER PDB as part of the process. By default, a PDB is in mounted mode when the database instance (re)starts, but I can force it to open. That’s not hard: since 12.1.0.2 I can save a PDB’s state like this:

SQL> alter pluggable database master open;

Pluggable database altered.

SQL> alter pluggable database master save state;

Pluggable database altered.

SQL> SELECT con_name, instance_name, state FROM dba_pdb_saved_states;

CON_NAME                       INSTANCE_NAME        STATE
------------------------------ -------------------- --------------
MASTER                         CDBTEMPL             OPEN

That’s it: a quick test reveals that the MASTER PDB opens when I bounce the instance:

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size                  2932632 bytes
Variable Size             377487464 bytes
Database Buffers          687865856 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MASTER                         READ WRITE NO
SQL>

Now I can create the clone!

[oracle@server1 ~]$ dbca -createCloneTemplate -help
Create a clone template from an existing database by specifying the following parameters:
        -createCloneTemplate
                -sourceSID    <Database unique name for RAC Database and SID for Single
                               Instance Database>
                -templateName      <new template name>
                [-sysDBAUserName     <user name  with SYSDBA privileges>
                 -sysDBAPassword     <password for sysDBAUserName user name>]
                [-maxBackupSetSizeInMB     <Maximum backup set size (MB)>]
                [-rmanParallelism  <Parallelism value>]
                [-datafileJarLocation       <directory to place the datafiles in a compressed
                                             format>]

[oracle@server1 ~]$ dbca -silent -createCloneTemplate -sourceSID CDBTEMPL -templateName \
> CDB_MASTER_20160615 -sysDBAPassword sysDBAPassword \
> -rmanParallelism 2 -datafileJarLocation /u01/templates

Gathering information from the source database
4% complete
8% complete
13% complete
17% complete
22% complete
Backup datafiles
28% complete
88% complete
Creating template file
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/silent.log_2016-06-15_11-31-28-AM"
for further details.

[oracle@server1 ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/silent.log_2016-06-15_11-31-28-AM
Gathering information from the source database
DBCA_PROGRESS : 4%
DBCA_PROGRESS : 8%
DBCA_PROGRESS : 13%
DBCA_PROGRESS : 17%
DBCA_PROGRESS : 22%
Backup datafiles
DBCA_PROGRESS : 28%
DBCA_PROGRESS : 88%
Creating template file
DBCA_PROGRESS : 100%
The generation of the clone database template "CDB_MASTER_20160615" is successful.

That was a success!

Create a CDB from Template

This step is almost exactly the same as for the non-custom template. All you need to do is to invoke dbca in silent mode and point it to the new template. Here is an example:

[oracle@server1 ~]$ dbca -silent -createDatabase -templateName CDB_MASTER_20160615.dbc \
> -gdbName CDB1 -sysPassword sysPassword -systemPassword systemPassword \
> -datafileDestination /u01/oradata -recoveryAreaDestination /u01/fra -totalMemory 4096
Copying database files
1% complete
3% complete
37% complete
Creating and starting Oracle instance
40% complete
45% complete
50% complete
55% complete
56% complete
60% complete
62% complete
Completing Database Creation
66% complete
70% complete
73% complete
85% complete
96% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/CDB1/CDB10.log" for further details.

[oracle@server1 ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/CDB1/CDB10.log

Unique database identifier check passed.

/u01/ has enough space. Required space is 10270 MB , available space is 13312 MB.
File Validations Successful.
Copying database files
DBCA_PROGRESS : 1%
DBCA_PROGRESS : 3%
DBCA_PROGRESS : 37%
Creating and starting Oracle instance
DBCA_PROGRESS : 40%
DBCA_PROGRESS : 45%
DBCA_PROGRESS : 50%
DBCA_PROGRESS : 55%
DBCA_PROGRESS : 56%
DBCA_PROGRESS : 60%
DBCA_PROGRESS : 62%
Completing Database Creation
DBCA_PROGRESS : 66%
DBCA_PROGRESS : 70%
DBCA_PROGRESS : 73%
DBCA_PROGRESS : 85%
DBCA_PROGRESS : 96%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/CDB1.
Database Information:
Global Database Name:CDB1
System Identifier(SID):CDB1

Unlike my first attempt, this worked. But there are a couple of post-creation steps to be taken now. First, I would like to use MASTER as the golden master copy for each PDB; that requires the PDB to be opened read-only. But I would also like to check that everything worked:

[oracle@server1 ~]$ echo $ORACLE_SID $ORACLE_HOME
CDB1 /u01/app/oracle/product/12.1.0.2/dbhome_1
[oracle@server1 ~]$ sq

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jun 15 13:07:51 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MASTER                         READ WRITE NO
SQL> alter session set container = MASTER;

Session altered.

SQL> select username, account_status
  2    from dba_users
  3   where oracle_maintained <> 'Y';

USERNAME                       ACCOUNT_STATUS
------------------------------ --------------------------------
SQLTXPLAIN                     OPEN
SQLTXADMIN                     EXPIRED & LOCKED
PERFSTAT                       OPEN
C##BACKUPADMIN                 EXPIRED & LOCKED
C##MONITOR                     EXPIRED & LOCKED
PDBMASTER                      OPEN

6 rows selected.

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
STATPACK_TBS
SQLT_TBS

This seems to have worked. Notice how the common users are expired and locked. Now to enable cloning, I simply repeat the save state command but this time I want the PDB to be open read-only.

SQL> alter pluggable database master close immediate;

Pluggable database altered.

SQL> alter pluggable database master open read only;

Pluggable database altered.

SQL> alter pluggable database master save state;

Pluggable database altered.

-- create a new PDB

SQL> create pluggable database userpdb from master...;

Pluggable database created.

SQL> alter pluggable database userpdb open;

Pluggable database altered.

SQL> conn perfstat/perfstatPWD@localhost/userpdb
Connected.
SQL> show con_name

CON_NAME
------------------------------
USERPDB
SQL>

That seems to have worked as well. Of course there are more variations on the theme, and you certainly need to perform intensive testing before you can roll out a strategy like this, but I think you get the idea. The process cannot stop here, of course; there are many more important tasks (backup, monitoring, auditing, logging, …) to be configured, but those are left out for now.


Using OSWatcher for system diagnostics

Posted by Martin Bach on June 14, 2016

OSWatcher is a superb tool that gathers information about your system in the background and stores it in an (optionally compressed) archive directory. As an Oracle DBA I like the analogy with Statspack: you make the tool available on the host in a location with – very important – enough available disk space, and then start it. Most users add it to the startup mechanism their O/S uses (SysV init, upstart, or systemd, for example, on Linux) to allow it to start in the background. OSWatcher will then gather a lot of the interesting O/S-related statistics that you so desperately need in an “after the fact” situation. There are plenty of situations in which you might want that information.

Example use cases

Say, for example, the system experienced an outage: a RAC node rebooted. The big question is “why?”. There might be information in the usual Grid Infrastructure logs, but it might be inconclusive, and a look at the O/S can be necessary. The granularity sar offers is often not enough; you are bound to lose detail in the 10-minute averages. What if you had 30-second granularity to see what happened more or less right before the crash?

Or maybe you want to see the system’s activity during testing for a new software release? And then compare to yesterday’s test cycle? The use cases are endless, and I’m sure you can think of one, too.

ExaWatcher as the role model

In Expert Oracle Exadata (both editions) the authors covered ExaWatcher quite extensively. ExaWatcher is (not really surprisingly) OSWatcher for Exadata, or at least sort of. And it’s there by default on cells and compute nodes. I have always taken the stance that if something is implemented in Exadata then someone must have spent a few brain cells working out why it’s a good idea to deploy that tool to every Exadata in the world. That’s a big responsibility to have and you’d hope that there was some serious thinking behind decisions to deploy such a tool.

OSWatcher deployment

I deploy OSWatcher by default on every VM I create in my lab. It’s part of the automated build process I use, which ensures that I’ll have a log of recent O/S information whenever I need it. Again the analogy to Statspack: just like OSWatcher, Statspack isn’t installed by default and has to be deployed manually. In so many cases, however, it was deployed only after the problem occurred, and last night’s problems may not re-appear in the following execution cycle. It is better to have OSWatcher deployed and recording information proactively.

Deploying OSWatcher is dead simple: get it from MOS and copy the tarball to the host you want to monitor. In this post I am using Oracle Linux 7.1, at first without the compatibility tools for the network stack. I customarily use the minimal installation, which doesn’t come with net-tools. What are net-tools again? They are the tools we have come to love over the years:

[oracle@rac12node2 ~]$ yum info net-tools
Available Packages
Name        : net-tools
Arch        : x86_64
Version     : 2.0
Release     : 0.17.20131004git.el7
Size        : 303 k
Repo        : local
Summary     : Basic networking tools
URL         : http://sourceforge.net/projects/net-tools/
License     : GPLv2+
Description : The net-tools package contains basic networking tools,
            : including ifconfig, netstat, route, and others.
            : Most of them are obsolete. For replacement check iproute package.

But they are obsolete. You could argue that now might be just the time to get used to iproute as it’s the future, but for OSWatcher on OL 7 net-tools is needed, and I installed it. If you don’t, OSWatcher won’t abort, but it can’t collect the information netstat and ifconfig provide.
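Installing the package is a one-liner, assuming a configured yum repository:

[root@rac12node2 ~]# yum install -y net-tools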

In case you are using RAC you need to make OSWatcher aware of the private interconnect; this is detailed in the OSWatcher user’s guide.

Starting OSWatcher

Once the tarball is unzipped in a location of your choice you need to start OSWatcher. The script startOSWbb.sh is responsible for starting the tool, and it can take a number of arguments that are explained both in the shell script itself and in the user guide available from MOS. The first parameter sets the snapshot interval in seconds. The second indicates the archive duration in hours (i.e. how long the information should be stored), and the optional third names the compression tool to use for compressing the raw data. Parameters 2 and 3 have direct implications for space usage. The optional fourth parameter sets a custom location for the archive directory that will contain the data collected.

I want 30-second snapshot intervals and a retention of 2 days, or 48 hours. Be aware that using a tool such as compress or gzip as the third argument will save space by compressing older files, but when you are trying to use the analyser (more on that later) you’ll have to decompress the files first. Clever use of the find(1) command can help you uncompress only the raw data files you need.
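As a sketch, and assuming gzip was chosen as the third argument as in the start example below, something like this uncompresses just the iostat data written during the last 24 hours:

[oracle@rac12node1 oswbb]$ find archive/oswiostat -name '*.gz' -mtime -1 -exec gunzip {} \;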

WARNING: Depending on the system you monitor and the amount of activity, you might end up generating quite a lot of data, and by a lot I mean it! The location you choose for storing the archive directory must be independent of anything important. In other words, should the location fill up despite all the efforts you put in to prevent that from happening, it must not have an impact on availability. It is imperative to keep a keen eye on space usage. You certainly don’t want to fill up important mount points with performance information.


Starting OSWatcher from the command line

With that said it’s time to start the tool:

[oracle@rac12node1 oswbb]$ ./startOSWbb.sh 30 48 gzip
[oracle@rac12node1 oswbb]$ Info...Zip option IS specified.
Info...OSW will use gzip to compress files.
Setting the archive log directory to/some/mount/point/oswbb/archive

Testing for discovery of OS Utilities...
VMSTAT found on your system.
IOSTAT found on your system.
MPSTAT found on your system.
IFCONFIG found on your system.
NETSTAT found on your system.
TOP found on your system.
Warning... /proc/slabinfo not found on your system.

Testing for discovery of OS CPU COUNT
oswbb is looking for the CPU COUNT on your system
CPU COUNT will be used by oswbba to automatically look for cpu problems

CPU COUNT found on your system.
CPU COUNT = 2

Discovery completed.

Starting OSWatcher v7.3.3  on Wed Dec 16 10:02:49 GMT 2015
With SnapshotInterval = 30
With ArchiveInterval = 48

OSWatcher - Written by Carl Davis, Center of Expertise,
Oracle Corporation
For questions on install/usage please go to MOS (Note:301137.1)

...

Data is stored in directory: /some/mount/point/oswbb/archive

Starting Data Collection...

oswbb heartbeat:Wed Dec 16 10:02:54 GMT 2015

Launching OSWatcher from the command line is probably the exception; most users will start OSW during the boot process as part of the multi-user runlevel.
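On a systemd-based distribution such as Oracle Linux 7 this could be done with a small unit file. The sketch below is my own and not part of the OSWatcher distribution (check MOS for the officially documented autostart method); the paths match the example above and need adjusting to your installation:

# /etc/systemd/system/oswbb.service -- my own sketch, not shipped with OSWatcher
[Unit]
Description=OSWatcher Black Box
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=oracle
WorkingDirectory=/some/mount/point/oswbb
ExecStart=/some/mount/point/oswbb/startOSWbb.sh 30 48 gzip
ExecStop=/some/mount/point/oswbb/stopOSWbb.sh

[Install]
WantedBy=multi-user.target

After a systemctl daemon-reload, systemctl enable oswbb (run as root) activates it for the next boot.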

Note that if you haven’t installed net-tools on Oracle Linux 7 you will see errors when starting OSWatcher, as it can’t find netstat and ifconfig. If memory serves me right, you get net-tools by default with previous releases of Oracle Linux, so this is OL 7 specific.

You will also notice a warning that /proc/slabinfo was not found. It does exist, but the permissions have changed to 0400 and the file is owned by root:root. Not having slab information is not a problem for me; I always struggle to make sense of it anyway.

What it does

The OSWatcher daemon now happily monitors my system and places information into the archive directory:

[oracle@rac12node1 oswbb]$ ls -l archive/
total 40
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswifconfig
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswiostat
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswmeminfo
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswmpstat
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswnetstat
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswprvtnet
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswps
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswslabinfo
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswtop
drwxr-xr-x. 2 oracle oinstall 4096 Dec 16 10:02 oswvmstat

There is a directory per tool used: ifconfig, iostat, meminfo, …, vmstat. Inside each directory you find that tool’s output. The file format is plain text (all files except the current one are compressed if you specified gzip or compress as the third parameter to the start script), and each snapshot is indicated by a line starting with zzz. Here is an example for iostat, taken while Swingbench was running.

...

zzz ***Wed Dec 16 10:32:30 GMT 2015
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          12.23    0.00    7.45   76.06    0.00    4.26

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.00     0.00    0.00    2.00     0.00     4.00     4.00     0.24  128.00    0.00  128.00 121.00  24.20
vdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
vdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
vdd               0.00     0.00    2.00    1.00     1.00     0.50     1.00     0.11   36.67   33.00   44.00  36.67  11.00
vde               0.00     0.00    2.00    1.00     1.00     0.50     1.00     0.04   14.67    0.00   44.00  14.67   4.40
vdf               0.00     0.00    2.00    1.00     1.00     0.50     1.00     0.10   31.67   25.50   44.00  31.67   9.50
vdg               0.00     0.00   43.00   23.00   400.00    97.00    15.06     3.83   57.05   62.37   47.09  15.15 100.00
vdh               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.24    0.00    0.00    0.00   0.00  24.20
dm-2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
vdi               0.00     0.00   39.00   23.00   320.00    78.50    12.85     1.11   17.40    2.85   42.09  12.39  76.80

zzz ***Wed Dec 16 10:33:00 GMT 2015
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          10.58    0.00    7.94   81.48    0.00    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.00    17.00    0.00    1.00     0.00    72.00   144.00     0.01    1.00    0.00    1.00  11.00   1.10
vdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
vdc               0.00     0.00   12.00    2.00   192.00    16.00    29.71     0.13    9.57    8.25   17.50   9.57  13.40
vdd               0.00     0.00    2.00    2.00     1.00     0.50     0.75     0.03   14.50    8.50   20.50   7.25   2.90
vde               0.00     0.00    2.00    2.00     1.00     0.50     0.75     0.02   13.25    2.00   24.50   6.00   2.40
vdf               0.00     0.00    2.00    2.00     1.00     0.50     0.75     0.11   34.50   23.00   46.00  27.25  10.90
vdg               0.00     0.00   32.00   31.00   272.00   123.00    12.54     4.96   80.46   90.50   70.10  15.68  98.80
vdh               0.00     0.00    0.00    1.00     0.00     0.00     0.00     0.01   37.00    0.00   37.00   8.00   0.80
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00   19.00     0.00    76.00     8.00     0.03    0.95    0.00    0.95   0.63   1.20
dm-2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
vdi               0.00     0.00   22.00   22.00   184.00    78.50    11.93     1.28   30.45    2.00   58.91  17.23  75.80

zzz ***Wed Dec 16 10:33:30 GMT 2015
...

As you can see there are snapshots every 30 seconds, just as requested. Let’s not focus on the horrendous I/O times: this is a virtualised RAC system on a host that struggles with a CPU bottleneck, and it does not have underlying SSD for the VMs…

You can navigate the archive directory and browse the files you are interested in. They all have timestamps in their names, making it easy to identify each file’s contents.
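And since every snapshot inside a file begins with zzz, a quick grep lists all the sample times a given file covers, as in this example using the iostat data from above:

[oracle@rac12node1 oswbb]$ grep '^zzz' archive/oswiostat/rac12node1_iostat_15.12.16.1000.dat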

The OSWatcher Analyser

Looking at text files is one way of digesting the information. There is another option: the OSWatcher analyser. Here is an example of its use (it requires the DISPLAY variable to be set):

[oracle@rac12node1 oswbb]$ java -jar oswbba.jar -i archive -b "Dec 16 10:25:00 2015" \
> -e "Dec 16 10:35:00 2015" -P swingbench -s

Validating times in the archive...

Scanning file headers for version and platform info...


Parsing file rac12node1_iostat_15.12.16.1000.dat ...


Parsing file rac12node1_vmstat_15.12.16.1000.dat ...


Parsing file rac12node1_netstat_15.12.16.1000.dat ...


Parsing file rac12node1_top_15.12.16.1000.dat ...


Parsing file rac12node1_ps_15.12.16.1000.dat ...


A new analysis file analysis/rac12node1_1450262374129.txt has been created.
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Run_Queue.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Block_Queue.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Cpu_Idle.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Cpu_System.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Cpu_User.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Cpu_Wa.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Cpu_Interrupts.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Context_Switches.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Memory_Swap.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Memory_Free.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_Memory_Page_In_Rate.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_IO_ST.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_IO_RPS.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_IO_WPS.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_IO_PB.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_IO_PBTP_1.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_IO_PBTP_2.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_IO_PBTP_3.gif
Generating file profile/rac12node1_swingbench/OSW_profile_files/OSWg_OS_IO_TPS.gif
[oracle@rac12node1 oswbb]$ 

In this example I asked the analyser tool to limit the search to a specific time range and to create a “profile”. Refer to the MOS note about the OSWatcher Analyser for more information about the command line options.

The analyser will create an HTML “profile”, which is stored in the profile directory, as you can see in the last part of the output. If you transfer this to a system with a web browser you can enjoy the graphical representation of the raw data. Very neat if you want to check at the O/S level whether anything unusual might have happened.

Note also that a new analysis file was created; have a look at it, as it can provide very valuable information. In my example the following contents were recorded:

############################################################################
# Contents Of This Report:
#
# Section 1: System Status
# Section 2: System Slowdowns
#   Section 2.1: System Slowdown RCA Process Level Ordered By Impact
# Section 3: System General Findings
# Section 4: CPU Detailed Findings
#   Section 4.1: CPU Run Queue:
#   Section 4.2: CPU Utilization: Percent Busy
#   Section 4.3: CPU Utilization: Percent Sys
# Section 5: Memory Detailed Findings
#   Section 5.1: Memory: Process Swap Queue
#   Section 5.2: Memory: Scan Rate
#   Section 5.3  Memory: Page In:
#   Section 5.4  Memory: Page Tables (Linux only):
#   Section 5.5: Top 5 Memory Consuming Processes Beginning
#   Section 5.6: Top 5 Memory Consuming Processes Ending
# Section 6: Disk Detailed Findings
#   Section 6.1: Disk Percent Utilization Findings
#   Section 6.2: Disk Service Times Findings
#   Section 6.3: Disk Wait Queue Times Findings
#   Section 6.4: Disk Throughput Findings
#   Section 6.5: Disk Reads Per Second
#   Section 6.6: Disk Writes Per Second
#   Section 6.7: Disk Percent CPU waiting on I/O
# Section 7: Network Detailed Findings
#   Section 7.1  Network Data Link Findings
#   Section 7.2: Network IP Findings
#   Section 7.3: Network UDP Findings
#   Section 7.4: Network TCP Findings
# Section 8: Process Detailed Findings
#   Section 8.1: PS Process Summary Ordered By Time
#   Section 8.2: PS for Processes With Status = D or T Ordered By Time
#   Section 8.3: PS for (Processes with CPU > 0) When System Idle CPU < 30% Ordered By Time
#   Section 8.4: Top VSZ Processes Increasing Memory Per Snapshot
#   Section 8.5: Top RSS Processes Increasing Memory Per Snapshot
#
############################################################################

Summary

OSWatcher is a great tool that can be used on Oracle Linux 7 and other supported platforms. It provides a wealth of information in both textual and graphical format, and it is invaluable in the many situations where you need to look retrospectively at the state of the O/S during a given period.

As always, read the documentation found on MOS and make sure you understand the implications of using the tool. Also make sure you test it thoroughly first. Please ensure that you have sufficient disk space for your archive directory on a mount point that cannot affect the availability of your system.


Little things worth knowing: auto-DOP or hint-which one takes precedence?

Posted by Martin Bach on June 2, 2016

This is just another short post about one of the little things worth knowing. Assume you are on 12.1.0.2, for example, and you want to know if parallel_degree_limit reliably keeps the degree of parallelism at bay, for all sessions and queries across the board. After all, every machine has finite CPU (and other) resources that shouldn’t be exhausted. Setting parallel_degree_limit of course only has an effect if you set parallel_degree_policy to limited, auto or adaptive. Setting it to auto or adaptive is the Big Knob that enables Automatic Degree of Parallelism (DOP), and the post assumes that Auto DOP is enabled for the session. In this configuration, what happens to hinted statements?

Documentation

The documentation (12c Reference Guide, section on parallel_degree_limit) states that you can provide an integer in addition to “CPU” and “IO”:

A numeric value for this parameter specifies the maximum degree of parallelism the optimizer can choose for a SQL statement when automatic degree of parallelism is active. Automatic degree of parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to ADAPTIVE, AUTO, or LIMITED.

Without further context this may sound a bit ambiguous: does enabling Auto DOP plus setting parallel_degree_limit to 8 ensure no query can exceed that DOP? And does that setting actually stop users from going over and above that value?

Testing

I am using a 12.1.0.2 RAC database for the test. I would like to limit the maximum DOP of a given statement to 8. Here are my settings for my lab system (emphasis on lab):

SQL> show parameter parallel

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback         string      LOW
parallel_adaptive_multi_user         boolean     FALSE
parallel_automatic_tuning            boolean     FALSE
parallel_degree_level                integer     100
parallel_degree_limit                string      8
parallel_degree_policy               string      AUTO
parallel_execution_message_size      integer     16384
parallel_force_local                 boolean     TRUE
parallel_instance_group              string
parallel_io_cap_enabled              boolean     FALSE
parallel_max_servers                 integer     128
parallel_min_percent                 integer     0
parallel_min_servers                 integer     16
parallel_min_time_threshold          string      AUTO
parallel_server                      boolean     TRUE
parallel_server_instances            integer     2
parallel_servers_target              integer     128
parallel_threads_per_cpu             integer     2
recovery_parallelism                 integer     0

Now I need a table to query – after looking around in my demo schema I found T1, which looks like a good candidate.

SQL> select table_name,num_rows,partitioned,compression,degree from tabs where table_name = 'T1';

TABLE_NAME                       NUM_ROWS PAR COMPRESS DEGREE
------------------------------ ---------- --- -------- ----------
T1                               32000000 NO  DISABLED          1

Unlike when setting parallel_degree_policy to limited, I don’t need to worry about decorating the table with a default DOP. With parallel_degree_policy set to auto, a parallel plan can be chosen even with the DOP on the table set to 1, provided the optimiser reckons that the statement’s execution time exceeds parallel_min_time_threshold. In my case I left that parameter at its default, which is 10 seconds. When querying the table I do not have to specify a hint to enable PX, or define PX at the session level, as you can see in this example:

SQL> select count(*) from T1;

  COUNT(*)
----------
  32000000

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
SQL_ID  byb4cbw4vy1cw, child number 1
-------------------------------------
select count(*) from T1

Plan hash value: 2755517917

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name           | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                |       |   200K(100)|          |        |      |            |
|   1 |  SORT AGGREGATE                |                |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR               |                |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000       |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE             |                |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR         |                |    32M|   200K  (1)| 00:00:04 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS STORAGE FULL| T1             |    32M|   200K  (1)| 00:00:04 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - storage(:Z>=:Z AND :Z<=:Z)

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 8 because of degree limit

The important bit here is the information in the Note section: Oracle computed the DOP to be 8 because of the degree limit. Exactly what I wanted.

Now what if I use the hint?

No changes to the setup; this is the next statement I typed in:

SQL> select /*+ parallel(32) */ count(*) from T1;


  COUNT(*)
----------
  32000000

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  9d9aha2bdd5zc, child number 0
-------------------------------------
select /*+ parallel(32) */ count(*) from T1

Plan hash value: 2755517917

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name           | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                |       | 50217 (100)|          |        |      |            |
|   1 |  SORT AGGREGATE                |                |     1 |            |          |        |      |            |
|   2 |   PX COORDINATOR               |                |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)         | :TQ10000       |     1 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE             |                |     1 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR         |                |    32M| 50217   (1)| 00:00:01 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS STORAGE FULL| T1             |    32M| 50217   (1)| 00:00:01 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - storage(:Z>=:Z AND :Z<=:Z)

Note
-----
   - Degree of Parallelism is 32 because of hint

So based on this experiment it appears as if the hint took precedence over parallel_degree_limit in a session that had Auto DOP enabled. That doesn’t mean the DBA is out of luck; other tools come to mind to limit the DOP, such as Database Resource Manager (DBRM). I have written about using DBRM to limit the DOP for all users of a consumer group some time ago. If you read that post, make sure you look at Randolf’s comments.
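For reference, here is a minimal sketch of such a DBRM setup (the plan name and comments are made up for illustration; my older post has the full story):

begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_plan(
    plan    => 'DOP_CAP_PLAN',
    comment => 'cap the degree of parallelism');
  -- OTHER_GROUPS is mandatory in every plan and catches all sessions
  -- not mapped to another consumer group
  dbms_resource_manager.create_plan_directive(
    plan                     => 'DOP_CAP_PLAN',
    group_or_subplan         => 'OTHER_GROUPS',
    comment                  => 'limit everyone to a DOP of 8',
    parallel_degree_limit_p1 => 8);
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/

-- activate the plan
alter system set resource_manager_plan = 'DOP_CAP_PLAN';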


Experimenting with the ZFSSA’s snapshot capability using the simulator part 2

Posted by Martin Bach on May 31, 2016

In my last post I wrote down some notes about my experience while experimenting with the ZFSSA simulator. A simulator is a great way to get familiar with the user interface and general usability of a product. What I wanted to find out using the ZFSSA simulator was the answer to the question: “what happens to a clone of a database when I roll the master copy forward?”

In the first part of the series I explained how I created a clone of a database, named CLONE1. It is based on a backup of my NCDB database. On top of the backup I created a snapshot as the basis for my clone. A clone in ZFS(SA) terminology is a writeable snapshot, and CLONE1 uses it. But what would happen to CLONE1 if I modified the source database, NCDB? And can I create a new clone, CLONE2, based on a new backup of the source, without modifying the first clone? Let’s try this.

Changing the Source Database

Let’s change the source database a little by creating a new tablespace with a couple of data files. To add a little bit of spice to the scenario I decided to create a new user. Its only schema object will be created on the new tablespace.

[oracle@oraclelinux7 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Mar 3 17:25:57 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create tablespace dropme datafile size 5m;

Tablespace created.

SQL> alter tablespace dropme add datafile size 5m;

Tablespace altered.

SQL> grant dba to new_user identified by ...;

Grant succeeded.

SQL> create table new_user.t1 tablespace dropme as 
  2  select * from dba_source where rownum <= 100; 

Table created. 

SQL> 

This should be enough for this quick test.

Backing Up

The next step is to roll my existing image copies forward to reflect the changes. That’s not too hard: essentially you create an incremental backup “for recover of copy” …, followed by a recover “copy of database with tag” …. When the backup/recover command combination has completed, you also need to back up the archived logs in a second step.
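In skeleton form the RMAN commands look like the sketch below. The tag name and format strings are my own choices for illustration and will not reproduce the exact file names shown further down:

run {
  # creates image copies on the first run, incremental backups thereafter
  backup incremental level 1 for recover of copy with tag 'zfssa_bkp'
    database format '/zfssa/ncdb_bkp/data/%U';
  # applies the most recent incremental backup to the image copies
  recover copy of database with tag 'zfssa_bkp';
}
# second step: back up the archived logs
backup archivelog all format '/zfssa/ncdb_bkp/archive/%U';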

At the risk of repeating myself, please be careful: adding this procedure to an existing backup strategy might have adverse side effects. As always, make sure you understand the implications of this technique and its impact. Test thoroughly!

As with the previous backup I stored the incremental backup “for recover” of my image copies in ‘/zfssa/ncdb_bkp/data/’. That’s a location mounted via NFS from the ZFSSA. This is the same location I previously used for the image copies. There is nothing too exciting to report about the backup.

Just as with the previous post my archived logs went to ‘/zfssa/ncdb_bkp/archive/’ to complete the preparations. Here are the files that were created:

[oracle@oraclelinux7 ~]$ ls -lrt /zfssa/ncdb_bkp/data/
total 3035612
-rw-r-----. 1 oracle asmdba    7192576 Mar  3 17:26 6qqvijpt_1_1
-rw-r-----. 1 oracle asmdba    3702784 Mar  3 17:26 6pqvijps_1_1
-rw-r-----. 1 oracle asmdba    1851392 Mar  3 17:40 6tqvikjv_1_1
-rw-r-----. 1 oracle asmdba     442368 Mar  3 17:40 6sqvikjv_1_1
-rw-r-----. 1 oracle asmdba    5251072 Mar  3 17:40 data_D-NCDB_I-3358649481_TS-DROPME_FNO-7_6uqvikk2
-rw-r-----. 1 oracle asmdba  650125312 Mar  3 17:40 data_D-NCDB_I-3358649481_TS-SYSAUX_FNO-3_6kqvie3k
-rw-r-----. 1 oracle asmdba    5251072 Mar  3 17:40 data_D-NCDB_I-3358649481_TS-DROPME_FNO-2_6rqvijq0
-rw-r-----. 1 oracle asmdba  828383232 Mar  3 17:40 data_D-NCDB_I-3358649481_TS-SYSTEM_FNO-1_6jqvie1q
-rw-r-----. 1 oracle asmdba    5251072 Mar  3 17:40 data_D-NCDB_I-3358649481_TS-USERS_FNO-6_6mqvie57
-rw-r-----. 1 oracle asmdba  293609472 Mar  3 17:40 data_D-NCDB_I-3358649481_TS-UNDOTBS1_FNO-4_6lqvie4d
-rw-r-----. 1 oracle asmdba 1304174592 Mar  3 17:40 data_D-NCDB_I-3358649481_TS-EXAMPLE_FNO-5_6iqvie1p
[oracle@oraclelinux7 ~]$ ls -lrt /zfssa/ncdb_bkp/archive/
total 8204
-r--r-----. 1 oracle asmdba    1024 Mar  3 16:06 1_118_905507850.arc
-r--r-----. 1 oracle asmdba 2869760 Mar  3 16:06 1_117_905507850.arc
-r--r-----. 1 oracle asmdba    1024 Mar  3 17:49 1_120_905507850.arc
-r--r-----. 1 oracle asmdba 5426688 Mar  3 17:49 1_119_905507850.arc
[oracle@oraclelinux7 ~]$ 

The image copies are now current, and you can see the two data files for the new tablespace “DROPME” that didn’t exist before.

Creating the Second clone Database

With the preparations in place it is time to see if I can create a new clone that reflects the new tablespace and user. I would also like to see if the following steps have any implications for my database CLONE1.

On the ZFSSA Simulator

I headed over to the ZFSSA (simulator) and navigated to shares -> projects. After selecting “NCDB_BKP” I chose Snapshots and hit the (+) button to create snap1. You should see snap0 on that view as well if you are following the examples.

The next task is to create a new project. You should already see the projects pane on the left hand side. Click on the (+) sign next to ALL to create a new one. I named the project NCDB_CLONE2 to stay in line with the naming convention I used previously. With the project created, you should set the properties as needed. I moved the mount point to /export/ncdb_clone2/. On a real ZFSSA you’d set others as well, but that is out of scope of this post. Consult the relevant white papers for more information.

Just as described in the first post, you now need to create clones based on snap1. To do so, switch back to the NCDB_BKP project and select (the list of) shares. You should see alert, archive, data and redo. Create clones for each by following these steps per share:

  • Hover the mouse over the share name
  • Click on the share’s pencil icon to edit share properties
  • Select “snapshots”
  • Hover the mouse over the snapshot name, snap1
  • In the “Clones” column, click on the [+] sign
  • In the resulting pop-up, make sure to create the clone in NCDB_CLONE2 and give it the same name as the share you are cloning

The end result should be 4 shares shown in the new project, all of them based on snapshots of their cousins from NCDB_BKP. That concludes the work on the ZFSSA for now.

NB: you can script this :)
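For instance, the snapshot can be taken over ssh against the appliance’s CLI. Treat the following as an untested sketch based on my reading of the ZFSSA CLI documentation; verify the command tree, and particularly the clone syntax, with the built-in help on your software release:

# hypothetical: create the project-level snapshot snap1 on NCDB_BKP
ssh root@192.168.56.101 <<'EOF'
shares select NCDB_BKP snapshots snapshot snap1
EOF
# the per-share clones can be scripted the same way; check "help" in the
# shares/snapshots context for the clone syntax of your release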

On the Database Server

If you haven’t done so yet, create the directories to mount the new shares. Everything I mount from the ZFSSA goes to /zfssa on the database server, and as a result I have my files in /zfssa/ncdb_clone2/{alert,archive,data,redo}. Update /etc/fstab accordingly and mount the shares.
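For illustration, the corresponding fstab entries might look like this sketch; the NFS mount options shown are the commonly recommended ones for Oracle database files on NFS, so validate them against the current white papers for your platform:

# /etc/fstab excerpt for the clone's shares (a sketch, options to be validated)
192.168.56.101:/export/ncdb_clone2/data     /zfssa/ncdb_clone2/data     nfs  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600  0 0
192.168.56.101:/export/ncdb_clone2/redo     /zfssa/ncdb_clone2/redo     nfs  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600  0 0
192.168.56.101:/export/ncdb_clone2/archive  /zfssa/ncdb_clone2/archive  nfs  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600  0 0
192.168.56.101:/export/ncdb_clone2/alert    /zfssa/ncdb_clone2/alert    nfs  rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600  0 0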

The remaining steps are the same as for the creation of CLONE1, and I am not repeating them here. Because it is very important, here is the warning again: be careful with the path information in the create controlfile statement and make sure all paths point to /zfssa/ncdb_clone2/!

Reviewing the Outcome

After CLONE2 is created, my initial questions can be answered.

  1. Is the new tablespace part of CLONE2?
  2. Can I access the newly created table new_user.t1 in CLONE2?
  3. Did anything change for CLONE1?

Starting with CLONE2, I can see the new tablespace, and the data in new_user.t1 is available as well:

SQL> select name from v$database;

NAME
---------
CLONE2

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
EXAMPLE
DROPME

7 rows selected.

SQL> select count(*) from new_user.t1;

  COUNT(*)
----------
       100

SQL>

And what about CLONE1?

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
EXAMPLE

6 rows selected.

SQL> select count(*) from new_user.t1;
select count(*) from new_user.t1
                              *
ERROR at line 1:
ORA-00942: table or view does not exist

Well that looks ok!

Posted in Linux, Oracle

Experimenting with the ZFSSA’s snapshot capability using the simulator

Posted by Martin Bach on May 17, 2016

Recently I have been asked how Copy-on-Write cloning works on the ZFS Storage Appliance. More specifically, the question was about the “master” copy: did it have to be static, or could it be rolled forward? What better way to find out than a test? Unfortunately I don’t have an actual system available to me at home, so I had to resort to the simulator, hoping that it represents the real appliance accurately.

Setup

First I downloaded the ZFS Storage Appliance Simulator from the Oracle website and created a nice, new, shiny (albeit virtual) storage system. Furthermore I have an Oracle Linux 7 system with UEK3 that will attach to the ZFSSA using dNFS. The appliance has an IP address of 192.168.56.101 while the Linux system is accessible via 192.168.56.20. This is of course a virtual toy environment; a real-life setup would look quite different, using IPMP and multiple paths, preferably over InfiniBand.

Configuration

Configuring the system is a two step process. The first is to create a storage pool on the ZFSSA that will host database backups, snapshots and clones. The second part is the configuration of the database server to use dNFS. I have written about that in detail in a previous blog post: https://martincarstenbach.wordpress.com/2014/07/09/setting-up-direct-nfs-on-oracle-12c/.

Step 1: Configure the ZFSSA

Again, this is the simulator and I can only wish I had the real thing :) I created a mirrored pool across all disks (that’s possible at this point because I skipped the pool creation during the initial appliance configuration). Navigating to Configuration -> Storage I clicked on the + button to create the pool and assigned all disks to it. I used a mirrored configuration, which again is down to my lab setup. Depending on your type of (Exadata) backup you would probably choose something else. There are white papers that explain the best data profile based on the workload.

Next I created a new project, named NCDB_BKP to have a common location to set attributes. I tend to set a different mount point, in this case /export/ncdb_bkp to group all shares about to be created. Set the other attributes (compression, record size, access permissions etc) according to your workload. Following the recommendation in the white paper listed in the reference section I created 4 shares under the NCDB_BKP project:
– data
– redo
– alert
– archive

You probably get where this is heading … With those steps I took from the white paper listed in the reference section, the setup of the ZFSSA Simulator is done, at least for now. Head over to the database server.

Step 2: configure the database server

On the database server I customarily create a /zfssa/<projectName>/ mount point where I’m intending to mount the project’s shares. In other words I have this:

[oracle@oraclelinux7 ~]$ ls -l /zfssa/ncdb_bkp
total 0
drwxr-xr-x. 2 oracle oinstall 6 Jan 12 16:28 alert
drwxr-xr-x. 2 oracle oinstall 6 Jan 12 16:28 archive
drwxr-xr-x. 2 oracle oinstall 6 Jan 12 16:28 data
drwxr-xr-x. 2 oracle oinstall 6 Jan 12 16:28 redo

These will be mounted from the ZFSSA; edit your fstab to mount the shares from the appliance (simulator). When mounted, you’d see something like this:

[root@oraclelinux7 ~]# mount | awk '/^[0-9].*/ {print $1}'
192.168.56.101:/export/ncdb_bkp/alert
192.168.56.101:/export/ncdb_bkp/archive
192.168.56.101:/export/ncdb_bkp/data
192.168.56.101:/export/ncdb_bkp/redo

The next step is to add those to the oranfstab file, which I covered in the previous post I referred to; a sketch follows below. That should be it for now! Time to take a backup of the source database in preparation for the cloning. Be careful: adding image copies to an existing backup strategy might have adverse side effects, so make sure you understand the implications of this technique and its impact. As always, test thoroughly!
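For reference, a minimal oranfstab entry for this setup might look like the sketch below; “local” is the database server’s address and “path” the appliance’s. The alert share does not need an entry since dNFS only serves database files:

# $ORACLE_HOME/dbs/oranfstab - a minimal sketch for this lab
server: zfssa
local: 192.168.56.20
path: 192.168.56.101
export: /export/ncdb_bkp/data mount: /zfssa/ncdb_bkp/data
export: /export/ncdb_bkp/redo mount: /zfssa/ncdb_bkp/redo
export: /export/ncdb_bkp/archive mount: /zfssa/ncdb_bkp/archive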

Creating a backup

Let’s have a look at the database before taking a backup:

[oracle@oraclelinux7 ~]$ NLS_DATE_FORMAT="dd.mm.yyyy hh24:mi:ss" rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Mar 3 10:12:28 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: NCDB (DBID=3358649481)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name NCDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               YES     +DATA/NCDB/DATAFILE/system.279.905507757
3    610      SYSAUX               NO      +DATA/NCDB/DATAFILE/sysaux.273.905507723
4    280      UNDOTBS1             YES     +DATA/NCDB/DATAFILE/undotbs1.259.905507805
5    1243     EXAMPLE              NO      +DATA/NCDB/DATAFILE/example.283.905507865
6    5        USERS                NO      +DATA/NCDB/DATAFILE/users.266.905507803

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    60       TEMP                 32767       +DATA/NCDB/TEMPFILE/temp.282.905507861

RMAN>

Nothing special, just a standard DBCA-created General Purpose database … Time to take the image copy.

RMAN> @/u01/app/oracle/admin/NCDB/scripts/imagecopy.rman

RMAN> run {
2>   allocate channel c1 device type disk format '/zfssa/ncdb_bkp/data/%U';
3>   allocate channel c2 device type disk format '/zfssa/ncdb_bkp/data/%U';
4>   backup incremental level 1 for recover of copy with tag 'zfssa' database ;
5>   recover copy of database with tag 'zfssa';
6> }

allocated channel: c1
channel c1: SID=36 device type=DISK

allocated channel: c2
channel c2: SID=258 device type=DISK

Starting backup at 03.03.2016 14:48:25
no parent backup or copy of datafile 5 found
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 4 found
no parent backup or copy of datafile 6 found
channel c1: starting datafile copy
input datafile file number=00005 name=+DATA/NCDB/DATAFILE/example.283.905507865
channel c2: starting datafile copy
input datafile file number=00001 name=+DATA/NCDB/DATAFILE/system.279.905507757
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-SYSTEM_FNO-1_6jqvie1q 
  tag=ZFSSA RECID=36 STAMP=905525356
channel c2: datafile copy complete, elapsed time: 00:00:55
channel c2: starting datafile copy
input datafile file number=00003 name=+DATA/NCDB/DATAFILE/sysaux.273.905507723
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-EXAMPLE_FNO-5_6iqvie1p 
  tag=ZFSSA RECID=37 STAMP=905525386
channel c1: datafile copy complete, elapsed time: 00:01:23
channel c1: starting datafile copy
input datafile file number=00004 name=+DATA/NCDB/DATAFILE/undotbs1.259.905507805
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-UNDOTBS1_FNO-4_6lqvie4d
  tag=ZFSSA RECID=39 STAMP=905525414
channel c1: datafile copy complete, elapsed time: 00:00:25
channel c1: starting datafile copy
input datafile file number=00006 name=+DATA/NCDB/DATAFILE/users.266.905507803
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-SYSAUX_FNO-3_6kqvie3k
  tag=ZFSSA RECID=38 STAMP=905525409
channel c2: datafile copy complete, elapsed time: 00:00:51
output file name=/zfssa/ncdb_bkp/data/data_D-NCDB_I-3358649481_TS-USERS_FNO-6_6mqvie57
  tag=ZFSSA RECID=40 STAMP=905525416
channel c1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 03.03.2016 14:50:18

Starting recover at 03.03.2016 14:50:18
no copy of datafile 1 found to recover
no copy of datafile 3 found to recover
no copy of datafile 4 found to recover
no copy of datafile 5 found to recover
no copy of datafile 6 found to recover
Finished recover at 03.03.2016 14:50:19
released channel: c1
released channel: c2

RMAN> **end-of-file**

The Oracle white paper I just mentioned proposes using the %b flag as part of the RMAN formatSpec in the backup command, which interestingly does not work. The second time I ran it, it failed like this:

RMAN> @/u01/app/oracle/admin/NCDB/scripts/imagecopy.rman

RMAN> run {
2>   allocate channel c1 device type disk format '/zfssa/ncdb_bkp/data/%b';
3>   allocate channel c2 device type disk format '/zfssa/ncdb_bkp/data/%b';
4>   backup incremental level 1 for recover of copy with tag 'zfssa' database ;
5>   recover copy of database with tag 'zfssa';
6> }

allocated channel: c1
channel c1: SID=36 device type=DISK

allocated channel: c2
channel c2: SID=258 device type=DISK

Starting backup at 03.03.2016 14:42:04
channel c1: starting incremental level 1 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA/NCDB/DATAFILE/example.283.905507865
input datafile file number=00006 name=+DATA/NCDB/DATAFILE/users.266.905507803
input datafile file number=00004 name=+DATA/NCDB/DATAFILE/undotbs1.259.905507805
channel c1: starting piece 1 at 03.03.2016 14:42:04
RMAN-03009: failure of backup command on c1 channel at 03/03/2016 14:42:04
ORA-19715: invalid format b for generated name
ORA-27302: failure occurred at: slgpn
continuing other job steps, job failed will not be re-run
channel c2: starting incremental level 1 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/NCDB/DATAFILE/system.279.905507757
input datafile file number=00003 name=+DATA/NCDB/DATAFILE/sysaux.273.905507723
channel c2: starting piece 1 at 03.03.2016 14:42:04
released channel: c1
released channel: c2
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on c2 channel at 03/03/2016 14:42:04
ORA-19715: invalid format b for generated name
ORA-27302: failure occurred at: slgpn

RMAN> **end-of-file**

This makes sense: Oracle tries to create an incremental backup with the same name as the data file copy, which would erase the copy and replace it with the incremental backup. Thankfully RMAN does not allow that to happen. This is why I chose the %U flag in the formatSpec: it allows the incremental backup to be created successfully in addition to the datafile image copies. I am conscious of the fact that the image copies have somewhat ugly names.

After this little digression it’s time to back up the archived logs. When using copies of archived logs you need to make sure that you don’t have overlapping backups. The Oracle white paper has the complete syntax; I’ll spare you the detail as it’s rather boring, but a bare-bones sketch follows below.
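The sketch deliberately leaves out the white paper’s extra clauses; the formatSpec (%h = thread, %e = log sequence, %r = resetlogs id) mirrors the file names you will see in the listing below:

# archived log image copies - a sketch, see the white paper for the full syntax
rman target / <<'EOF'
backup as copy archivelog all format '/zfssa/ncdb_bkp/archive/%h_%e_%r.arc';
EOF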

After some time the result is a set of image copies of the database plus the archived logs:

[oracle@oraclelinux7 ~]$ ls -lR /zfssa/ncdb_bkp/
/zfssa/ncdb_bkp/:
total 14
drwxr-xr-x. 2 oracle dba 2 Mar  3 10:32 alert
drwxr-xr-x. 2 oracle dba 4 Mar  3  2016 archive
drwxr-xr-x. 2 oracle dba 7 Mar  3  2016 data
drwxr-xr-x. 2 oracle dba 2 Mar  3 10:32 redo

/zfssa/ncdb_bkp/alert:
total 0

/zfssa/ncdb_bkp/archive:
total 2821
-r--r-----. 1 oracle asmdba 2869760 Mar  3  2016 1_117_905507850.arc
-r--r-----. 1 oracle asmdba    1024 Mar  3  2016 1_118_905507850.arc

/zfssa/ncdb_bkp/data:
total 3001657
-rw-r-----. 1 oracle asmdba 1304174592 Mar  3  2016 data_D-NCDB_I-3358649481_TS-EXAMPLE_FNO-5_6iqvie1p
-rw-r-----. 1 oracle asmdba  639639552 Mar  3  2016 data_D-NCDB_I-3358649481_TS-SYSAUX_FNO-3_6kqvie3k
-rw-r-----. 1 oracle asmdba  828383232 Mar  3  2016 data_D-NCDB_I-3358649481_TS-SYSTEM_FNO-1_6jqvie1q
-rw-r-----. 1 oracle asmdba  293609472 Mar  3  2016 data_D-NCDB_I-3358649481_TS-UNDOTBS1_FNO-4_6lqvie4d
-rw-r-----. 1 oracle asmdba    5251072 Mar  3  2016 data_D-NCDB_I-3358649481_TS-USERS_FNO-6_6mqvie57

/zfssa/ncdb_bkp/redo:
total 0

[oracle@oraclelinux7 ~]$

Create a Clone

Now that the database is backed up in the form of an image copy, and I have the archived redo logs, I can create a clone of it. This requires you to jump back to the ZFSSA interface (either CLI or BUI) and create snapshots, followed by clones, of the shares used.

One way is to log in to the BUI, select the NCDB_BKP project, navigate to “snapshots” and create one by clicking on the + button. I named it snap0. If you plan on doing this more regularly it can also be scripted.

A clone is a writeable snapshot and is just as easy to create. Add a new project – for example NCDB_CLONE1 – and set it up as required for your workload. In the next step you need to switch back to the backup project and create a clone for each of the 4 shares. To do so, navigate to the list of shares underneath NCDB_BKP and click on the pencil icon. This takes you to the share settings. Click on snapshots and you should see snap0. Hovering the mouse over the snapshot’s name reveals additional icons on the right, one of which allows you to create a clone (“clone snapshot as a new share”). Hit that plus sign, change the project to your clone (NCDB_CLONE1) and assign a name to the clone. I tend to use the same name as the source. The mount point should automatically be updated to /export/ncdb_clone1/sharename.

Now you need to get back to the database server and add the mount points for the recently created clones: /zfssa/ncdb_clone1/{data,redo,archive,alert}. Edit the fstab and oranfstab files, then mount the new shares.

Finishing the clone creation

The following procedure is most likely familiar to DBAs who have created databases as file system copies. The steps are roughly as follows (a condensed sketch of the commands appears further down):
– register the database in oratab
– create an initialisation file
– create a password file
– back up the source controlfile to trace in order to create a “create controlfile” statement for the clone
– start the clone
– create the controlfile
– recover the clone using the newly created backup controlfile
– open the clone with the resetlogs option
– add temp file(s)

Based on my source database’s parameter file, I created the following for the clone database which I’ll call CLONE1:

*.audit_file_dest='/u01/app/oracle/admin/CLONE1/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/zfssa/ncdb_clone1/redo/control1.ctl'
*.db_block_size=8192
*.db_create_file_dest='/zfssa/ncdb_clone1/data'
*.db_domain=''
*.db_name='CLONE1'
*.db_recovery_file_dest='/zfssa/ncdb_clone1/archive'
*.db_recovery_file_dest_size=4560m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CLONE1XDB)'
*.nls_language='ENGLISH'
*.nls_territory='UNITED KINGDOM'
*.open_cursors=300
*.pga_aggregate_target=512m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1024m
*.undo_tablespace='UNDOTBS1'

Notice how some of the filesystem related parameters have changed to point to the mount points exported via NFS from the ZFSSA. I refrained from pointing diagnostic_dest at the ZFSSA simulator: doing so prevented the database from starting (perf told me the sqlplus session spent all its time trying to use the network).

I’ll spare you the details of the create controlfile command; just make sure you change the paths so they point to the data files on the cloned shares (/zfssa/ncdb_clone1/data/*) and NOT to the ones where the master copy resides. After the controlfile is created, recover the database using the backup controlfile, then open it with the resetlogs option. Voila! You have just created CLONE1. Just add a temp file and you are almost good to go.
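Condensed into a sketch, the finishing steps might look like this. The script name /tmp/create_controlfile_clone1.sql is made up; it stands for your edited “backup controlfile to trace” output, and the 60m temp file matches the source database’s report schema:

export ORACLE_SID=CLONE1
sqlplus / as sysdba <<'EOF'
startup nomount
-- every path in this script must reference /zfssa/ncdb_clone1/...
@/tmp/create_controlfile_clone1.sql
EOF
# the recovery step is interactive (it prompts for each archived log), so run
# it in a sqlplus session rather than a heredoc:
#   recover database using backup controlfile until cancel;   -- apply logs, then CANCEL
#   alter database open resetlogs;
#   alter tablespace TEMP add tempfile '/zfssa/ncdb_clone1/data/temp01.dbf' size 60m;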

In the next part

The post has already become a bit too long, so I’ll stop here and split it into this one and a second part. In the next part you will read about changes to the source database (NCDB) and how I’ll roll the image copies forward.

References

http://www.oracle.com/technetwork/articles/systems-hardware-architecture/cloning-solution-353626.pdf

Posted in Linux, Oracle

Trouble with multiple SCAN listeners in 12c Release 1

Posted by Martin Bach on May 14, 2016

Prompted by comments made by readers about my posts describing how to add a second SCAN in 12c Release 1, and problems regarding the listener_networks parameter, I thought it was time to revisit the situation. I’m still running Oracle Linux 7.1/UEK3 (although that should not matter) but upgraded to 12.1.0.2.5. This is the release that reportedly shows the odd behaviour. I don’t recall my exact version when I wrote the original posts back in April 2014, but looking at them I think it all worked ok at the time. Here is my current patch level after the upgrade to the troublesome PSU.

 SQL> select patch_id,description,status from DBA_REGISTRY_SQLPATCH;

  PATCH_ID DESCRIPTION                                                                      STATUS
---------- -------------------------------------------------------------------------------- ---------------
  21555660 Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)                       SUCCESS
  21359755 Database Patch Set Update : 12.1.0.2.5 (21359755)                                SUCCESS

This is not the latest patch level! After having seen a presentation by Ludovico Caldara (@ludodba) just this week, I would say the April 2016 PSU, current at the time of writing, is the one you want to be on :) I have an upgrade to it planned, but first wanted to test against 12.1.0.2.5 to see if I could reproduce the issue.

Problem Statement

The parameter listener_networks should be populated automatically by CRS when the database comes up, and indeed you can see evidence of that in the respective node’s crsd_oraagent_oracle.trc file. However, with 2 SCANs present, something is not quite right. You can see this in the usual locations:

– SCAN listeners
– listener_networks parameter
– CRSD trace

The most obvious clue is that you cannot connect to the database any more using one of the SCANs. To avoid you having to flick back and forth between my posts, here is the network configuration again:

[oracle@ron12cprinode2 ~]$ srvctl config scan -netnum 1
SCAN name: ron12cpri-scan, Network: 1
Subnet IPv4: 192.168.100.0/255.255.255.0/eth0, static
Subnet IPv6: 
SCAN 0 IPv4 VIP: 192.168.100.65
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 1 IPv4 VIP: 192.168.100.66
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 2 IPv4 VIP: 192.168.100.64
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 

[oracle@ron12cprinode2 ~]$ srvctl config scan -netnum 2
SCAN name: ron12cpri-dg-scan, Network: 2
Subnet IPv4: 192.168.102.0/255.255.255.0/eth2, static
Subnet IPv6: 
SCAN 0 IPv4 VIP: 192.168.102.66
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 1 IPv4 VIP: 192.168.102.64
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 
SCAN 2 IPv4 VIP: 192.168.102.65
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes: 
SCAN VIP is individually disabled on nodes: 

It’s essentially using the 192.168.100/24 network for the “public” traffic and 192.168.102/24 for Data Guard. I still use my RAC One Node database RON, which is currently active on node 2. All of my SCAN listeners should know about its services, RON_SRV and RON_DG_SRV. However, that’s not the case, as a quick check reveals:

[oracle@ron12cprinode1 ~]$ lsnrctl status LISTENER_SCAN1

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016 19:56:14

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016 19:21:39
Uptime                    0 days 0 hr. 34 min. 35 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.65)(PORT=1521)))
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "RON" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RONXDB" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_DG_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "ron12cpri" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@ron12cprinode1 ~]$ lsnrctl status DGSCANLSNR_SCAN2_NET2

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016 19:56:23

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=DGSCANLSNR_SCAN2_NET2)))
STATUS of the LISTENER
------------------------
Alias                     DGSCANLSNR_SCAN2_NET2
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016 19:21:42
Uptime                    0 days 0 hr. 34 min. 41 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/dgscanlsnr_scan2_net2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=DGSCANLSNR_SCAN2_NET2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.102.64)(PORT=1521)))
The listener supports no services
The command completed successfully
[oracle@ron12cprinode1 ~]$ 

Before generating the above output I specifically “reset” the listener_networks settings on both instances, to make sure the parameter would only ever be set dynamically. The reset was along the lines of this sketch (standard ALTER SYSTEM RESET syntax):
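sqlplus / as sysdba <<'EOF'
-- remove any stored value so CRS is free to set it at instance startup
alter system reset listener_networks scope=spfile sid='*';
EOF

After the database restart I couldn’t make out ANY entry for listener_networks: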

SQL> select inst_id,name,value from gv$parameter where name = 'listener_networks';

   INST_ID NAME                           VALUE
---------- ------------------------------ ------------------------------
         1 listener_networks

SQL> select inst_id,name,value from gv$spparameter where name = 'listener_networks';

   INST_ID NAME                           VALUE
---------- ------------------------------ ------------------------------
         1 listener_networks

Nothing at all. I can still reach the system using the RON_SRV service that’s known to the first (public) network’s SCAN. Note that I deliberately used dummy credentials; the ORA-01017 below proves that the listener knows the service and hands the connection to the database:

[oracle@ron12cprinode1 ~]$ sqlplus a/b@ron12cpri-scan/RON_SRV

SQL*Plus: Release 12.1.0.2.0 Production on Sat May 14 20:18:02 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: 
[oracle@ron12cprinode1 ~]$ 

The same test failed for connections against ron12cpri-dg-scan, stating that the listener didn’t know about the service. Checking the CRSD trace (on the node the instance runs!) I could see the reason:

2016-05-14 19:47:49.637611 : USRTHRD:2023044864: {1:58687:2893} Endp=ron12cpri-dg-scan:1521
2016-05-14 19:47:49.637638 : USRTHRD:2023044864: {1:58687:2893} Final Endp=(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521)), remoteEndp= ron12cpri-scan:1521','ron12cpri-scan:1521','ron12cpri-scan:1521
2016-05-14 19:47:49.637662 : USRTHRD:2023044864: {1:58687:2893} sqlStmt = ALTER SYSTEM SET REMOTE_LISTENER=' ron12cpri-scan:1521','ron12cpri-scan:1521','ron12cpri-scan:1521' SCOPE=MEMORY SID='RON_1' /* db agent *//* {1:58687:2893} */
2016-05-14 19:47:49.645739 : USRTHRD:2023044864: {1:58687:2893} sqlStmt = ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1' /* db agent *//* {1:58687:2893} */
2016-05-14 19:47:49.655035 : USRTHRD:2023044864: {1:58687:2893} ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00119: invalid specification for system parameter LISTENER_NETWORKS
ORA-00137: invalid LISTENER_NETWORKS specification with NAME=net2
ORA-00138: all addresses specified for attribute REMOTE_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521'

2016-05-14 19:47:49.655191 : USRTHRD:2023044864: {1:58687:2893} DbAgent::DedicatedThread::run setRemoteListener Exception OCIException
2016-05-14 19:47:49.655207 : USRTHRD:2023044864: {1:58687:2893} ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00119: invalid specification for system parameter LISTENER_NETWORKS
ORA-00137: invalid LISTENER_NETWORKS specification with NAME=net2
ORA-00138: all addresses specified for attribute REMOTE_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521ron12cpri-dg-scan:1521'

Looking at this output it appears that setting remote_listener worked, although I thought we’d only set the host:port combination once, not three times. This looks fishy. It appears to work though, as confirmed in v$parameter and by the fact that I can connect against the system.

Interestingly, setting listener_networks fails with ORA-02097: parameter cannot be modified because specified value is invalid. This makes sense: the white spaces between the repeated host:port entries are missing in the alter system command, and even with the spaces added the command would still fail. Trying manually confirms that thought:

SQL> ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1';
ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00119: invalid specification for system parameter LISTENER_NETWORKS
ORA-00137: invalid LISTENER_NETWORKS specification with NAME=net2
ORA-00138: all addresses specified for attribute REMOTE_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'ron12cpri-dg-scan:1521
ron12cpri-dg-scan:1521 ron12cpri-dg-scan:1521'

OK, that’s all I need to know. When I change the command to what I thought it should have looked like in the first place (the remote listener entry specifying only one host:port), it works:

SQL> ALTER SYSTEM SET LISTENER_NETWORKS='(( NAME=net2)(LOCAL_LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.102.63)(PORT=1523)))))','(( NAME=net2)(REMOTE_LISTENER=ron12cpri-dg-scan:1521))' SCOPE=MEMORY SID='RON_1';

System altered.

SQL> alter system register;

System altered.

What does that mean? Back to my SCAN listeners again:

[oracle@ron12cprinode1 ~]$ lsnrctl status LISTENER_SCAN1

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016 20:07:19

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------   
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016 19:21:39
Uptime                    0 days 0 hr. 45 min. 40 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.65)(PORT=1521)))
Services Summary...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "RON" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RONXDB" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_DG_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "ron12cpri" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully

No negative change there, but would the DG_SCAN listener also pick it up?

[oracle@ron12cprinode1 ~]$ lsnrctl status DGSCANLSNR_SCAN2_NET2

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-MAY-2016 20:07:21

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=DGSCANLSNR_SCAN2_NET2)))
STATUS of the LISTENER
------------------------   
Alias                     DGSCANLSNR_SCAN2_NET2
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                14-MAY-2016 19:21:42
Uptime                    0 days 0 hr. 45 min. 39 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ron12cprinode1/dgscanlsnr_scan2_net2/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=DGSCANLSNR_SCAN2_NET2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.102.64)(PORT=1521)))
Services Summary...
Service "RON" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RONXDB" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_DG_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
Service "RON_SRV" has 1 instance(s).
  Instance "RON_1", status READY, has 1 handler(s) for this service...
The command completed successfully

Well it seems it did. Now the question is: can I connect?

[oracle@ron12cprinode1 ~]$ sqlplus system/secretPWD@ron12cpri-scan/RON_SRV

SQL*Plus: Release 12.1.0.2.0 Production on Sat May 14 20:08:45 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-28002: the password will expire within 7 days


Last Successful login time: Sat May 14 2016 19:58:28 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
[oracle@ron12cprinode1 ~]$ sqlplus system/secretPWD@ron12cpri-dg-scan/RON_DG_SRV

SQL*Plus: Release 12.1.0.2.0 Production on Sat May 14 20:08:54 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

ERROR:
ORA-28002: the password will expire within 7 days


Last Successful login time: Sat May 14 2016 20:08:45 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> 

Summary

Well I can! So that should solve the problem for the active instance. However, there are problems bound to happen when the instance restarts: since I don’t really have control over the instance name in RAC One Node (RON_1 can be started on node 1 and node 2), I can’t hard-code the value for listener_networks into the spfile. As an end result I’d lock myself out, just like CRS did. This is likely a similar issue for multi-node RAC using policy managed databases.

I have repeated the test with the latest version of the stack (upgraded in place), and got the same result. Here are the version numbers:

[oracle@ron12cprinode2 ~]$ /u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/opatch lspatches
22674709;Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
22502555;OCW Patch Set Update : 12.1.0.2.160419 (22502555)
22291127;Database Patch Set Update : 12.1.0.2.160419 (22291127)

OPatch succeeded.

[oracle@ron12cprinode2 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch lspatches
22674709;Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)
22502555;OCW Patch Set Update : 12.1.0.2.160419 (22502555)
22291127;Database Patch Set Update : 12.1.0.2.160419 (22291127)

OPatch succeeded.

SQL> select action_time, patch_id,description,status from DBA_REGISTRY_SQLPATCH order by action_time;

ACTION_TIME                                PATCH_ID DESCRIPTION                                                            STATUS
---------------------------------------- ---------- ---------------------------------------------------------------------- ---------------
29-OCT-15 12.46.55.763581                  21555660 Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)             SUCCESS
29-OCT-15 12.46.55.939750                  21359755 Database Patch Set Update : 12.1.0.2.5 (21359755)                      SUCCESS
14-MAY-16 21.32.15.211167                  21555660 Database PSU 12.1.0.2.5, Oracle JavaVM Component (Oct2015)             SUCCESS
14-MAY-16 21.32.15.233105                  22674709 Database PSU 12.1.0.2.160419, Oracle JavaVM Component (Apr2016)        SUCCESS
14-MAY-16 21.32.15.591460                  22291127 Database Patch Set Update : 12.1.0.2.160419 (22291127)                 SUCCESS

The same values for remote_listener and listener_networks as with 12.1.0.2.5 have been observed, and the error about setting listener_networks in the CRSD log was identical to the previous release. I guess that needs to be raised with Oracle …

Posted in 12c Release 1, Linux, RAC

TABLE ACCESS INMEMORY FULL – but there may be more

Posted by Martin Bach on November 13, 2015

While preparing to teach a class this week I have had the time to look into the In-Memory (cost) option a bit more closely. I noticed a few interesting things along the way and would like to share one of them here.

Background

One of the questions I was asking myself was:

“What happens if I scan segments that are within the IM area, and some are not?”

I was primarily thinking of joins in a DWH environment, but in order to keep the test case simple and reproducible I decided to go with a partitioned table where the current partition is assigned to the IMCS and the rest is not. For this test I’m relying on the Swingbench SH schema. All of this runs on my laptop in a VM, so I had to be selective when it comes to indexes. I also chose NOT to partition the tables at this stage; I wanted to choose my own partitioning scheme. For reference, here is the command that created the SH schema:

[oracle@oraclelinux7 bin]$ ./shwizard -cl -create -cs //localhost/sh_pdb \
> -dba system -dbap somePassword -nopart -pkindexes -scale 1 -tc 2 -u sh -p sh
...
============================================
|           Datagenerator Run Stats        |
============================================
Connection Time                        0:00:00.005
Data Generation Time                   0:04:35.890
DDL Creation Time                      0:00:14.063
Total Run Time                         0:04:49.962
Rows Inserted per sec                       64,707
Data Generated (MB) per sec                    5.3
Actual Rows Generated                   17,848,007

...

I wanted to create a range partitioned version of the sales table. After the SH wizard completed, the data distribution was as follows:

SQL> select count(*), trunc(time_id,'yy') year from sales group by trunc(time_id,'yy') order by year;

  COUNT(*) YEAR
---------- ---------
    130653 01-JAN-95
    149319 01-JAN-96
    251974 01-JAN-97
    326632 01-JAN-98
    365547 01-JAN-99
    388318 01-JAN-00
    393919 01-JAN-01
    406703 01-JAN-02
    483605 01-JAN-03
    566384 01-JAN-04
    422289 01-JAN-05
    619858 01-JAN-06
    782244 01-JAN-07
   1099551 01-JAN-08
   1249340 01-JAN-09
   1346025 01-JAN-10
   1690302 01-JAN-11
   2028048 01-JAN-12
   2028048 01-JAN-13

19 rows selected.

Based on this information I can create a table that has the data from 2013 in the IMCS, with everything else excluded from it. For the sake of the following discussion, 2013 is considered the “current” partition. Here is the partitioned sales table’s DDL, with my changes to implement my partitioning scheme.

CREATE TABLE SALES_PART
  (
    PROD_ID            NUMBER NOT NULL ENABLE,
    CUST_ID            NUMBER NOT NULL ENABLE,
    TIME_ID            DATE NOT NULL ENABLE,
    CHANNEL_ID         NUMBER NOT NULL ENABLE,
    PROMO_ID           NUMBER NOT NULL ENABLE,
    QUANTITY_SOLD      NUMBER(10,2) NOT NULL ENABLE,
    SELLER             NUMBER(6,0) NOT NULL ENABLE,
    FULFILLMENT_CENTER NUMBER(6,0) NOT NULL ENABLE,
    COURIER_ORG        NUMBER(6,0) NOT NULL ENABLE,
    TAX_COUNTRY        VARCHAR2(3) NOT NULL ENABLE,
    TAX_REGION         VARCHAR2(3),
    AMOUNT_SOLD        NUMBER(10,2) NOT NULL ENABLE
  )
  partition BY range (time_id)
  (
    partition p_old VALUES less than       (DATE '2001-01-01'),
    partition p_2000_2010 VALUES less than (DATE '2011-01-01'),
    partition p_2011 VALUES less than      (DATE '2012-01-01'),
    partition p_2012 VALUES less than      (DATE '2013-01-01'),
    partition p_2013 VALUES less than      (DATE '2014-01-01') inmemory
  )
  TABLESPACE SH;

SQL> alter session enable parallel dml;

Session altered.

SQL> insert /*+ parallel append */ into sales_part select /*+ parallel */ * from sales
  2  /

14728759 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(user,'SALES_PART')

PL/SQL procedure successfully completed.

SQL> select partition_name, num_rows, inmemory
  2  from user_tab_partitions
  3  where table_name = 'SALES_PART'
  4  order by partition_position;

PARTITION_NAME                   NUM_ROWS INMEMORY
------------------------------ ---------- --------
P_OLD                             1612443 DISABLED
P_2000_2010                       7369918 DISABLED
P_2011                            1690302 DISABLED
P_2012                            2028048 DISABLED
P_2013                            2028048 ENABLED

So that should give me what I needed. The IMCS was now populated with the segment as well:

SQL> select segment_name, partition_name, bytes_not_populated, populate_status from v$im_user_segments;

SEGMENT_NAME         PARTITION_NAME                 BYTES_NOT_POPULATED POPULATE_
-------------------- ------------------------------ ------------------- ---------
SALES_PART           P_2013                                           0 COMPLETED
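An aside: had the table already existed in this form, the IMCS attribute could have been set without recreating it, using the documented INMEMORY clause of ALTER TABLE. A sketch, with the connection details taken from the shwizard call above:

sqlplus sh/sh@//localhost/sh_pdb <<'EOF'
alter table sales_part modify partition p_2013 inmemory;
EOF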

Test

With the setup work complete I am ready to test. First of all, a simple SQL trace should show me what is happening. Note that I am specifically targeting the “current” partition.

SQL> sho user
USER is "SH"
SQL> alter session set events 'sql_trace level 8';

Session altered.

SQL> select /* current_partition */ count(*) from sales_part
  2  where time_id > DATE '2013-01-02'
  3  and promo_id = 316;

  COUNT(*)
----------
      3947

SQL> alter session set events 'sql_trace off';

Session altered.

SQL> select value from v$diag_info where name like 'Def%';

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_14034.trc

The tkprof’d trace shows this result:

********************************************************************************

select /* current_partition */ count(*) from sales_part
where time_id > DATE '2013-01-02'
and promo_id = 316

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          2          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          0          5          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 117  
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=3 pr=0 pw=0 time=8441 us)
      3947       3947       3947   PARTITION RANGE SINGLE PARTITION: 5 5 (cr=3 pr=0 pw=0 time=7583 us cost=214 size=48252 card=4021)
      3947       3947       3947    TABLE ACCESS INMEMORY FULL SALES_PART PARTITION: 5 5 (cr=3 pr=0 pw=0 time=6972 us cost=214 size=48252 card=4021)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        6.79          6.79
********************************************************************************

As you can see, my partitioning strategy paid off: only the 5th partition is accessed (that’s p_2013, or the “current” partition based on the data). This partition is the one entirely found in the IMCS.

A result like the one above is what I’d expect, and the access path “TABLE ACCESS INMEMORY FULL” does it justice. But now to the actual reason for this blog post: mixing scans against segments in memory and on disk. Logging on again, I ran the same query but without the restriction to 2013.

SQL> alter session set events 'sql_trace level 8';

Session altered.

SQL> select /* every_partition */ count(*) from sales_part
  2  where promo_id = 316;

  COUNT(*)
----------
     28993

SQL> alter session set events 'sql_trace off';

Session altered.

SQL> select value from v$diag_info where name like 'Def%';

VALUE
----------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb/CDB/trace/CDB_ora_14437.trc

The tkprof’d result is shown here:

select /* every_partition */ count(*) from sales_part
where promo_id = 316

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      3.15       3.84      94878      94901          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      3.15       3.84      94878      94901          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 117  
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE (cr=94901 pr=94878 pw=0 time=3848392 us)
     28993      28993      28993   PARTITION RANGE ALL PARTITION: 1 5 (cr=94901 pr=94878 pw=0 time=4503583 us cost=26217 size=117128 card=29282)
     28993      28993      28993    TABLE ACCESS INMEMORY FULL SALES_PART PARTITION: 1 5 (cr=94901 pr=94878 pw=0 time=4254867 us cost=26217 size=117128 card=29282)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  Disk file operations I/O                        4        0.00          0.00
  db file sequential read                         6        0.00          0.01
  direct path read                               42        0.01          0.18
  SQL*Net message from client                     2        7.63          7.63
********************************************************************************

Again the access path is clearly indicated as “TABLE ACCESS INMEMORY FULL SALES_PART”. This time it references all 5 partitions, which is expected since I didn’t have a date in my predicate list. And I can see direct path reads plus some other I/O related information! Direct path reads are quite likely going to be Smart Scans on Exadata, by the way…

Of course “TABLE ACCESS INMEMORY FULL” is correct (because one partition is accessed that way), but the disk-based I/O against the other segments is not visible in the regular execution plan as shown in SQL*Plus, for example.

Is there any other instrumentation I could use?

The tkprof output shows that data retrieval is possible for segments that are not (entirely) found in the IMCS. Where else could I learn about this fact? Session statistics (V$STATNAME, V$SESSTAT, etc.) are another useful source of information. I am a great fan of session snapper (who isn’t?). Snapper can be used to display the change in the session counters while the session you are monitoring is running. Another option is Adrian Billington’s mystats script. It can help you capture the changes in session statistics during the execution of a SQL command. I executed the same query again, sandwiched into calls to mystats (see the sketch below). Note that the statistics shown next do not match those of the execution of the query above. I also tried to limit the output to the relevant statistics only; output is sorted by statistic name.
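The sandwich itself looks roughly like this; mystats accepts a few options (thresholds and the like) that vary between versions, so check the header of the script you downloaded:

sqlplus sh/sh@//localhost/sh_pdb <<'EOF'
@mystats start
select /* every_partition */ count(*) from sales_part where promo_id = 316;
@mystats stop
EOF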


------------------------------------------------------------------------------------------
2. Statistics Report
------------------------------------------------------------------------------------------
Type    Statistic Name                                                               Value
------  ----------------------------------------------------------------  ----------------

STAT    IM scan CUs columns accessed                                                     4
STAT    IM scan CUs columns theoretical max                                             48
STAT    IM scan CUs memcompress for query low                                            4
STAT    IM scan CUs predicates applied                                                   4
STAT    IM scan CUs predicates received                                                  4
STAT    IM scan CUs split pieces                                                         4
STAT    IM scan bytes in-memory                                                 47,683,308
STAT    IM scan bytes uncompressed                                              99,118,124
STAT    IM scan rows                                                             2,028,048
STAT    IM scan rows projected                                                       3,963
STAT    IM scan rows valid                                                       2,028,048
STAT    IM scan segments minmax eligible                                                 4

STAT    consistent gets                                                             95,093
STAT    consistent gets direct                                                      94,872
STAT    consistent gets examination                                                      4
STAT    consistent gets examination (fastpath)                                           4
STAT    consistent gets from cache                                                     221
STAT    consistent gets pin                                                            217
STAT    consistent gets pin (fastpath)                                                 217

STAT    logical read bytes from cache                                            9,666,560
STAT    no work - consistent read gets                                              94,871

STAT    physical read IO requests                                                      749
STAT    physical read bytes                                                    777,191,424
STAT    physical read total IO requests                                                749
STAT    physical read total bytes                                              777,191,424
STAT    physical read total multi block requests                                       749
STAT    physical reads                                                              94,872
STAT    physical reads direct                                                       94,872

STAT    session logical reads                                                      111,201
STAT    session logical reads - IM                                                  15,149

STAT    table scan blocks gotten                                                    94,872
STAT    table scan disk non-IMC rows gotten                                     12,700,711
STAT    table scan rows gotten                                                  14,728,759
STAT    table scans (IM)                                                                 1
STAT    table scans (direct read)                                                        4
STAT    table scans (long tables)                                                        5

STAT    temp space allocated (bytes)                                             1,048,576

STAT    workarea executions - optimal                                                    1
STAT    workarea memory allocated                                                        5
------------------------------------------------------------------------------------------
3. About
------------------------------------------------------------------------------------------
- MyStats v2.01 by Adrian Billington (http://www.oracle-developer.net)
- Based on the SNAP_MY_STATS utility by Jonathan Lewis
==========================================================================================
End of report
==========================================================================================

The highlights are:

  • You can see the compression ratio of the data in the IMCS (IM scan bytes in-memory and IM scan bytes uncompressed)
  • All rows in the partition were valid (the number of IM Scan rows equals the number of rows in the partition)
  • One segment was scanned using IM, and 4 were scanned as direct (path) reads. A total of 5 segments were scanned

Hope this helps!

Posted in 12c Release 1, Performance

Little things worth knowing: automatic generation of extended statistics in 12c Part II

Posted by Martin Bach on November 6, 2015

In my last post I demonstrated how to use a PL/SQL package to monitor a workload in order to identify correlated columns. Such columns are good candidates for the creation of extended statistics, since the optimiser does not assume that information stored in different columns may be related. Before starting my tests I invoked DBMS_STATS.SEED_COL_USAGE, which in turn increased Oracle’s attention level when looking for correlated columns. Eventually these were identified (as shown in the output of DBMS_STATS.REPORT_COL_USAGE) and a subsequent call to DBMS_STATS.GATHER_TABLE_STATS caused extended statistics to be created, including histograms. This is one way you can get extended statistics automatically, but it requires you to enable monitoring of a workload by invoking a PL/SQL API; in other words, it is a conscious effort. A short recap of that approach follows below.
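The connect string in the sketch is made up, and the 300 second monitoring window is an arbitrary example; the DBMS_STATS calls themselves are the documented ones:

sqlplus martin/secret@//localhost/pdb1 <<'EOF'
-- watch column usage database-wide for the next 300 seconds
exec dbms_stats.seed_col_usage(null, null, 300)
-- ... run the representative workload while monitoring is active ...
set long 100000
select dbms_stats.report_col_usage(user, 'CUSTOMERS') from dual;
-- re-gathering stats creates the extensions reported above
exec dbms_stats.gather_table_stats(user, 'CUSTOMERS')
EOF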

More Ways to Get Extended Statistics

The other way of ending up with extended statistics is more subtle, as it does not require user intervention. As described in the documentation (Database SQL Tuning Guide, chapter 13 “Managing Optimizer Statistics: Advanced Topics”) there is another way to gather extended statistics automatically. It is based on statistics feedback and SQL Plan Directives, both new to 12c (actually, statistics feedback is not quite so new; it seems to be an evolution of cardinality feedback).

Demo

To start with a clean sheet I dropped the table I had used previously, which got rid of all the extended stats and SQL Plan Directives (SPD) from the previous example. To be really thorough I also flushed the shared pool.

SQL> drop table martin.customers purge;

Table dropped.

SQL> select * from dba_stat_extensions where owner = user;

no rows selected

SQL> select count(*) from DBA_SQL_PLAN_DIR_OBJECTS where owner = user;

  COUNT(*)
----------
         0

SQL> alter system flush shared_pool;

System altered.

BTW I have switched environments to an Oracle Restart 12.1.0.2 database with the JUL 2015 PSU applied. The second patch you see here is the OJVM patch.

SQL> select patch_id, version, action from dba_registry_sqlpatch;

  PATCH_ID VERSION              ACTION
---------- -------------------- ---------------
  20831110 12.1.0.2             APPLY
  21068507 12.1.0.2             APPLY

Back to creating the table… I am using the same technique as before, but this time without the call to DBMS_STATS.SEED_COL_USAGE:

SQL> create table martin.customers as select * from sh.customers;

Table created.

SQL> select num_rows, last_analyzed from user_tables where table_name = 'CUSTOMERS';

  NUM_ROWS LAST_ANALYZED
---------- -------------------
     55500 15.10.2015 16:57:42

I love the 12c feature where stats are gathered during a CTAS operation …

And now a few queries to tickle the optimiser. Apologies for the wide output, but that makes it so much easier to use RLWRAP and command line history. Credit again to the blog post by Maria Colgan for the query examples.

SQL> select /*+ gather_plan_statistics */ count(*) from customers where cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id = 52790;

  COUNT(*)
----------
       932

SQL>
SQL> select * from table(dbms_xplan.display_cursor(format => 'allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
SQL_ID  34zmr3acgz06g, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers where
cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id
= 52790

Plan hash value: 296924608

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |      1 |        |      1 |00:00:00.03 |    1521 |   1516 |
|   1 |  SORT AGGREGATE    |           |      1 |      1 |      1 |00:00:00.03 |    1521 |   1516 |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |      1 |      1 |    932 |00:00:00.04 |    1521 |   1516 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))


22 rows selected.

There is nothing too new here: the optimiser's cardinality estimate is not great. Following the example from the SQL Tuning Guide, chapter 13 (by the way, country_id is a number, not a varchar2 field), we can now check whether anything fancy is going to happen next. And sure enough there is:

SQL> select sql_id, child_number, is_obsolete, is_reoptimizable, is_resolved_adaptive_plan from v$sql where sql_id = '34zmr3acgz06g';

SQL_ID        CHILD_NUMBER I I I
------------- ------------ - - -
34zmr3acgz06g            0 N Y

The second flag, IS_REOPTIMIZABLE, is interesting. It indicates the optimiser's intention to apply information found at run time to the next execution of the cursor. Let's run the original query again.

SQL> select /*+ gather_plan_statistics */ count(*) from customers where cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id = 52790;

  COUNT(*)
----------
       932

So what did that do? Let’s have a look at the diagnostics:


SQL> select sql_id, child_number, is_obsolete, is_reoptimizable, is_resolved_adaptive_plan from v$sql where sql_id = '34zmr3acgz06g';

SQL_ID        CHILD_NUMBER I I I
------------- ------------ - - -
34zmr3acgz06g            0 N Y
34zmr3acgz06g            1 N N

SQL> select * from table(dbms_xplan.display_cursor(sql_id=>'34zmr3acgz06g',cursor_child_no=>1,format => 'allstats last'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  34zmr3acgz06g, child number 1
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers where
cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id
= 52790

Plan hash value: 296924608

------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |      1 |        |      1 |00:00:00.01 |    1521 |
|   1 |  SORT AGGREGATE    |           |      1 |      1 |      1 |00:00:00.01 |    1521 |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |      1 |    932 |    932 |00:00:00.01 |    1521 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - statistics feedback used for this statement


26 rows selected.

As you can see another child cursor has been created. Why?

SQL> select xmltype(reason) from v$sql_shared_cursor where dbms_lob.getlength(reason) > 1 and sql_id = '34zmr3acgz06g';

XMLTYPE(REASON)
-----------------------------------------------------------------------------------------------------------------------------
<ChildNode>
  <ChildNumber>0</ChildNumber>
  <ID>49</ID>
  <reason>Auto Reoptimization Mismatch(1)</reason>
  <size>3x4</size>
  <kxscflg>32</kxscflg>
  <kxscfl4>4194560</kxscfl4>
  <dnum_kksfcxe>262144</dnum_kksfcxe>
</ChildNode>

Ah, the cursor was created because of a reoptimisation mismatch. Checking against v$sql you can see that the optimiser is now happy with the cardinality estimate (it does not anticipate further reoptimisation for this statement). Note that depending on the query's complexity many more child cursors can be created as part of the reoptimisation. Also note that for child cursor 1 the A-Rows and E-Rows are a perfect match. Our work here is done. But wait: what about that column usage?


SQL> select dbms_stats.report_col_usage(user, 'CUSTOMERS') from dual;

DBMS_STATS.REPORT_COL_USAGE(USER,'CUSTOMERS')
--------------------------------------------------------------------------------
LEGEND:
.......

EQ         : Used in single table EQuality predicate
RANGE      : Used in single table RANGE predicate
LIKE       : Used in single table LIKE predicate
NULL       : Used in single table is (not) NULL predicate
EQ_JOIN    : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER     : Used in single table FILTER predicate
JOIN       : Used in JOIN predicate
GROUP_BY   : Used in GROUP BY expression
...............................................................................

###############################################################################

COLUMN USAGE REPORT FOR MARTIN.CUSTOMERS
........................................

1. COUNTRY_ID                          : EQ
2. CUST_CITY                           : EQ
3. CUST_STATE_PROVINCE                 : EQ
###############################################################################

SQL> select * from dba_stat_extensions where owner = user;

no rows selected

Nothing here except some equality predicates! This is the big difference from Part I of this article, where the filter on the 3 columns was detected thanks to the call to DBMS_STATS.SEED_COL_USAGE.

It took me a while to connect the dots and understand what needed to be done next. In the meantime, hidden from view and in the background, Oracle created a SQL Plan Directive for that table, which I failed to notice for a while. SQL Plan Directives are not persisted immediately; the documentation states they are written to SYSAUX every 15 minutes. I must have thought about this for more than 15 minutes, as you will see shortly. If you are a bit more impatient, you can force the write of the SPD to disk using DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE.
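If you are following along, this is all it takes (DBMS_SPD is the documented package for managing SQL Plan Directives):

SQL> exec dbms_spd.flush_sql_plan_directive

PL/SQL procedure successfully completed.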

To verify if there were any SPDs I used a query taken more or less literally from the documentation, and yes, there are:

SQL> SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OBJECT_NAME,
  2         o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON
  3  FROM   DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o
  4  WHERE  d.DIRECTIVE_ID=o.DIRECTIVE_ID
  5  AND    o.OWNER = user
  6  ORDER BY 1,2,3,4,5;

DIR_ID                         OBJECT_NAME          COL_NAME             OBJECT TYPE             STATE    REASON
------------------------------ -------------------- -------------------- ------ ---------------- -------- ------------------------------------
15273172249382976180           CUSTOMERS            COUNTRY_ID           COLUMN DYNAMIC_SAMPLING USABLE   SINGLE TABLE CARDINALITY MISESTIMATE
15273172249382976180           CUSTOMERS            CUST_CITY            COLUMN DYNAMIC_SAMPLING USABLE   SINGLE TABLE CARDINALITY MISESTIMATE
15273172249382976180           CUSTOMERS            CUST_STATE_PROVINCE  COLUMN DYNAMIC_SAMPLING USABLE   SINGLE TABLE CARDINALITY MISESTIMATE
15273172249382976180           CUSTOMERS                                 TABLE  DYNAMIC_SAMPLING USABLE   SINGLE TABLE CARDINALITY MISESTIMATE

Once you can see the SPD in the dictionary, you can also see them in action when you run a statement with the same predicates but different SQL_ID, as in this example.

SQL> select /*+ gather_plan_statistics new_parse_please */ count(*) from customers where cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id = 52790;

  COUNT(*)
----------
       932

SQL> select * from table(dbms_xplan.display_cursor(format => 'allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  85qvryzgzj57q, child number 0
-------------------------------------
select /*+ gather_plan_statistics new_parse_please */ count(*) from
customers where cust_city = 'Los Angeles' and cust_state_province =
'CA' and country_id = 52790

Plan hash value: 296924608

------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |      1 |        |      1 |00:00:00.01 |    1521 |
|   1 |  SORT AGGREGATE    |           |      1 |      1 |      1 |00:00:00.01 |    1521 |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |      1 |    829 |    932 |00:00:00.01 |    1521 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement


27 rows selected.

The SQL Plan Directive stays even if you flush the cursor with SQL_ID 34zmr3acgz06g from the cursor cache or supply a different set of predicates. I used Kerry Osborne's flush_sql.sql script for this.
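In case you don't have the script to hand, the same effect can be achieved with the documented DBMS_SHARED_POOL.PURGE call; a sketch, where the address and hash value are placeholders to be taken from the first query:

SQL> select address, hash_value from v$sqlarea where sql_id = '34zmr3acgz06g';

SQL> -- substitute the address and hash value returned above
SQL> exec sys.dbms_shared_pool.purge('<address>,<hash_value>', 'C')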

SQL> @flush_sql 
Enter value for sql_id: 34zmr3acgz06g
old  14:   where sql_id like '&sql_id';
new  14:   where sql_id like '34zmr3acgz06g';

PL/SQL procedure successfully completed.

SQL> select sql_id, child_number, is_obsolete, is_reoptimizable, is_resolved_adaptive_plan from v$sql where sql_id = '34zmr3acgz06g';

no rows selected

SQL> select /*+ gather_plan_statistics */ count(*) from customers where cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id = 52790;

  COUNT(*)
----------
       932

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  34zmr3acgz06g, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers where
cust_city = 'Los Angeles' and cust_state_province = 'CA' and country_id
= 52790

Plan hash value: 296924608

------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |      1 |        |      1 |00:00:00.01 |    1521 |
|   1 |  SORT AGGREGATE    |           |      1 |      1 |      1 |00:00:00.01 |    1521 |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |      1 |    953 |    932 |00:00:00.01 |    1521 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement


27 rows selected.


SQL> select /*+ gather_plan_statistics */ count(*) from customers where cust_city = 'Palmdale' and cust_state_province = 'FL' and country_id = 52790;

  COUNT(*)
----------
       250

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  dbkfpchpfwap3, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers where
cust_city = 'Palmdale' and cust_state_province = 'FL' and country_id =
52790

Plan hash value: 296924608

------------------------------------------------------------------------------------------
| Id  | Operation          | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |      1 |        |      1 |00:00:00.01 |    1521 |
|   1 |  SORT AGGREGATE    |           |      1 |      1 |      1 |00:00:00.01 |    1521 |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |      1 |    287 |    250 |00:00:00.02 |    1521 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("CUST_CITY"='Palmdale' AND "CUST_STATE_PROVINCE"='FL' AND
              "COUNTRY_ID"=52790))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement


27 rows selected.

And NOW you get the update to REPORT_COL_USAGE:

SQL> select dbms_stats.report_col_usage(user, 'CUSTOMERS') from dual;

DBMS_STATS.REPORT_COL_USAGE(USER,'CUSTOMERS')
--------------------------------------------------------------------------------
LEGEND:
.......

EQ         : Used in single table EQuality predicate
RANGE      : Used in single table RANGE predicate
LIKE       : Used in single table LIKE predicate
NULL       : Used in single table is (not) NULL predicate
EQ_JOIN    : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER     : Used in single table FILTER predicate
JOIN       : Used in JOIN predicate
GROUP_BY   : Used in GROUP BY expression
...............................................................................

###############################################################################

COLUMN USAGE REPORT FOR MARTIN.CUSTOMERS
........................................

1. COUNTRY_ID                          : EQ
2. CUST_CITY                           : EQ
3. CUST_STATE_PROVINCE                 : EQ
4. (CUST_CITY, CUST_STATE_PROVINCE,
    COUNTRY_ID)                        : FILTER
###############################################################################

From here on it's the same as in Part I of this article. You gather stats, either manually like me or automatically like Oracle would, and the end result is extended statistics on the "filter" shown in line 4 of the report.


SQL> exec dbms_stats.gather_table_stats(user, 'CUSTOMERS')

PL/SQL procedure successfully completed.

SQL> select table_name, extension_name, extension from dba_stat_extensions where owner = user;

TABLE_NAME                     EXTENSION_NAME                           EXTENSION
------------------------------ ---------------------------------------- --------------------------------------------------------------------------------
CUSTOMERS                      SYS_STSMZ$C3AIHLPBROI#SKA58H_N           ("CUST_CITY","CUST_STATE_PROVINCE","COUNTRY_ID")

Voila! Extended statistics.

There are many more interesting implications to this whole concept, which is something I'll write about in another post. It's incredibly interesting; I can't believe I'm writing optimiser posts …

PS: thanks to @Mautro for casting an eye over this article!

Posted in 12c Release 1, Performance | 1 Comment »

Little things worth knowing: automatic generation of extended statistics in 12c

Posted by Martin Bach on October 14, 2015

If you are migrating to Oracle 12c I hope you will find this post useful. I came across this feature when researching what's new with Oracle 12c (and yes, I still find lots of features I hadn't noticed before). This one is a bit hidden away in section 2.2.4.3 "Automatic Column Group Detection" of the 12c New Features Guide. And it's a lot more complex than I first thought! In this first post I'll try to show the generation of extended statistics in 12c. I am planning another post to explain how the rest of the adaptive optimisations that are new with 12c fit into the picture.

What is the motivation?

Previously, in Oracle versions before 12c, you needed to be on the lookout for candidates for correlated columns yourself. Extended statistics on groups of columns allow the optimiser to come up with better cardinality estimates if columns in a table are correlated. Instead of trying to explain the concept in depth I'd like to link to a post written by Maria Colgan. I strongly recommend you have a look at that post if you haven't used extended statistics a lot.

When putting my post together I referred to the same SH.CUSTOMERS table as Maria. The queries I am using are based on another excellent post by Maria Colgan that builds on the foundations of the one I just referenced. In fact I really needed a data set that had correlated columns, but I couldn't come up with an example that was easy to follow until I found that blog entry. I suggest you have a look at this post first as it explains a nifty tool for helping you find candidates for extended statistics in 11.2. Oracle 12c takes this concept a step further, as you will see.

My test case

My test environment is 12.1.0.2.3 on Exadata (but that shouldn’t matter), and this is how it goes. First I create a table in my schema which is an exact replica of the table in the SH schema.

SQL> create table martin.customers as select * from sh.customers;

Table created.

The next step is to tell Oracle to monitor column usage. I have to accelerate this a bit; Oracle does this over the course of the day anyway.

SQL> exec dbms_stats.seed_col_usage(null,null,300)

PL/SQL procedure successfully completed.

By calling seed_col_usage() without a SQLSET and OWNER I can record column usage system-wide for the specified time limit. The PL/SQL API documentation reads:

This procedure also records group of columns. Extensions for the recorded group of columns can be created using the CREATE_EXTENDED_STATS Function procedure. If sqlset_name and owner_name are NULL, it records the column (group) usage information for the statements executed in the system in next time_limit seconds.

That sounds like what is needed. To start with a level playing field I gather statistics again using dbms_stats.gather_table_stats(user, ‘customers’).
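In full, that call is simply:

SQL> exec dbms_stats.gather_table_stats(user, 'CUSTOMERS')

PL/SQL procedure successfully completed.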

Let there be queries

The next step is to run a few queries. Cardinality estimates are not correct: the optimiser does not "know" that L.A. can only be in California in this example. Pay attention to the number of rows returned and the cardinality estimates:

SQL> select count(*) from customers
  2  where cust_city = 'Los Angeles'
  3  and cust_state_province = 'CA'
  4  and country_id = 52790;

  COUNT(*)
----------
       932

Elapsed: 00:00:00.03
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  ap71092cqnj1y, child number 0
-------------------------------------
select count(*) from customers where cust_city = 'Los Angeles' and
cust_state_province = 'CA' and country_id = 52790

Plan hash value: 296924608

----------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |       |       |   423 (100)|          |
|   1 |  SORT AGGREGATE            |           |     1 |    26 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| CUSTOMERS |     1 |    26 |   423   (1)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))
       filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

27 rows selected.

Following advice I have been given by people who know a lot more about SQL tuning than I will ever comprehend, I gather execution statistics and then display actual and expected rows:

SQL> select /*+ gather_plan_statistics */ count(*) from customers
  2  where cust_city = 'Los Angeles'
  3  and cust_state_province = 'CA'
  4  and country_id = 52790;

  COUNT(*)
----------
       932

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  0qmjk7qm3zwkw, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers  where
cust_city = 'Los Angeles'  and cust_state_province = 'CA'  and
country_id = 52790

Plan hash value: 296924608

--------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |      1 |        |      1 |00:00:00.01 |    1521 |
|   1 |  SORT AGGREGATE            |           |      1 |      1 |      1 |00:00:00.01 |    1521 |
|*  2 |   TABLE ACCESS STORAGE FULL| CUSTOMERS |      1 |      1 |    932 |00:00:00.01 |    1521 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))
       filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

The number of actual rows in plan line 2 is quite different from what the optimiser expected it to be.

I also executed the next example from Maria’s blog post:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  5h284vjm2xtp9, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ country_id, cust_state_province,
count(*) from customers group by country_id, cust_state_province

Plan hash value: 1577413243

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |      1 |        |    145 |00:00:00.02 |    1521 |       |       |          |
|   1 |  HASH GROUP BY             |           |      1 |   1949 |    145 |00:00:00.02 |    1521 |  1015K|  1015K| 1371K (0)|
|   2 |   TABLE ACCESS STORAGE FULL| CUSTOMERS |      1 |  55500 |  55500 |00:00:00.02 |    1521 |  1025K|  1025K|          |
-----------------------------------------------------------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

Again cardinality estimates differ in plan line 1: instead of the 1949 rows the optimiser expects to be returned, there are only 145. This is a simple example, but quite often incorrect cardinality estimates lead to sub-optimal plan generation (although 12c has another new feature that tries to mitigate the effects of these incorrect cardinality estimates as well).

Column Groups

On the other hand, the column usage monitor picked up information based on the "workload". You can see this in sys.col_group_usage$:

SQL> select u.*, o.object_name, object_type
  2  from sys.col_group_usage$ u, dba_objects o
  3  where u.obj# = o.data_object_id
  4  and owner = 'MARTIN';

      OBJ# COLS                           TIMESTAMP      FLAGS OBJECT_NAME                    OBJECT_TYPE
---------- ------------------------------ --------- ---------- ------------------------------ -----------------------
    111118 11,13                          26-JUN-15         36 CUSTOMERS                      TABLE
    111118 9,11,13                        26-JUN-15         33 CUSTOMERS                      TABLE

This is of course not the way to view this information: there's an API for that.

SQL> select dbms_stats.report_col_usage(user, 'CUSTOMERS') from dual;

DBMS_STATS.REPORT_COL_USAGE(USER,'CUSTOMERS')
----------------------------------------------------------------------------------------------
LEGEND:
.......

EQ         : Used in single table EQuality predicate
RANGE      : Used in single table RANGE predicate
LIKE       : Used in single table LIKE predicate
NULL       : Used in single table is (not) NULL predicate
EQ_JOIN    : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER     : Used in single table FILTER predicate
JOIN       : Used in JOIN predicate
GROUP_BY   : Used in GROUP BY expression
...............................................................................

###############################################################################

COLUMN USAGE REPORT FOR MARTIN.CUSTOMERS
........................................

1. COUNTRY_ID                          : EQ
2. CUST_CITY                           : EQ
3. CUST_STATE_PROVINCE                 : EQ
4. (CUST_CITY, CUST_STATE_PROVINCE,
    COUNTRY_ID)                        : FILTER
5. (CUST_STATE_PROVINCE, COUNTRY_ID)   : GROUP_BY
###############################################################################

So there is a potential candidate for extended stats in line 5. So far, nothing new. In versions before 12c you had to create the column groups manually, roughly as shown in the sketch below. Again, please refer to Maria's post for a full example.
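For reference, the manual pre-12c approach would have looked along these lines (a sketch based on the documented DBMS_STATS.CREATE_EXTENDED_STATS function, not taken from Maria's post verbatim):

SQL> select dbms_stats.create_extended_stats(user, 'CUSTOMERS',
  2    '(CUST_STATE_PROVINCE, COUNTRY_ID)') from dual;

SQL> exec dbms_stats.gather_table_stats(user, 'CUSTOMERS')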

The difference in 12c

In 12c you don’t need to create these column groups manually. Consider this sequence of events:

SQL> select * from dba_stat_extensions where owner = 'MARTIN';

no rows selected

Elapsed: 00:00:00.28

SQL> exec dbms_stats.gather_table_stats(user,'CUSTOMERS')

PL/SQL procedure successfully completed.

SQL> select * from user_stat_extensions

TABLE_NAME                     EXTENSION_NAME                 EXTENSION                                          CREATO DRO
------------------------------ ------------------------------ -------------------------------------------------- ------ ---
CUSTOMERS                      SYS_STU#S#WF25Z#QAHIHE#MOFFMM_ ("CUST_STATE_PROVINCE","COUNTRY_ID")               USER   YES
CUSTOMERS                      SYS_STUMZ$C3AIHLPBROI#SKA58H_N ("CUST_CITY","CUST_STATE_PROVINCE","COUNTRY_ID")   USER   YES

2 rows selected.

SQL> SELECT column_name,
  2    num_distinct,
  3    num_buckets,
  4    histogram
  5  FROM dba_tab_col_statistics
  6  WHERE table_name = 'CUSTOMERS'
  7  AND owner        = 'MARTIN'
  8  AND column_name LIKE 'SYS%';

COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
------------------------------ ------------ ----------- ---------------
SYS_STUMZ$C3AIHLPBROI#SKA58H_N          620         254 HYBRID
SYS_STU#S#WF25Z#QAHIHE#MOFFMM_          145           1 NONE

2 rows selected.

So in the above output you can see that there are no extended stats on the CUSTOMERS table before gathering stats. There is nothing special about the stats gathering command; the preferences are the defaults (reformatted here for readability):

SQL> l
  1  declare
  2    type prefs_t is table of varchar2(150);
  3    prefs prefs_t := prefs_t(
  4      'AUTOSTATS_TARGET','CASCADE','CONCURRENT','DEGREE','ESTIMATE_PERCENT',
  5      'METHOD_OPT','NO_INVALIDATE','GRANULARITY','PUBLISH','INCREMENTAL',
  6      'INCREMENTAL_STALENESS','INCREMENTAL_LEVEL','STALE_PERCENT',
  7      'GLOBAL_TEMP_TABLE_STATS','TABLE_CACHED_BLOCKS','OPTIONS');
  8    prefs_value varchar2(100);
  9  begin
 10    for p in prefs.first .. prefs.last loop
 11      select dbms_stats.get_prefs(prefs(p), user,'CUSTOMERS') into prefs_value from dual;
 12      dbms_output.put_line(prefs(p) || ': ' || prefs_value);
 13    end loop;
 14* end;
SQL> /
AUTOSTATS_TARGET:        AUTO
CASCADE:                 DBMS_STATS.AUTO_CASCADE
CONCURRENT:              MANUAL
DEGREE:                  NULL
ESTIMATE_PERCENT:        DBMS_STATS.AUTO_SAMPLE_SIZE
METHOD_OPT:              FOR ALL COLUMNS SIZE AUTO
NO_INVALIDATE:           DBMS_STATS.AUTO_INVALIDATE
GRANULARITY:             AUTO
PUBLISH:                 TRUE
INCREMENTAL:             FALSE
INCREMENTAL_STALENESS:
INCREMENTAL_LEVEL:       PARTITION
STALE_PERCENT:           10
GLOBAL_TEMP_TABLE_STATS: SESSION
TABLE_CACHED_BLOCKS:     1
OPTIONS:                 GATHER

PL/SQL procedure successfully completed.

Did you see the extended stats have histograms? There is even one of the new fancy “hybrid” ones. The effect is noticeable:

SQL> select /*+ gather_plan_statistics */ count(*) from customers
  2   where cust_city = 'Los Angeles'
  3   and cust_state_province = 'CA'
  4   and country_id = 52790;

SQL> select * from table(dbms_xplan.display_cursor(format => 'allstats last'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------
SQL_ID  0qmjk7qm3zwkw, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers  where
cust_city = 'Los Angeles'  and cust_state_province = 'CA'  and
country_id = 52790

Plan hash value: 296924608

--------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |      1 |        |      1 |00:00:00.01 |    1521 |
|   1 |  SORT AGGREGATE            |           |      1 |      1 |      1 |00:00:00.01 |    1521 |
|*  2 |   TABLE ACCESS STORAGE FULL| CUSTOMERS |      1 |    969 |    932 |00:00:00.01 |    1521 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))
       filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

The cardinality estimate is a lot closer now. What surprised me was the SQL Plan Directive referenced. But this is material for another blog post.

Posted in 12c Release 1, Linux, Performance | 7 Comments »

Example of Full Transportable Export to create a 12c PDB

Posted by Martin Bach on October 8, 2015

The Cool Stuff

Oracle has introduced a new way of transporting data from one platform to another in 12c. The new feature is called "Full Transportable Export". This enhancement works with source databases of version 11.2.0.3 and later and is a great way to move data, as it allows for an easy-to-use combination of Export Data Pump and Transportable Tablespaces. The documentation specifically praises it as a means to move from an 11g Release 2 database into a 12c Pluggable Database. And Pluggable Databases I like :)

Unfortunately the Full Transportable Export/Import process does not perform endianness conversion for you. And in fact, if you have encrypted data to move, you cannot do so at all unless source and destination share the same endianness.

Limitations

Before we begin, here is a list of restrictions pertaining to the new Full Transportable Export, taken from the 12c Utilities Guide:

  • You must use a privileged account; at least DATAPUMP_EXP_FULL_DATABASE must be granted to the user, and that user's default tablespace must not be part of the transport set
  • Encrypted data can be transported only between platforms of the same endianness
  • Endianness conversion is not transparently taken care of: you must convert the data files yourself using dbms_file_transfer or RMAN convert (see the sketch below)
  • The export is not restartable
  • Objects to be transported must be in user-created tablespaces
  • LONG and LONG RAW objects in administrative tablespaces (SYSTEM, SYSAUX) cannot be transported across the network
  • All user-defined tablespaces must be placed in read-only mode: this is an outage!

There might be more; check the documentation for the current list.
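To illustrate the endianness point before moving on: had the source been big endian, each data file would have needed converting on the destination, along these lines (a sketch only; the staging path, platform name and output location are made up for illustration):

RMAN> convert datafile '/stage/o1_mf_users_bolbn3sn_.dbf'
2> from platform 'Solaris[tm] OE (64-bit)'
3> format '/u01/oradata/TARGET/users.dbf';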

Let’s test

In order to assess the suitability of the concept, two users have been created in an Oracle 11.2.0.3 database on Solaris/Intel. I would have liked to test the procedure with a big endian source but my last SPARC box was a Sun Blade 100 from 10 years ago that finally went to the skip, or in other words I don’t have a SPARC server available in my lab right now…

To make it more interesting I am going to migrate these accounts into a 12.1.0.2 PDB on Linux. The source database is 11.2.0.3 with the April 2015 database PSU. Here is the data creation part in case you would like to follow the demo. First the Order Entry schema, part of the Swingbench suite, is created.

oracle@solaris:~/swingbench/bin$ ./oewizard -allindexes -cl -create -cs //solaris/sol11203 -dba system \
-dbap secretpwd -ts soe_tbs -part -p soe -scale 1 -tc 2 -u soe
SwingBench Wizard
Author  :        Dominic Giles
Version :        2.5.0.971

Running in Lights Out Mode using config file : oewizard.xml
The following statement failed : GRANT EXECUTE ON dbms_lock TO soe :
   Due to : ORA-01031: insufficient privileges

============================================
|           Datagenerator Run Stats        |
============================================
Connection Time                        0:00:00.002
Data Generation Time                   0:06:53.268
DDL Creation Time                      0:03:17.259
Total Run Time                         0:10:10.533
Rows Inserted per sec                       29,294
Data Generated (MB) per sec                    2.4
Actual Rows Generated                   13,004,989

Post Creation Validation Report
===============================
The schema appears to have been created successfully.

Valid Objects
=============
Valid Tables : 'ORDERS','ORDER_ITEMS','CUSTOMERS','WAREHOUSES','ORDERENTRY_METADATA',
'INVENTORIES','PRODUCT_INFORMATION','PRODUCT_DESCRIPTIONS','ADDRESSES','CARD_DETAILS'
Valid Indexes : 'PRD_DESC_PK','PROD_NAME_IX','PRODUCT_INFORMATION_PK','PROD_SUPPLIER_IX',
'PROD_CATEGORY_IX','INVENTORY_PK','INV_PRODUCT_IX','INV_WAREHOUSE_IX','ORDER_PK',
'ORD_SALES_REP_IX','ORD_CUSTOMER_IX','ORD_ORDER_DATE_IX','ORD_WAREHOUSE_IX',
'ORDER_ITEMS_PK','ITEM_ORDER_IX','ITEM_PRODUCT_IX','WAREHOUSES_PK','WHS_LOCATION_IX',
'CUSTOMERS_PK','CUST_EMAIL_IX','CUST_ACCOUNT_MANAGER_IX','CUST_FUNC_LOWER_NAME_IX',
'ADDRESS_PK','ADDRESS_CUST_IX','CARD_DETAILS_PK','CARDDETAILS_CUST_IX'
Valid Views : 'PRODUCTS','PRODUCT_PRICES'
Valid Sequences : 'CUSTOMER_SEQ','ORDERS_SEQ','ADDRESS_SEQ','LOGON_SEQ','CARD_DETAILS_SEQ'
Valid Code : 'ORDERENTRY'
Schema Created

The Swingbench benchmark suite needs no further introduction. I have used it many times and am truly grateful to @dominic_giles for having written this for the Oracle community.

About to cause grief, deliberately

A second user is created to test all sorts of segment types and see how these are handled. All data is bundled under the MARTIN account. The idea is to test against (some of) the limitations documented for Transportable Tablespaces, which is invoked under the covers. It would be nice if segments that cannot be transported using TTS were part of the export dump; let's see if that is going to be the case. And although the transport might work without problems for me, it will do so because I tested it the way I did. It does not mean that, for example, objects using XMLDB will always be migrated successfully! As always, test, test, test … with your systems. There is nothing like a few dummy runs to hone the procedure.

Back to the user creation:

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      11.2.0.3

SQL> create user martin identified by secretpwd quota unlimited on users
  2  /

User created.

SQL> begin
  2   for i in 1..10 loop
  3    execute immediate 'create or replace procedure martin.p' || i || ' as begin null; end;';
  4   end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> select owner, object_name from dba_objects
  2  where owner = 'MARTIN'
  3* and object_name like 'P%'

OWNER                          OBJECT_NAME
------------------------------ ------------------------------
MARTIN                         P9
MARTIN                         P8
MARTIN                         P7
MARTIN                         P6
MARTIN                         P5
MARTIN                         P4
MARTIN                         P3
MARTIN                         P2
MARTIN                         P10
MARTIN                         P1

10 rows selected.

SQL> create table martin.xmltest of xmltype tablespace users;

Table created.

SQL> insert into martin.xmltest values ('<toto/>');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from martin.xmltest;

SYS_NC_ROWINFO$
--------------------------------------------------------------------------------
<toto/>

SQL> create table martin.iot_test (
  2    object_id,
  3    owner,
  4    object_name,
  5    subobject_name,
  6    object_type,
  7    constraint pk_iot_test primary key (object_id))
  8  organization index tablespace users as
  9  select object_id,owner,object_name,subobject_name,
 10  object_type from dba_objects;

Table created.

SQL> create table martin.lobtest (
  2   id number primary key,
  3   l_data clob)
  4  lob (l_data)
  5  store as securefile;

Table created.

SQL> insert into martin.lobtest values (1, 'firstlob');

1 row created.

SQL> commit;

Commit complete.

SQL> select table_name, column_name,in_row, tablespace_name
  2  from dba_lobs
  3* where owner = 'MARTIN'

TABLE_NAME           COLUMN_NAME                    IN_ TABLESPACE_NAME
-------------------- ------------------------------ --- --------------------
XMLTEST              XMLDATA                        YES USERS
LOBTEST              L_DATA                         YES USERS

SQL> create table martin.tztest (id number primary key, d timestamp with local time zone)
  2   tablespace users;

SQL> insert into martin.tztest values (1, systimestamp);

SQL> commit;

Commit complete

That's all I could think of. I might have added nested tables, but I am not aware of their use in an Oracle application outside of PL/SQL. At this point I am deliberately causing trouble, working against the limitations and recommendations taken from the documentation. The following is documented behaviour; my interest was to see how it actually works (and also to give you a clue when you search the Internet for the error messages).

The Migration

Before migrating the database you need to check if the data is self-contained. Since there are two tablespaces to be migrated, I need to check them both.


SQL> select name,PLATFORM_NAME from v$database

NAME      PLATFORM_NAME
--------- --------------------------------------------------
SOL11203  Solaris Operating System (x86-64)

SQL> exec sys.dbms_tts.transport_set_check('users,soe_tbs',true)

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected

Just as with "regular" transportable tablespaces, you need to create a directory to store the export dump, or use the default DATA_PUMP_DIR.
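If you need to create one, it is a one-liner (a sketch using the path from this example; any suitable location will do):

SQL> create directory full_tts_dir as '/export/home/oracle/full_tts';

Directory created.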

You then need to place the user-defined tablespaces into read-only mode. Essentially all non-Oracle managed tablespaces are affected, causing an outage at this point.


SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
SOE_TBS

6 rows selected.

SQL> alter tablespace users read only;

Tablespace altered.

SQL> c.users.soe_tbs
  1* alter tablespace soe_tbs read only
SQL> r
  1* alter tablespace soe_tbs read only

Tablespace altered.

The exciting bit is when you start the export. Here is a sample command:

oracle@solaris:~/full_tts$ expdp system full=y transportable=always version=12 directory=full_tts_dir \
> dumpfile=exp_full_transportable.dmp logfile=exp_full_transportable.log 

Export: Release 11.2.0.3.0 - Production on Wed Oct 7 21:04:10 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** full=y  
 transportable=always version=12 directory=full_tts_dir 
 dumpfile=exp_full_transportable.dmp 
 logfile=exp_full_transportable.log 
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE
Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 80.43 MB
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
...
. . exported "SYSTEM"."REPCAT$_REPCATLOG"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPCOLUMN"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPGROUP_PRIVS"               0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPOBJECT"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPPROP"                      0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPSCHEMA"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_RESOLUTION"                   0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_RESOLUTION_STATISTICS"        0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_RESOL_STATS_CONTROL"          0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_RUNTIME_PARMS"                0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_SITES_NEW"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_SITE_OBJECTS"                 0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_SNAPGROUP"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_OBJECTS"             0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_PARMS"               0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_REFGROUPS"           0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_SITES"               0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_TARGETS"             0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS"          0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES"             0 KB       0 rows
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"              0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /export/home/oracle/full_tts/exp_full_transportable.dmp
******************************************************************************
Datafiles required for transportable tablespace SOE_TBS:
  /u01/oradata/SOL11203/datafile/o1_mf_soe_tbs_bomhd219_.dbf
Datafiles required for transportable tablespace USERS:
  /u01/oradata/SOL11203/datafile/o1_mf_users_bolbn3sn_.dbf
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at 21:09:33

A quick check on the logfile revealed that objects owned by XDB, SYS, WMSYS, SYSTEM, APEX, SYSMAN, OLAPSYS and ORDDATA were exported. I did not see any non-Oracle-provided objects in the list of exported tables. So the import will be interesting!
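If you want to double-check which schemas made it into the dump, the logfile lends itself to a quick one-liner (hypothetical, but matching the log format shown above):

oracle@solaris:~/full_tts$ grep '^\. \. exported' exp_full_transportable.log | awk -F'"' '{print $2}' | sort -u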

The next step is to move the dump and data files across to the target system. In this example NFS is used, but any other efficient means of transportation will do.

Importing into a PDB

In preparation for the metadata import you need to either create a directory or point impdp to an existing one. In this example, FULL_TTS_DIR is used. The directory points to the NFS mount. The files stored in the directory are the data files (/u01/oradata/SOL11203/datafile/o1_mf_soe_tbs_bomhd219_.dbf and /u01/oradata/SOL11203/datafile/o1_mf_users_bolbn3sn_.dbf) plus the dump file exp_full_transportable.dmp. Since both systems are little endian no conversion is necessary at this stage. Before starting, here is some information about the destination. The PDB created for the import is named SOE_PDB.

SQL> select name,cdb, platform_name from v$database;

NAME      CDB PLATFORM_NAME
--------- --- ----------------------------------------------------------------------------------
CDB       YES Linux x86 64-bit

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MASTER                         MOUNTED

SQL> create pluggable database soe_pdb admin user pdb_admin
  2  identified by secretpwd;

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MASTER                         MOUNTED
         4 SOE_PDB                        MOUNTED

SQL> alter pluggable database SOE_PDB open ;

Pluggable database altered.

Since the database is using ASM the data files first have to be moved into ASM before they can be made known to the destination. The files originate from little endian Solaris/Intel so no conversion is needed. There are many ways to move data files to ASM, and asmcmd is one of them. The file naming convention on ASM for PDBs is diskgroupName/CDBName/PDBGUID/datafile/fileName.dbf. The GUID can be found in v$pdbs, for example.

SQL> select guid, name from v$pdbs where name = 'SOE_PDB';

GUID                             NAME
-------------------------------- ------------------------------
218B3FD3B43B146EE0531438A8C06EE5 SOE_PDB

[oracle@oraclelinux7 full_tts_dir]$ asmcmd cp /m/oracle/full_tts_dir/o1_mf_soe_tbs_bomhd219_.dbf  +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/soe_tbs.dbf
copying /m/oracle/full_tts_dir/o1_mf_soe_tbs_bomhd219_.dbf -> +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/soe_tbs.dbf
[oracle@oraclelinux7 full_tts_dir]$ asmcmd cp /m/oracle/full_tts_dir/o1_mf_users_bolbn3sn_.dbf +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/users.dbf
copying /m/oracle/full_tts_dir/o1_mf_users_bolbn3sn_.dbf -> +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/users.dbf
[oracle@oraclelinux7 full_tts_dir]$ 

Although the data files are now in ASM, they are not part of the database yet. This is done in the next step.

[oracle@oraclelinux7 full_tts_dir]$ impdp system/secretpwd@localhost/SOE_PDB full=y \
> dumpfile=exp_full_transportable.dmp directory=FULL_TTS_DIR \
> transport_datafiles= \
> '+data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/soe_tbs.dbf', \
> '+data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/users.dbf' \
> logfile=imp_full_transportable.log

Import: Release 12.1.0.2.0 - Production on Wed Oct 7 22:40:55 2015

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Source time zone is +00:00 and target time zone is -07:00.
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@localhost/SOE_PDB 
  full=y dumpfile=exp_full_transportable.dmp directory=FULL_TTS_DIR 
  transport_datafiles=+data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/soe_tbs.dbf, 
  +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/users.dbf  
  logfile=imp_full_transportable.log 
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
Processing object type DATABASE_EXPORT/TABLESPACE
ORA-39083: Object type TABLESPACE:"UNDOTBS1" failed to create with error:
ORA-01516: nonexistent log file, data file, or temporary file "/u01/oradata/SOL11203/datafile/o1_mf_undotbs1_bolbn3ro_.dbf"
Failing sql is:
 ALTER DATABASE DATAFILE '/u01/oradata/SOL11203/datafile/o1_mf_undotbs1_bolbn3ro_.dbf' RESIZE 844103680
ORA-31684: Object type TABLESPACE:"TEMP" already exists
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
...
Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
ORA-39360: Table "SOE"."ORDERS" was skipped due to transportable import and TSLTZ issues resulting from time zone mismatch.
ORA-39151: Table "ORDDATA"."ORDDCM_INTERNAL_TAGS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39151: Table "ORDDATA"."ORDDCM_DOC_TYPES" exists. All dependent metadata and data will be skipped due to table_exists_action of skip

ORA-39151: Table "OLAPSYS"."XML_LOAD_LOG" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39360: Table "MARTIN"."TZTEST" was skipped due to transportable import and TSLTZ issues resulting from time zone mismatch.
ORA-39151: Table "ORDDATA"."ORDDCM_MAPPING_DOCS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip

ORA-39083: Object type TABLE:"MARTIN"."XMLTEST" failed to create with error:
ORA-39945: Token conflicting with existing tokens.
Failing sql is:
CREATE TABLE "MARTIN"."XMLTEST" OF XMLTYPE  OID '165A17E90A5B0857E054080027F26450'  XMLTYPE STORE AS SECUREFILE BINARY XML ( TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 NOCACHE LOGGING  NOCOMPRESS  KEEP_DUPLICATES  STORAGE(SEG_FILE 4 SEG_BLOCK 177 OBJNO_REUSE 76699 INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAU
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "APEX_030200"."WWV_FLOW_PAGE_PLUGS"         3.834 MB    7416 rows


Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
ORA-39112: Dependent object type INDEX:"SOE"."ORDER_PK" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_SALES_REP_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_CUSTOMER_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_ORDER_DATE_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_WAREHOUSE_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX

The import command took a fair amount of time to complete, but this is at least partially because of the underlying hardware, which could do with 2 more cores. Or 4, actually … As expected, some errors were thrown by the process. As stated in the documentation, tables with timestamp with local time zone columns are not transportable using TTS. This is visible from the log; the subsequent errors are a result of this. You can't create an index on a non-existent table.

[oracle@oraclelinux7 full_tts_dir]$ grep SOE imp_full_transportable.log
ORA-39360: Table "SOE"."ORDERS" was skipped due to transportable import and TSLTZ issues resulting from time zone mismatch.
ORA-39082: Object type PACKAGE:"SOE"."ORDERENTRY" created with compilation warnings
ORA-39112: Dependent object type INDEX:"SOE"."ORDER_PK" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_SALES_REP_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_CUSTOMER_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_ORDER_DATE_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_WAREHOUSE_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type CONSTRAINT:"SOE"."ORDER_MODE_LOV" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type CONSTRAINT:"SOE"."ORDER_PK" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type CONSTRAINT:"SOE"."ORDER_TOTAL_MIN" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39083: Object type REF_CONSTRAINT:"SOE"."ORDER_ITEMS_ORDER_ID_FK" failed to create with error:
ALTER TABLE "SOE"."ORDER_ITEMS" ADD CONSTRAINT "ORDER_ITEMS_ORDER_ID_FK" FOREIGN KEY ("ORDER_ID") REFERENCES "SOE"."ORDERS" ("ORDER_ID") ON DELETE CASCADE ENABLE NOVALIDATE
ORA-39112: Dependent object type REF_CONSTRAINT:"SOE"."ORDERS_CUSTOMER_ID_FK" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39082: Object type PACKAGE BODY:"SOE"."ORDERENTRY" created with compilation warnings

In addition to that some of the objects created in the MARTIN schema did not survive the operation either:

ORA-31684: Object type USER:"MARTIN" already exists
ORA-39360: Table "MARTIN"."TZTEST" was skipped due to transportable import and TSLTZ issues resulting from time zone mismatch.
ORA-39083: Object type TABLE:"MARTIN"."XMLTEST" failed to create with error:
CREATE TABLE "MARTIN"."XMLTEST" OF XMLTYPE  OID '165A17E90A5B0857E054080027F26450'  XMLTYPE
 STORE AS SECUREFILE BINARY XML ( TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 NOCACHE
 LOGGING  NOCOMPRESS  KEEP_DUPLICATES  STORAGE(SEG_FILE 4 SEG_BLOCK 177 OBJNO_REUSE 76699
 INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL
 DEFAU
ORA-39112: Dependent object type CONSTRAINT:"MARTIN"."SYS_C0011162" skipped, base object
 type TABLE:"MARTIN"."TZTEST" creation failed

You can't blame Oracle for that; this is expected behaviour! On the other hand it does also imply that the process is indeed a wrapper around (cross-platform) transportable tablespaces. Two out of four tables from the MARTIN schema made it across: LOBTEST and IOT_TEST. The XML table did not, and neither did the one with the timestamp with local time zone column. When using the new Full Transportable Export/Import, be aware of what the tool can and cannot do!
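A quick way to verify what survived is to connect to the PDB and check the data dictionary (a hypothetical check, not part of the original run):

SQL> select table_name from dba_tables where owner = 'MARTIN' order by 1;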

The white paper at http://www.oracle.com/technetwork/database/enterprise-edition/full-transportable-wp-12c-1973971.pdf also states that beginning with 12c, administrative information stored in SYSTEM and SYSAUX is neither exported nor imported. This probably applies to a 12c-to-12c migration. In my log file I can see a number of account creation commands, or rather, how they failed:

[oracle@oraclelinux7 ~]$ egrep -i "object type user.*already exists" imp_full_transportable.log
ORA-31684: Object type USER:"OUTLN" already exists
ORA-31684: Object type USER:"ORDDATA" already exists
ORA-31684: Object type USER:"OLAPSYS" already exists
ORA-31684: Object type USER:"MDDATA" already exists
ORA-31684: Object type USER:"SPATIAL_WFS_ADMIN_USR" already exists
ORA-31684: Object type USER:"SPATIAL_CSW_ADMIN_USR" already exists
ORA-31684: Object type USER:"FLOWS_FILES" already exists
ORA-31684: Object type USER:"APEX_PUBLIC_USER" already exists

Time permitting I will try again with a 12c non-CDB and import into a 12c PDB.

Posted in 11g Release 2, 12c Release 1 | Tagged: | 2 Comments »

 