
Little things worth knowing: automatic generation of extended statistics in 12c

If you are migrating to Oracle 12c I hope you will find this post useful. I came across this feature when researching what's new in Oracle 12c (and yes, I still find lots of new features I hadn't noticed before). This one is a bit hidden away in section 2.2.4.3, Automatic Column Group Detection, of the 12c New Features Guide. And it's a lot more complex than I first thought! In this first post I'll try to show the generation of extended statistics in 12c. I am planning another post to explain how the rest of the adaptive optimisations that are new in 12c fit into the picture.

What is the motivation?

In Oracle versions before 12c you needed to be on the lookout for candidates for correlated columns yourself. Extended statistics on groups of columns allow the optimiser to come up with better cardinality estimates when columns in a table are correlated. Instead of trying to explain the concept in depth myself I'd like to link to a post written by Maria Colgan. I strongly recommend you have a look at it if you haven't used extended statistics much.
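
To see why this matters, remember that without a column group the optimiser treats the predicates as independent and multiplies their individual selectivities, roughly 1/NDV(CUST_CITY) x 1/NDV(CUST_STATE_PROVINCE) x 1/NDV(COUNTRY_ID), which grossly underestimates the cardinality when the columns are correlated. You can check the individual NDVs yourself; a quick sketch against the dictionary, assuming access to the DBA views:

SQL> select column_name, num_distinct
  2  from dba_tab_col_statistics
  3  where owner = 'SH'
  4  and table_name = 'CUSTOMERS'
  5  and column_name in ('CUST_CITY', 'CUST_STATE_PROVINCE', 'COUNTRY_ID');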

When putting my post together I referred to the same SH.CUSTOMERS table as Maria. The queries I am using are based on another excellent post by Maria Colgan that builds on the foundations of the one I just referenced. In fact I really needed a data set with correlated columns, and I couldn't come up with an easy-to-follow example until I found that blog entry. I suggest you have a look at that post first as it explains a nifty tool for helping you find candidates for extended statistics in 11.2. Oracle 12c takes this concept a step further, as you will see.

My test case

My test environment is 12.1.0.2.3 on Exadata (but that shouldn’t matter), and this is how it goes. First I create a table in my schema which is an exact replica of the table in the SH schema.

SQL> create table martin.customers as select * from sh.customers;

Table created.

The next step is to tell Oracle to monitor column usage. I have to accelerate this a bit here; left to its own devices, Oracle does this over the course of the day anyway.

SQL> exec dbms_stats.seed_col_usage(null,null,300)

PL/SQL procedure successfully completed.

By calling seed_col_usage() without a SQLSET and OWNER I can record column usage for the statements executed next. The PL/SQL API documentation reads:

This procedure also records group of columns. Extensions for the recorded group of columns can be created using the CREATE_EXTENDED_STATS Function procedure. If sqlset_name and owner_name are NULL, it records the column (group) usage information for the statements executed in the system in next time_limit seconds.

That sounds like what is needed. To start with a level playing field I gather statistics again using dbms_stats.gather_table_stats(user, 'customers'), spelled out below.
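
For the avoidance of doubt, that is nothing more than the plain vanilla call:

SQL> exec dbms_stats.gather_table_stats(user, 'customers')

PL/SQL procedure successfully completed.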

Let there be queries

The next step is to run a few queries. Cardinality estimates are not correct: the optimiser does not "know" that L.A. can only be in California in this example. Pay attention to the number of rows returned and the cardinality estimates:

SQL> select count(*) from customers
  2  where cust_city = 'Los Angeles'
  3  and cust_state_province = 'CA'
  4  and country_id = 52790;

  COUNT(*)
----------
       932

Elapsed: 00:00:00.03
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  ap71092cqnj1y, child number 0
-------------------------------------
select count(*) from customers where cust_city = 'Los Angeles' and
cust_state_province = 'CA' and country_id = 52790

Plan hash value: 296924608

----------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |       |       |   423 (100)|          |
|   1 |  SORT AGGREGATE            |           |     1 |    26 |            |          |
|*  2 |   TABLE ACCESS STORAGE FULL| CUSTOMERS |     1 |    26 |   423   (1)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))
       filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

27 rows selected.

Following advice I have been given by people who know a lot more about SQL tuning than I will ever comprehend, I gather execution statistics and then display actual and expected rows:

SQL> select /*+ gather_plan_statistics */ count(*) from customers
  2  where cust_city = 'Los Angeles'
  3  and cust_state_province = 'CA'
  4  and country_id = 52790;

  COUNT(*)
----------
       932

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  0qmjk7qm3zwkw, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers  where
cust_city = 'Los Angeles'  and cust_state_province = 'CA'  and
country_id = 52790

Plan hash value: 296924608

--------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |      1 |        |      1 |00:00:00.01 |    1521 |
|   1 |  SORT AGGREGATE            |           |      1 |      1 |      1 |00:00:00.01 |    1521 |
|*  2 |   TABLE ACCESS STORAGE FULL| CUSTOMERS |      1 |      1 |    932 |00:00:00.01 |    1521 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))
       filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

The number of actual rows in plan line 2 is quite different from what the optimiser expected it to be.

I also executed the next example from Maria’s blog post:

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  5h284vjm2xtp9, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ country_id, cust_state_province,
count(*) from customers group by country_id, cust_state_province

Plan hash value: 1577413243

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |      1 |        |    145 |00:00:00.02 |    1521 |       |       |          |
|   1 |  HASH GROUP BY             |           |      1 |   1949 |    145 |00:00:00.02 |    1521 |  1015K|  1015K| 1371K (0)|
|   2 |   TABLE ACCESS STORAGE FULL| CUSTOMERS |      1 |  55500 |  55500 |00:00:00.02 |    1521 |  1025K|  1025K|          |
-----------------------------------------------------------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

Again the cardinality estimates differ in plan line 1: instead of the 1,949 rows the optimiser expects, only 145 are returned. This is a simple example, but quite often incorrect cardinality estimates lead to sub-optimal plan generation (although 12c has another new feature that tries to mitigate the effects of incorrect cardinality estimates as well).

Column Groups

Meanwhile the column usage monitor has picked up information based on the "workload". You can see this in sys.col_group_usage$:

SQL> select u.*, o.object_name, object_type
  2  from sys.col_group_usage$ u, dba_objects o
  3  where u.obj# = o.data_object_id
  4  and owner = 'MARTIN';

      OBJ# COLS                           TIMESTAMP      FLAGS OBJECT_NAME                    OBJECT_TYPE
---------- ------------------------------ --------- ---------- ------------------------------ -----------------------
    111118 11,13                          26-JUN-15         36 CUSTOMERS                      TABLE
    111118 9,11,13                        26-JUN-15         33 CUSTOMERS                      TABLE

This is of course not the way you are supposed to view this information: there's an API for that.

SQL> select dbms_stats.report_col_usage(user, 'CUSTOMERS') from dual;

DBMS_STATS.REPORT_COL_USAGE(USER,'CUSTOMERS')
----------------------------------------------------------------------------------------------
LEGEND:
.......

EQ         : Used in single table EQuality predicate
RANGE      : Used in single table RANGE predicate
LIKE       : Used in single table LIKE predicate
NULL       : Used in single table is (not) NULL predicate
EQ_JOIN    : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER     : Used in single table FILTER predicate
JOIN       : Used in JOIN predicate
GROUP_BY   : Used in GROUP BY expression
...............................................................................

###############################################################################

COLUMN USAGE REPORT FOR MARTIN.CUSTOMERS
........................................

1. COUNTRY_ID                          : EQ
2. CUST_CITY                           : EQ
3. CUST_STATE_PROVINCE                 : EQ
4. (CUST_CITY, CUST_STATE_PROVINCE,
    COUNTRY_ID)                        : FILTER
5. (CUST_STATE_PROVINCE, COUNTRY_ID)   : GROUP_BY
###############################################################################

So there are potential candidates for extended stats in lines 4 and 5. So far, nothing new. In versions before 12c you had to create the column groups manually; again, please refer to Maria's post for an example, or the sketch below.
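
For reference, the manual step looks roughly like this; a sketch using the documented dbms_stats.create_extended_stats call (the system-generated extension name will differ on your system, and a subsequent gather_table_stats call is needed to populate statistics for the new virtual column):

SQL> select dbms_stats.create_extended_stats(user, 'CUSTOMERS',
  2    '(CUST_STATE_PROVINCE, COUNTRY_ID)') as extension_name
  3  from dual;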

The difference in 12c

In 12c you don’t need to create these column groups manually. Consider this sequence of events:

SQL> select * from dba_stat_extensions where owner = 'MARTIN';

no rows selected

Elapsed: 00:00:00.28

SQL> exec dbms_stats.gather_table_stats(user,'CUSTOMERS')

PL/SQL procedure successfully completed.

SQL> select * from user_stat_extensions

TABLE_NAME                     EXTENSION_NAME                 EXTENSION                                          CREATO DRO
------------------------------ ------------------------------ -------------------------------------------------- ------ ---
CUSTOMERS                      SYS_STU#S#WF25Z#QAHIHE#MOFFMM_ ("CUST_STATE_PROVINCE","COUNTRY_ID")               USER   YES
CUSTOMERS                      SYS_STUMZ$C3AIHLPBROI#SKA58H_N ("CUST_CITY","CUST_STATE_PROVINCE","COUNTRY_ID")   USER   YES

2 rows selected.

SQL> SELECT column_name,
  2    num_distinct,
  3    num_buckets,
  4    histogram
  5  FROM dba_tab_col_statistics
  6  WHERE table_name = 'CUSTOMERS'
  7  AND owner        = 'MARTIN'
  8  AND column_name LIKE 'SYS%';

COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
------------------------------ ------------ ----------- ---------------
SYS_STUMZ$C3AIHLPBROI#SKA58H_N          620         254 HYBRID
SYS_STU#S#WF25Z#QAHIHE#MOFFMM_          145           1 NONE

2 rows selected.

So in the above output you can see that there are no extended stats on the CUSTOMERS table before gathering stats. There is nothing special about the stats gathering command: the preferences are all at their default values (reformatted here for readability).

SQL> l
  1  declare
  2    type prefs_t is table of varchar2(150);
  3    prefs prefs_t := prefs_t(
  4      'AUTOSTATS_TARGET','CASCADE','CONCURRENT','DEGREE','ESTIMATE_PERCENT',
  5      'METHOD_OPT','NO_INVALIDATE','GRANULARITY','PUBLISH','INCREMENTAL',
  6      'INCREMENTAL_STALENESS','INCREMENTAL_LEVEL','STALE_PERCENT',
  7      'GLOBAL_TEMP_TABLE_STATS','TABLE_CACHED_BLOCKS','OPTIONS');
  8    prefs_value varchar2(100);
  9  begin
 10    for p in prefs.first .. prefs.last loop
 11      select dbms_stats.get_prefs(prefs(p), user,'CUSTOMERS') into prefs_value from dual;
 12      dbms_output.put_line(prefs(p) || ': ' || prefs_value);
 13    end loop;
 14* end;
SQL> /
AUTOSTATS_TARGET:        AUTO
CASCADE:                 DBMS_STATS.AUTO_CASCADE
CONCURRENT:              MANUAL
DEGREE:                  NULL
ESTIMATE_PERCENT:        DBMS_STATS.AUTO_SAMPLE_SIZE
METHOD_OPT:              FOR ALL COLUMNS SIZE AUTO
NO_INVALIDATE:           DBMS_STATS.AUTO_INVALIDATE
GRANULARITY:             AUTO
PUBLISH:                 TRUE
INCREMENTAL:             FALSE
INCREMENTAL_STALENESS:
INCREMENTAL_LEVEL:       PARTITION
STALE_PERCENT:           10
GLOBAL_TEMP_TABLE_STATS: SESSION
TABLE_CACHED_BLOCKS:     1
OPTIONS:                 GATHER

PL/SQL procedure successfully completed.

Did you notice that the extended stats have histograms? There is even one of the fancy new "hybrid" ones. The effect is noticeable:

SQL> select /*+ gather_plan_statistics */ count(*) from customers
  2   where cust_city = 'Los Angeles'
  3   and cust_state_province = 'CA'
  4   and country_id = 52790;

SQL> select * from table(dbms_xplan.display_cursor(format => 'allstats last'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------
SQL_ID  0qmjk7qm3zwkw, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from customers  where
cust_city = 'Los Angeles'  and cust_state_province = 'CA'  and
country_id = 52790

Plan hash value: 296924608

--------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |      1 |        |      1 |00:00:00.01 |    1521 |
|   1 |  SORT AGGREGATE            |           |      1 |      1 |      1 |00:00:00.01 |    1521 |
|*  2 |   TABLE ACCESS STORAGE FULL| CUSTOMERS |      1 |    969 |    932 |00:00:00.01 |    1521 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))
       filter(("CUST_CITY"='Los Angeles' AND "CUST_STATE_PROVINCE"='CA' AND
              "COUNTRY_ID"=52790))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

The cardinality estimate is a lot closer now. What surprised me was the SQL Plan Directive referenced in the Note section.
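
If you would like a peek at the directive referenced in the Note section, the 12c dictionary exposes it; a small sketch, assuming sufficient privileges (freshly created directives may only appear after dbms_spd.flush_sql_plan_directive has written them to disk):

SQL> exec dbms_spd.flush_sql_plan_directive

SQL> select d.directive_id, o.object_name, d.type, d.state, d.reason
  2  from dba_sql_plan_directives d, dba_sql_plan_dir_objects o
  3  where d.directive_id = o.directive_id
  4  and o.owner = 'MARTIN';

But how SQL Plan Directives come into existence and how they interact with the other adaptive features is material for another blog post.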


Example of Full Transportable Export to create a 12c PDB

The Cool Stuff

Oracle has introduced a new way of transporting data from one platform to another in 12c. The new feature is called "Full Transportable Export". This enhancement works with source databases from 11.2.0.3 onwards and is a great way to move data, as it allows for an easy-to-use combination of Export Data Pump and Transportable Tablespaces. The documentation specifically praises it as a means to move from an 11g Release 2 database into a 12c Pluggable Database. And Pluggable Databases I like :)

Unfortunately the Full Transportable Export/Import process does not perform endianness conversion for you. And in fact, if you have encrypted data to move, you cannot do so at all unless both platforms are of the same endianness.

Limitations

Before we begin, here is a list of restrictions pertaining to the new Full Transportable Export, taken from the 12c Utilities Guide:

  • You must use a privileged account; at least DATAPUMP_EXP_FULL_DATABASE must be granted to the user, and that user's default tablespace must not be part of the transport set
  • Encrypted data can be transported only between platforms of the same endianness
  • Endianness conversion is not transparently taken care of: you must convert the data files yourself using dbms_file_transfer or RMAN CONVERT (a query to check platform endianness follows this list)
  • Export is not restartable
  • Objects to be transported must be on user-created tablespaces
  • LONG and LONG RAW objects in administrative tablespaces (SYSTEM, SYSAUX) cannot be transported across the network
  • All user-defined tablespaces must be placed in read-only mode: this is an outage!

There might be more; check the documentation for the current list.
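
A quick way of checking which endian format source and destination use, and therefore whether a conversion step is needed at all, is v$transportable_platform:

SQL> select platform_id, platform_name, endian_format
  2  from v$transportable_platform
  3  order by platform_name;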

Let’s test

In order to assess the suitability of the concept, two users have been created in an Oracle 11.2.0.3 database on Solaris/Intel. I would have liked to test the procedure with a big endian source but my last SPARC box was a Sun Blade 100 from 10 years ago that finally went to the skip, or in other words I don’t have a SPARC server available in my lab right now…

To make it more interesting I am going to migrate these accounts into a 12.1.0.2 PDB on Linux. The source database is 11.2.0.3 with the April 2015 database PSU. Here is the data creation part in case you would like to follow the demo. First the Order Entry schema, part of the Swingbench suite, is created.

oracle@solaris:~/swingbench/bin$ ./oewizard -allindexes -cl -create -cs //solaris/sol11203 -dba system \
-dbap secretpwd -ts soe_tbs -part -p soe -scale 1 -tc 2 -u soe
SwingBench Wizard
Author  :        Dominic Giles
Version :        2.5.0.971

Running in Lights Out Mode using config file : oewizard.xml
The following statement failed : GRANT EXECUTE ON dbms_lock TO soe :
   Due to : ORA-01031: insufficient privileges

============================================
|           Datagenerator Run Stats        |
============================================
Connection Time                        0:00:00.002
Data Generation Time                   0:06:53.268
DDL Creation Time                      0:03:17.259
Total Run Time                         0:10:10.533
Rows Inserted per sec                       29,294
Data Generated (MB) per sec                    2.4
Actual Rows Generated                   13,004,989

Post Creation Validation Report
===============================
The schema appears to have been created successfully.

Valid Objects
=============
Valid Tables : 'ORDERS','ORDER_ITEMS','CUSTOMERS','WAREHOUSES','ORDERENTRY_METADATA',
'INVENTORIES','PRODUCT_INFORMATION','PRODUCT_DESCRIPTIONS','ADDRESSES','CARD_DETAILS'
Valid Indexes : 'PRD_DESC_PK','PROD_NAME_IX','PRODUCT_INFORMATION_PK','PROD_SUPPLIER_IX',
'PROD_CATEGORY_IX','INVENTORY_PK','INV_PRODUCT_IX','INV_WAREHOUSE_IX','ORDER_PK',
'ORD_SALES_REP_IX','ORD_CUSTOMER_IX','ORD_ORDER_DATE_IX','ORD_WAREHOUSE_IX',
'ORDER_ITEMS_PK','ITEM_ORDER_IX','ITEM_PRODUCT_IX','WAREHOUSES_PK','WHS_LOCATION_IX',
'CUSTOMERS_PK','CUST_EMAIL_IX','CUST_ACCOUNT_MANAGER_IX','CUST_FUNC_LOWER_NAME_IX',
'ADDRESS_PK','ADDRESS_CUST_IX','CARD_DETAILS_PK','CARDDETAILS_CUST_IX'
Valid Views : 'PRODUCTS','PRODUCT_PRICES'
Valid Sequences : 'CUSTOMER_SEQ','ORDERS_SEQ','ADDRESS_SEQ','LOGON_SEQ','CARD_DETAILS_SEQ'
Valid Code : 'ORDERENTRY'
Schema Created

The Swingbench benchmark suite does not need introducing any more. I have used it many times and am truly grateful to @dominic_giles for having written it for the Oracle community.

About to cause grief, deliberately

A second user is created to test all sorts of segment types and to see how these are handled. All of that data is bundled under the MARTIN account. The idea is to test against (some of) the limitations documented for Transportable Tablespaces, which is invoked under the covers. It would be nice if segments that cannot be transported using TTS were part of the export dump: let's see if that is going to be the case. And although the transport might work without problems for me, it will do so only because I tested it the way I did. It does not mean that, for example, objects using XMLDB will always be migrated successfully! As always: test, test, test with your systems. There is nothing like a few dummy runs to hone the procedures.

Back to the user creation:

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      11.2.0.3

SQL> create user martin identified by secretpwd quota unlimited on users
  2  /

User created.

SQL> begin
  2   for i in 1..10 loop
  3    execute immediate 'create or replace procedure martin.p' || i || ' as begin null; end;';
  4   end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> select owner, object_name from dba_objects
  2  where owner = 'MARTIN'
  3* and object_name like 'P%'

OWNER                          OBJECT_NAME
------------------------------ ------------------------------
MARTIN                         P9
MARTIN                         P8
MARTIN                         P7
MARTIN                         P6
MARTIN                         P5
MARTIN                         P4
MARTIN                         P3
MARTIN                         P2
MARTIN                         P10
MARTIN                         P1

10 rows selected.

SQL> create table martin.xmltest of xmltype tablespace users;

Table created.

SQL> insert into martin.xmltest values ('<toto/>');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from martin.xmltest;

SYS_NC_ROWINFO$
--------------------------------------------------------------------------------
<toto/>

SQL> create table martin.iot_test (
  2    object_id,
  3    owner,
  4    object_name,
  5    subobject_name,
  6    object_type,
  7    constraint pk_iot_test primary key (object_id))
  8  organization index tablespace users as
  9  select object_id,owner,object_name,subobject_name,
 10  object_type from dba_objects;

Table created.

SQL> create table martin.lobtest (
  2   id number primary key,
  3   l_data clob)
  4  lob (l_data)
  5  store as securefile;

Table created.

SQL> insert into martin.lobtest values (1, 'firstlob');

1 row created.

SQL> commit;

Commit complete.

SQL> select table_name, column_name,in_row, tablespace_name
  2  from dba_lobs
  3* where owner = 'MARTIN'

TABLE_NAME           COLUMN_NAME                    IN_ TABLESPACE_NAME
-------------------- ------------------------------ --- --------------------
XMLTEST              XMLDATA                        YES USERS
LOBTEST              L_DATA                         YES USERS

SQL> create table martin.tztest (id number primary key, d timestamp with local time zone)
  2   tablespace users;

SQL> insert into martin.tztest values (1, systimestamp);

SQL> commit;

Commit complete

That's all I could think of. I could have added nested tables, but I am not aware of their use in an Oracle application outside of PL/SQL. At this point I am deliberately causing trouble, working against the limitations and recommendations taken from the documentation. What follows is documented behaviour; my interest was to see how it actually plays out (and also to give you something to find when you search the Internet for the error messages).

The Migration

Before migrating the database you need to check if the data is self-contained. Since there are two tablespaces to be migrated, I need to check them both.


SQL> select name,PLATFORM_NAME from v$database

NAME      PLATFORM_NAME
--------- --------------------------------------------------
SOL11203  Solaris Operating System (x86-64)

SQL> exec sys.dbms_tts.transport_set_check('users,soe_tbs',true)

PL/SQL procedure successfully completed.

SQL> select * from transport_set_violations;

no rows selected

Just as with "regular" transportable tablespaces, you need a directory object to store the export dump in, or you can use the default DATA_PUMP_DIR; a sketch follows.
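
A minimal sketch, assuming the dump should end up in /export/home/oracle/full_tts as in this example (the explicit grant is only needed if the exporting user is not otherwise privileged):

SQL> create directory full_tts_dir as '/export/home/oracle/full_tts';

Directory created.

SQL> grant read, write on directory full_tts_dir to system;

Grant succeeded.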

You then need to place the user-defined tablespaces into read-only mode. Essentially all tablespaces apart from the Oracle-supplied ones are affected, causing an outage at this point.


SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
SOE_TBS

6 rows selected.

SQL> alter tablespace users read only;

Tablespace altered.

SQL> c.users.soe_tbs
  1* alter tablespace soe_tbs read only
SQL> r
  1* alter tablespace soe_tbs read only

Tablespace altered.

The exciting bit is when you start the export. Here is a sample command:

oracle@solaris:~/full_tts$ expdp system full=y transportable=always version=12 directory=full_tts_dir \
> dumpfile=exp_full_transportable.dmp logfile=exp_full_transportable.log 

Export: Release 11.2.0.3.0 - Production on Wed Oct 7 21:04:10 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password: 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** full=y  
 transportable=always version=12 directory=full_tts_dir 
 dumpfile=exp_full_transportable.dmp 
 logfile=exp_full_transportable.log 
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE
Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 80.43 MB
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
...
. . exported "SYSTEM"."REPCAT$_REPCATLOG"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPCOLUMN"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPGROUP_PRIVS"               0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPOBJECT"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPPROP"                      0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_REPSCHEMA"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_RESOLUTION"                   0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_RESOLUTION_STATISTICS"        0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_RESOL_STATS_CONTROL"          0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_RUNTIME_PARMS"                0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_SITES_NEW"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_SITE_OBJECTS"                 0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_SNAPGROUP"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_OBJECTS"             0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_PARMS"               0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_REFGROUPS"           0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_SITES"               0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_TARGETS"             0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS"          0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES"             0 KB       0 rows
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"              0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /export/home/oracle/full_tts/exp_full_transportable.dmp
******************************************************************************
Datafiles required for transportable tablespace SOE_TBS:
  /u01/oradata/SOL11203/datafile/o1_mf_soe_tbs_bomhd219_.dbf
Datafiles required for transportable tablespace USERS:
  /u01/oradata/SOL11203/datafile/o1_mf_users_bolbn3sn_.dbf
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at 21:09:33

A quick check of the logfile revealed that objects owned by XDB, SYS, WMSYS, SYSTEM, APEX, SYSMAN, OLAPSYS and ORDDATA were exported. I did not see any non-Oracle-provided objects in the list of exported tables: the user data travels inside the data files instead. So the import will be interesting! A quick sanity check along these lines:
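
A sketch of such a check: pull the schema names out of the per-table lines of the export log and de-duplicate them.

oracle@solaris:~/full_tts$ grep '^\. \. exported' exp_full_transportable.log | \
> awk -F'"' '{ print $2 }' | sort -u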

The next step is to move the dump and data files across to the target system. In this example NFS is used, but any other efficient means of transportation will do.
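
If NFS is not an option, something as simple as scp works just as well; a sketch assuming SSH connectivity between the hosts and the paths used in this example:

[oracle@oraclelinux7 ~]$ scp oracle@solaris:/export/home/oracle/full_tts/exp_full_transportable.dmp /m/oracle/full_tts_dir/
[oracle@oraclelinux7 ~]$ scp oracle@solaris:/u01/oradata/SOL11203/datafile/o1_mf_soe_tbs_bomhd219_.dbf /m/oracle/full_tts_dir/
[oracle@oraclelinux7 ~]$ scp oracle@solaris:/u01/oradata/SOL11203/datafile/o1_mf_users_bolbn3sn_.dbf /m/oracle/full_tts_dir/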

Importing into a PDB

In preparation for the metadata import you need to either create a directory or point impdp to an existing one. In this example, FULL_TTS_DIR is used; the directory points to the NFS mount. The files stored in it include the two data files (o1_mf_soe_tbs_bomhd219_.dbf and o1_mf_users_bolbn3sn_.dbf) plus the dump file exp_full_transportable.dmp. Since both systems are little endian no conversion is necessary at this stage. Before starting, here is some information about the destination. The PDB created for the import is named SOE_PDB.

SQL> select name,cdb, platform_name from v$database;

NAME      CDB PLATFORM_NAME
--------- --- ----------------------------------------------------------------------------------
CDB       YES Linux x86 64-bit

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MASTER                         MOUNTED

SQL> create pluggable database soe_pdb admin user pdb_admin
  2  identified by secretpwd;

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MASTER                         MOUNTED
         4 SOE_PDB                        MOUNTED

SQL> alter pluggable database SOE_PDB open ;

Pluggable database altered.

Since the database is using ASM the data files first have to be moved into ASM before they can be made known to the destination. The files originate from little endian Solaris/Intel so no conversion is needed. There are many ways to move data files into ASM, and asmcmd is one of them. The file naming convention on ASM for PDBs is diskgroupName/CDBName/PDBGUID/datafile/fileName.dbf. The GUID can be found in v$pdbs, for example.

SQL> select guid, name from v$pdbs where name = 'SOE_PDB';

GUID                             NAME
-------------------------------- ------------------------------
218B3FD3B43B146EE0531438A8C06EE5 SOE_PDB

[oracle@oraclelinux7 full_tts_dir]$ asmcmd cp /m/oracle/full_tts_dir/o1_mf_soe_tbs_bomhd219_.dbf  +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/soe_tbs.dbf
copying /m/oracle/full_tts_dir/o1_mf_soe_tbs_bomhd219_.dbf -> +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/soe_tbs.dbf
[oracle@oraclelinux7 full_tts_dir]$ asmcmd cp /m/oracle/full_tts_dir/o1_mf_users_bolbn3sn_.dbf +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/users.dbf
copying /m/oracle/full_tts_dir/o1_mf_users_bolbn3sn_.dbf -> +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/users.dbf
[oracle@oraclelinux7 full_tts_dir]$ 

Although the data files are now in ASM, they are not part of the database yet. This is done in the next step.

[oracle@oraclelinux7 full_tts_dir]$ impdp system/secretpwd@localhost/SOE_PDB full=y \
> dumpfile=exp_full_transportable.dmp directory=FULL_TTS_DIR \
> transport_datafiles= \
> '+data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/soe_tbs.dbf', \
> '+data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/users.dbf' \
> logfile=imp_full_transportable.log

Import: Release 12.1.0.2.0 - Production on Wed Oct 7 22:40:55 2015

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Source time zone is +00:00 and target time zone is -07:00.
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/********@localhost/SOE_PDB 
  full=y dumpfile=exp_full_transportable.dmp directory=FULL_TTS_DIR 
  transport_datafiles=+data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/soe_tbs.dbf, 
  +data/cdb/218B3FD3B43B146EE0531438A8C06EE5/datafile/users.dbf  
  logfile=imp_full_transportable.log 
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
Processing object type DATABASE_EXPORT/TABLESPACE
ORA-39083: Object type TABLESPACE:"UNDOTBS1" failed to create with error:
ORA-01516: nonexistent log file, data file, or temporary file "/u01/oradata/SOL11203/datafile/o1_mf_undotbs1_bolbn3ro_.dbf"
Failing sql is:
 ALTER DATABASE DATAFILE '/u01/oradata/SOL11203/datafile/o1_mf_undotbs1_bolbn3ro_.dbf' RESIZE 844103680
ORA-31684: Object type TABLESPACE:"TEMP" already exists
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
...
Processing object type DATABASE_EXPORT/NORMAL_POST_INSTANCE_IMPCALLOU/MARKER
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
ORA-39360: Table "SOE"."ORDERS" was skipped due to transportable import and TSLTZ issues resulting from time zone mismatch.
ORA-39151: Table "ORDDATA"."ORDDCM_INTERNAL_TAGS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39151: Table "ORDDATA"."ORDDCM_DOC_TYPES" exists. All dependent metadata and data will be skipped due to table_exists_action of skip

ORA-39151: Table "OLAPSYS"."XML_LOAD_LOG" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39360: Table "MARTIN"."TZTEST" was skipped due to transportable import and TSLTZ issues resulting from time zone mismatch.
ORA-39151: Table "ORDDATA"."ORDDCM_MAPPING_DOCS" exists. All dependent metadata and data will be skipped due to table_exists_action of skip

ORA-39083: Object type TABLE:"MARTIN"."XMLTEST" failed to create with error:
ORA-39945: Token conflicting with existing tokens.
Failing sql is:
CREATE TABLE "MARTIN"."XMLTEST" OF XMLTYPE  OID '165A17E90A5B0857E054080027F26450'  XMLTYPE STORE AS SECUREFILE BINARY XML ( TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 NOCACHE LOGGING  NOCOMPRESS  KEEP_DUPLICATES  STORAGE(SEG_FILE 4 SEG_BLOCK 177 OBJNO_REUSE 76699 INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL DEFAU
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "APEX_030200"."WWV_FLOW_PAGE_PLUGS"         3.834 MB    7416 rows


Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
ORA-39112: Dependent object type INDEX:"SOE"."ORDER_PK" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_SALES_REP_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_CUSTOMER_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_ORDER_DATE_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_WAREHOUSE_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX

The import command took a fair amount of time to complete, at least partially because of the underlying hardware, which could do with 2 more cores. Or 4, actually … As expected, some errors were thrown by the process. As stated in the documentation, tables with TIMESTAMP WITH LOCAL TIME ZONE columns are not transportable using TTS when the time zones differ. This is visible in the log, and the subsequent errors are a result of it: you can't create an index on a non-existent table.

[oracle@oraclelinux7 full_tts_dir]$ grep SOE imp_full_transportable.log
ORA-39360: Table "SOE"."ORDERS" was skipped due to transportable import and TSLTZ issues resulting from time zone mismatch.
ORA-39082: Object type PACKAGE:"SOE"."ORDERENTRY" created with compilation warnings
ORA-39112: Dependent object type INDEX:"SOE"."ORDER_PK" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_SALES_REP_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_CUSTOMER_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_ORDER_DATE_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"SOE"."ORD_WAREHOUSE_IX" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type CONSTRAINT:"SOE"."ORDER_MODE_LOV" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type CONSTRAINT:"SOE"."ORDER_PK" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39112: Dependent object type CONSTRAINT:"SOE"."ORDER_TOTAL_MIN" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39083: Object type REF_CONSTRAINT:"SOE"."ORDER_ITEMS_ORDER_ID_FK" failed to create with error:
ALTER TABLE "SOE"."ORDER_ITEMS" ADD CONSTRAINT "ORDER_ITEMS_ORDER_ID_FK" FOREIGN KEY ("ORDER_ID") REFERENCES "SOE"."ORDERS" ("ORDER_ID") ON DELETE CASCADE ENABLE NOVALIDATE
ORA-39112: Dependent object type REF_CONSTRAINT:"SOE"."ORDERS_CUSTOMER_ID_FK" skipped, base object type TABLE:"SOE"."ORDERS" creation failed
ORA-39082: Object type PACKAGE BODY:"SOE"."ORDERENTRY" created with compilation warnings

In addition to that, some of the objects created in the MARTIN schema did not survive the operation either:

ORA-31684: Object type USER:"MARTIN" already exists
ORA-39360: Table "MARTIN"."TZTEST" was skipped due to transportable import and TSLTZ issues resulting from time zone mismatch.
ORA-39083: Object type TABLE:"MARTIN"."XMLTEST" failed to create with error:
CREATE TABLE "MARTIN"."XMLTEST" OF XMLTYPE  OID '165A17E90A5B0857E054080027F26450'  XMLTYPE
 STORE AS SECUREFILE BINARY XML ( TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 NOCACHE
 LOGGING  NOCOMPRESS  KEEP_DUPLICATES  STORAGE(SEG_FILE 4 SEG_BLOCK 177 OBJNO_REUSE 76699
 INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 BUFFER_POOL
 DEFAU
ORA-39112: Dependent object type CONSTRAINT:"MARTIN"."SYS_C0011162" skipped, base object
 type TABLE:"MARTIN"."TZTEST" creation failed

You can't blame Oracle for that: this is expected behaviour! On the other hand it also implies that the process is indeed a wrapper around (cross-platform) transportable tablespaces. Two out of four tables from the MARTIN schema made it across: LOBTEST and IOT_TEST. The XML table did not, and neither did the one with the TIMESTAMP WITH LOCAL TIME ZONE column. When using the new Full Transportable Export/Import, be aware of what the tool can and cannot do, and plan a second pass for the leftovers!
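
One way of dealing with the leftovers is a conventional (non-transportable) Data Pump export of just the skipped tables; a conventional data load converts the TSLTZ data instead of skipping it. This is a sketch I have not run as part of this test, so treat it as a starting point only:

oracle@solaris:~/full_tts$ expdp system directory=full_tts_dir dumpfile=leftovers.dmp \
> tables=SOE.ORDERS,MARTIN.TZTEST,MARTIN.XMLTEST

[oracle@oraclelinux7 full_tts_dir]$ impdp system/secretpwd@localhost/SOE_PDB directory=FULL_TTS_DIR \
> dumpfile=leftovers.dmp logfile=imp_leftovers.log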

The white paper at http://www.oracle.com/technetwork/database/enterprise-edition/full-transportable-wp-12c-1973971.pdf also states that beginning with 12c administrative information stored in SYSTEM and SYSAUX is neither exported nor imported. This probably applies to a 12c to 12c migration. In my log file I can see a number of account creation attempts, or rather, how they failed:

[oracle@oraclelinux7 ~]$ egrep -i "object type user.*already exists" imp_full_transportable.log
ORA-31684: Object type USER:"OUTLN" already exists
ORA-31684: Object type USER:"ORDDATA" already exists
ORA-31684: Object type USER:"OLAPSYS" already exists
ORA-31684: Object type USER:"MDDATA" already exists
ORA-31684: Object type USER:"SPATIAL_WFS_ADMIN_USR" already exists
ORA-31684: Object type USER:"SPATIAL_CSW_ADMIN_USR" already exists
ORA-31684: Object type USER:"FLOWS_FILES" already exists
ORA-31684: Object type USER:"APEX_PUBLIC_USER" already exists

Time permitting I will try again with a 12c non-CDB and import into a 12c PDB.