Little things worth knowing: exp/imp vs expdp and impdp for HCC in Exadata

Do you know the difference between exp/imp and expdp/impdp when it comes to importing HCC compressed data in Exadata?

If not, then follow me through two examples. This is on 11.2.0.3/11.2.3.3.1 but applies to all database releases you can have on Exadata. The task at hand is to export a table (which happens to be non-partitioned and HCC compressed for query high) and import it into a different user’s schema. This is quite a common approach when migrating data from a non-Exadata system into an Exadata system. You could for example pre-create the DDL for the tables and implement HCC before even importing a single row. When importing the data, the partitions’ HCC attributes will be honoured and data will be inserted compressed. Or won’t it?
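
To make the pre-create approach concrete, here is a minimal, hedged sketch. The column list is purely illustrative and not taken from the actual table; the point is that the target table is created with its HCC attribute first, and the import then loads into the existing segment.

-- illustrative only: pre-create the target table with HCC enabled
create table imptest.t1_qh (
  id   number,
  name varchar2(128)
)
compress for query high;

-- the subsequent import would then append into the pre-created segment,
-- e.g. impdp ... table_exists_action=append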

The table

The table I want to export resides on our V2 system. Since I am (trying to be) a good citizen I want to use dbfs for the dump files. Beginning with 12.1.0.2 Grid Infrastructure you can also have ACFS by the way. Let’s start by creating the directory needed and some meta-information about the source table:

SQL> create or replace directory DBM01 as '/dbfs_direct/FS1/mbach/';

Directory created.

SQL> select owner, table_name, partitioned, num_rows, compress_for from dba_tables where table_name = 'T1_QH';

OWNER                          TABLE_NAME                     PAR   NUM_ROWS COMPRESS_FOR
------------------------------ ------------------------------ --- ---------- ------------
MARTIN                         T1_QH                          NO             QUERY HIGH

SQL> select bytes/power(1024,2) m, blocks from dba_segments where segment_name = 'T1_QH';

         M     BLOCKS
---------- ----------
        72       9216

The table is 72 MB in size, and HCC compressed (I actually ensured that it was by issuing an “alter table t1_qh move;” before the export started).
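
If you want to verify the compression type of the actual rows rather than relying on the dictionary, a hedged sketch using DBMS_COMPRESSION looks like this (in the 11.2 package constants a result of 4 corresponds to COMP_FOR_QUERY_HIGH):

-- spot-check the compression type of a sample of rows;
-- 4 (COMP_FOR_QUERY_HIGH) confirms the rows are HCC compressed
select dbms_compression.get_compression_type('MARTIN', 'T1_QH', rowid) as comp_type,
       count(*) as num_rows
  from martin.t1_qh
 where rownum <= 1000
 group by dbms_compression.get_compression_type('MARTIN', 'T1_QH', rowid);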

Data Pump

The first export uses expdp, followed by impdp to get the data back. I am remapping the schema so I don’t have to risk overwriting my source data.

[enkdb01:oracle:DBM011] /home/oracle/mbach
> expdp martin/xxxxxx directory=DBM01 dumpfile=exp_t1_qh.dmp logfile=exp_t1_qh.log tables=t1_qh

Export: Release 11.2.0.3.0 - Production on Tue Mar 24 06:15:34 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "MARTIN"."SYS_EXPORT_TABLE_02":  martin/******** directory=DBM01 dumpfile=exp_t1_qh.dmp logfile=exp_t1_qh.log tables=t1_qh
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 72 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MARTIN"."T1_QH"                            9.675 GB 10000000 rows
Master table "MARTIN"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded
******************************************************************************
Dump file set for MARTIN.SYS_EXPORT_TABLE_02 is:
  /dbfs_direct/FS1/mbach/exp_t1_qh.dmp
Job "MARTIN"."SYS_EXPORT_TABLE_02" successfully completed at 06:17:16

The interesting bit here is that the table on disk occupies around 72 MB, and yet expdp tells me the 10000000 rows occupy 9.7 GB. Can anyone guess why?

[enkdb01:oracle:DBM011] /home/oracle/mbach
> ls -lh /dbfs_direct/FS1/mbach/exp_t1_qh.dmp
-rw-r----- 1 oracle dba 9.7G Mar 24 06:17 /dbfs_direct/FS1/mbach/exp_t1_qh.dmp

Yes, nearly 10 GB. The data apparently is not exported in its compressed form. Now this table is going to be imported:

[enkdb01:oracle:DBM011] /home/oracle/mbach
> impdp imptest/xxxxxx directory=DBM01 dumpfile=exp_t1_qh.dmp logfile=imp_t1_qh.log \
> tables=martin.t1_qh remap_schema=martin:imptest 

Import: Release 11.2.0.3.0 - Production on Tue Mar 24 06:23:44 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "IMPTEST"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "IMPTEST"."SYS_IMPORT_TABLE_01":  imptest/******** directory=DBM01
  dumpfile=exp_t1_qh.dmp logfile=imp_t1_qh.log tables=martin.t1_qh remap_schema=martin:imptest
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "IMPTEST"."T1_QH"                           9.675 GB 10000000 rows
Job "IMPTEST"."SYS_IMPORT_TABLE_01" successfully completed at 06:29:53

This looks all right: all the data is back (the number of rows exported matches the number imported). What about the segment size?

SQL> select owner, table_name, partitioned, num_rows, compress_for from dba_tables where table_name = 'T1_QH';

OWNER                          TABLE_NAME                     PAR   NUM_ROWS COMPRESS_FOR
------------------------------ ------------------------------ --- ---------- ------------
MARTIN                         T1_QH                          NO             QUERY HIGH
IMPTEST                        T1_QH                          NO             QUERY HIGH

2 rows selected.

SQL> select bytes/power(1024,2) m, blocks from dba_segments where segment_name = 'T1_QH';

         M     BLOCKS
---------- ----------
        72       9216
        72       9216

2 rows selected.

Identical down to the block. Data Pump loads the rows with a direct-path operation, so the table's HCC attribute is honoured during the import.

Traditional Export/Import

Despite being deprecated, exp/imp are still shipped with 12.1.0.2, the current release at the time of writing. What happens if you run the same process with these tools instead? After all, many DBAs “grew up” with them and can use them in their sleep. That, plus some initial shortcomings of Data Pump in 10g, keeps exp/imp high up on the list of tools we like.

Let’s export the table:

[enkdb01:oracle:DBM011] /home/oracle/mbach
> exp martin/xxxxxxx file=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp log=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.log tables=t1_qh

Export: Release 11.2.0.3.0 - Production on Tue Mar 24 06:42:34 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes
Export done in US7ASCII character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table                          T1_QH   10000000 rows exported
Export terminated successfully without warnings.

The dump file is more or less identical in size to the one created earlier by Data Pump:

[enkdb01:oracle:DBM011] /home/oracle/mbach
> ls -lh /dbfs_direct/FS1/mbach/*classic*
-rw-r--r-- 1 oracle dba 9.7G Mar 24 07:01 /dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp
-rw-r--r-- 1 oracle dba  537 Mar 24 07:01 /dbfs_direct/FS1/mbach/exp_t1_qh_classic.log

What about the import?

[enkdb01:oracle:DBM011] /home/oracle/mbach
> imp imptest/xxxxxx file=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp log=/dbfs_direct/FS1/mbach/imp_t1_qh_classic.log \
> FROMUSER=martin TOUSER=imptest tables=t1_qh

Import: Release 11.2.0.3.0 - Production on Tue Mar 24 07:27:42 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes

Export file created by EXPORT:V11.02.00 via conventional path

Warning: the objects were exported by MARTIN, not by you

import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing MARTIN's objects into IMPTEST
. . importing table                        "T1_QH"

And this takes a loooong time. What is happening in the background? I first checked what the session was doing. That's simple: have a look at the output.

SQL> @scripts/as

   INST_ID   SID    SERIAL# USERNAME      PROG       SQL_ID         CHILD PLAN_HASH_VALUE      EXECS   AVG_ETIME OFF SQL_TEXT
---------- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- --- -----------------------------------------
         1   594         23 IMPTEST       imp@enkdb0 fgft1tcrr12ga      0               0      22723         .00 No  INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "

The EXECS figure was interesting, and indeed, after a couple of minutes the counter had gone up a lot:

   INST_ID   SID    SERIAL# USERNAME      PROG       SQL_ID         CHILD PLAN_HASH_VALUE      EXECS   AVG_ETIME OFF SQL_TEXT
---------- ----- ---------- ------------- ---------- ------------- ------ --------------- ---------- ----------- --- -----------------------------------------
         1   594         23 IMPTEST       imp@enkdb0 fgft1tcrr12ga      0               0     639818         .00 Yes INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "
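
If you do not have a monitoring script to hand, the same figures can be pulled straight from v$sql using the sql_id shown above:

-- each execution is a conventional-path array insert issued by imp;
-- the count keeps climbing for the duration of the import
select sql_id, executions, rows_processed
  from v$sql
 where sql_id = 'fgft1tcrr12ga';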

You can probably guess what's coming next. After quite a while the import finished:

> imp imptest/xxxxxxx file=/dbfs_direct/FS1/mbach/exp_t1_qh_classic.dmp log=/dbfs_direct/FS1/mbach/imp_t1_qh_classic.log \
> FROMUSER=martin TOUSER=imptest tables=t1_qh

Import: Release 11.2.0.3.0 - Production on Tue Mar 24 07:27:42 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes

Export file created by EXPORT:V11.02.00 via conventional path

Warning: the objects were exported by MARTIN, not by you

import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing MARTIN's objects into IMPTEST
. . importing table                        "T1_QH"   10000000 rows imported
Import terminated successfully without warnings.

And now for the reason of this blog post:

SQL> select owner, bytes/power(1024,2) m, blocks from dba_segments where segment_name = 'T1_QH';

OWNER                                   M     BLOCKS
------------------------------ ---------- ----------
MARTIN                                 72       9216
IMPTEST                             11227    1437056

The newly imported table was not compressed at all when loaded via traditional imp: conventional-path inserts bypass the direct-path compression engine, so HCC is never applied.
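
If you end up in this situation, the data can still be compressed after the fact. A minimal, hedged sketch: the move rebuilds the segment via direct path, and any indexes would need rebuilding afterwards.

-- rebuild the segment so the QUERY HIGH attribute is actually applied;
-- existing indexes become unusable and must be rebuilt after the move
alter table imptest.t1_qh move compress for query high;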

HCC error on Exadata after partitioning maintenance

Recently I have been asked to investigate the following error on an Exadata system.

ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

Well, that's simple, I thought! It must be (d)NFS-mounted storage, right? Everyone knows that you can only have HCC on Exadata (and a few other Oracle storage products). So I looked at the problem and soon found out that the data files in question all resided on the cells. Here is the sequence of events:

  1. HCC is introduced to a range-list partitioned table whose segments all reside on TBS1 (no exceptions). The range partitions are in fact interval partitions.
  2. The subpartitions are moved to TBS2, which is how they become compressed.
  3. TBS1 is no longer needed and dropped

Now when we tried to insert a row into the table we got the above error. As with so many Oracle errors, this one is not very helpful in the situation. The error only occurs when a new partition is automatically created… so here is the test case to demonstrate the problem.

create table martin.T1 (id, c, name)
tablespace tbs1
partition by range (id)
interval (1000) store in (tbs1)
subpartition by list (c) subpartition template
(
  subpartition sp1 values ('0'),
  subpartition sp2 values ('1'),
  subpartition sp3 values ('2'),
  subpartition sp4 values ('3'),
  subpartition sp5 values ('4')
)
(
  partition p_man values less than (1000) tablespace tbs1
)
as select object_id, to_char(mod(rownum,4)) c, object_name from dba_objects;

You can see for yourself that everything is neatly stored on TBS1:

MARTIN:mbach1> select count(*),tablespace_name
  2 from dba_segments where segment_name = 'T1'
  3 group by tablespace_name;

  COUNT(*) TABLESPACE_NAME
---------- ------------------------------
       399 TBS1

Personal thoughts

Two comments about the above configuration.

There are 400 entries in user_tab_subpartitions, but only 399 in user_segments. It turned out that P_MAN_SP5 wasn't created due to deferred segment creation, which had been left at its default value of TRUE. Interestingly, all subpartitions were created for the automatically created interval partitions, even though I had a bug in my data generation query: mod(rownum,4) will NEVER generate a value of 4.

This leads to point 2: now you are wasting space! Due to the default value of _partition_large_extents in my database (which applies from 11.2.0.2 onwards), every partition segment is 8 MB in size, even if there are only 250 (or 0, as in this case!) records in it…
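
Both observations are easy to verify; here is a hedged sketch of the kind of queries I mean:

-- total space: with _partition_large_extents every materialised
-- subpartition segment starts out at 8 MB
select count(*) as segments, sum(bytes)/power(1024,2) as mb
  from user_segments
 where segment_name = 'T1';

-- subpartitions present in the dictionary but without a segment
select subpartition_name
  from user_tab_subpartitions
 where table_name = 'T1'
minus
select partition_name
  from user_segments
 where segment_name = 'T1';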

Enter HCC Compression

Now I enable compression (note this is propagated to partitions and subpartitions):

MARTIN:mbach1> alter table T1 compress for query high;

Table altered.

This doesn't compress anything; it merely enables compression for rows inserted using direct path operations. Therefore I need to move the subpartitions, which causes them to be compressed. In this scenario I have a dedicated tablespace for the compressed data, TBS2.
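
The move commands themselves are not shown here; a hedged sketch of how the subpartitions could be rebuilt on TBS2 (the direct-path rebuild is what actually applies the HCC attribute):

-- move every subpartition of T1 to TBS2; the rebuild compresses the data
begin
  for s in (select subpartition_name
              from user_tab_subpartitions
             where table_name = 'T1')
  loop
    execute immediate 'alter table t1 move subpartition '
      || s.subpartition_name || ' tablespace tbs2';
  end loop;
end;
/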

MARTIN:mbach1> select count(*),tablespace_name from dba_segments where segment_name = 'T1' group by tablespace_name;

   COUNT(*) TABLESPACE_NAME
----------- ------------------------------
        399 TBS2

Since all segments for T1 are on TBS2, I can drop TBS1, which I did.
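
The drop itself is not shown; it would have been something along these lines (hedged, as the exact options used are not known):

-- all T1 segments have been moved off TBS1, so the tablespace can go
drop tablespace tbs1 including contents and datafiles;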

Now what has changed? The best answer you can get is from DBMS_METADATA:

MARTIN:mbach1> select dbms_metadata.get_ddl('TABLE','T1') from dual;

  CREATE TABLE "MARTIN"."T1"
   (	"ID" NUMBER,
	"C" VARCHAR2(40),
	"NAME" VARCHAR2(128)
   ) PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
 COMPRESS FOR QUERY HIGH
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS1"
  PARTITION BY RANGE ("ID") INTERVAL (1000) STORE IN ("TBS1")
  SUBPARTITION BY LIST ("C")
  SUBPARTITION TEMPLATE (
    SUBPARTITION "SP1" VALUES ( '0' ),
    SUBPARTITION "SP2" VALUES ( '1' ),
    SUBPARTITION "SP3" VALUES ( '2' ),
    SUBPARTITION "SP4" VALUES ( '3' ),
    SUBPARTITION "SP5" VALUES ( '4' ) )
 (PARTITION "P_MAN"  VALUES LESS THAN (1000)
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS1"
 COMPRESS FOR QUERY HIGH
 ( SUBPARTITION "P_MAN_SP1"  VALUES ('0')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP2"  VALUES ('1')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP3"  VALUES ('2')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP4"  VALUES ('3')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP5"  VALUES ('4')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ) )

As you can see, only the subpartition definitions have changed, and they correctly reflect the move to TBS2. What is going to hurt me is the reference to TBS1 in the STORE IN clause for the interval partitions. And indeed:

MARTIN:mbach1> insert into T1 values(150000,'1', 'ICH BINS');
insert into T1 values(150000,'1', 'ICH BINS')
            *
ERROR at line 1:
ORA-64307: hybrid columnar compression is not supported for tablespaces on this storage type

Well, I wonder how you are supposed to compress data in a non-existent tablespace…

I tried various ways to change the STORE IN clause for the interval partitions, but for some reason couldn't find the correct syntax in the documentation. Thanks to Andy Klock I finally managed to get around the issue; he found the correct syntax in a MOS note. It does exist in the documentation as well, as the alter_interval_partitioning clause.

Now I can force the new interval partitions to be created on TBS2:

MARTIN@MBACH1:1> alter table martin.t1 set store in (tbs2);

Table altered.

The resulting metadata shows the change:

  CREATE TABLE "MARTIN"."T1"
   (	"ID" NUMBER,
	"C" VARCHAR2(40),
	"NAME" VARCHAR2(128)
   ) PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
 COMPRESS FOR QUERY HIGH
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS1"
  PARTITION BY RANGE ("ID") INTERVAL (1000) STORE IN ("TBS2")
  SUBPARTITION BY LIST ("C")
  SUBPARTITION TEMPLATE (
    SUBPARTITION "SP1" VALUES ( '0' ),
    SUBPARTITION "SP2" VALUES ( '1' ),
    SUBPARTITION "SP3" VALUES ( '2' ),
    SUBPARTITION "SP4" VALUES ( '3' ),
    SUBPARTITION "SP5" VALUES ( '4' ) )
 (PARTITION "P_MAN"  VALUES LESS THAN (1000)
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS1"
 COMPRESS FOR QUERY HIGH
 ( SUBPARTITION "P_MAN_SP1"  VALUES ('0')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP2"  VALUES ('1')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP3"  VALUES ('2')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP4"  VALUES ('3')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ,
  SUBPARTITION "P_MAN_SP5"  VALUES ('4')
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH ) )

It now is possible to insert data again:

MARTIN@MBACH1:1> insert into T1 values(150000,'1', 'ICH BINS');

1 row created.

MARTIN@MBACH1:1> insert into T1 values(50,'2', 'ICH BINS');

1 row created.

MARTIN@MBACH1:1> insert into T1 values(2050,'2', 'ICH BINS');

1 row created.

MARTIN@MBACH1:1> commit;

Commit complete.

All is well again. You could now change the default attributes of the table and all partitions to TBS2 using commands similar to these:

MARTIN@MBACH1:1> alter table t1 modify default attributes tablespace tbs2;

Table altered.

MARTIN@MBACH1:1> alter table t1 modify default attributes for partition p_man tablespace tbs2;

Table altered.
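
The second command only covers partition P_MAN and has to be repeated for every partition that has already been created; a hedged sketch (not from the original post) that generates the remaining statements:

-- generate the per-partition commands for all other existing partitions
select 'alter table t1 modify default attributes for partition '
       || partition_name || ' tablespace tbs2;' as cmd
  from user_tab_partitions
 where table_name = 'T1'
   and partition_name <> 'P_MAN';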

With the default attributes changed for the table and all existing partitions, the DDL now looks healthier:

MARTIN@MBACH1:1> select dbms_metadata.get_ddl('TABLE','T1') from dual;

DBMS_METADATA.GET_DDL('TABLE','T1')
--------------------------------------------------------------------------------

  CREATE TABLE "MARTIN"."T1"
   (	"ID" NUMBER,
	"C" VARCHAR2(40),
	"NAME" VARCHAR2(128)
   ) PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
 COMPRESS FOR QUERY HIGH
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS2"
  PARTITION BY RANGE ("ID") INTERVAL (1000) STORE IN ("TBS2")
  SUBPARTITION BY LIST ("C")
  SUBPARTITION TEMPLATE (
    SUBPARTITION "SP1" VALUES ( '0' ),
    SUBPARTITION "SP2" VALUES ( '1' ),
    SUBPARTITION "SP3" VALUES ( '2' ),
    SUBPARTITION "SP4" VALUES ( '3' ),
    SUBPARTITION "SP5" VALUES ( '4' ) )
 (PARTITION "P_MAN"  VALUES LESS THAN (1000)
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TBS2"
 COMPRESS FOR QUERY HIGH )

Problem solved! Although it shouldn't really matter what's in user_tab_partitions, as these partition-level segments do not exist (only the subpartition segments do!).
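
If you want to double-check that, a hedged sketch comparing the dictionary with the segments that physically exist:

-- the composite-partitioned table has partition entries in the dictionary,
-- but only subpartition segments exist physically
select segment_type, count(*) as cnt
  from user_segments
 where segment_name = 'T1'
 group by segment_type;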

Summary

The partition maintenance operation, followed by dropping the default tablespace for new (interval) partitions, caused Oracle to raise a misleading error message. Instead of ORA-64307 (stating that HCC is not supported for the tablespace's storage type), a better error message would have told the user that the tablespace did not exist.

This was on 11.2.0.3; I will try to test on 11.2.0.4 and 12.1.0.1 later and update the post.