Introducing Partitioning to a Large Table

Recently I was asked to find a way to partition a very big table sitting on very slow storage. "Very large table" translates to roughly 600 GB, and "slow storage" to an HP MSA 1000.

WARNING #1: Partitioning is an extra cost option on top of Enterprise Edition. Do NOT use partitioning unless you are licensed appropriately to do so!

Background

In my particular case the use of DBMS_REDEFINITION wasn't an option at the time, so I decided on a multi-step approach: if I couldn't neatly use DBMS_REDEFINITION, I at least wanted the power of the partitioning commands. In hindsight this wasn't a great idea; everyone should use DBMS_REDEFINITION, as it's rather excellent.

Update 221103: this really isn't the right way to do this, please use DBMS_REDEFINITION instead. This post was validated against Oracle 19c running on Oracle Linux 8.
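For completeness, a rough sketch of what the DBMS_REDEFINITION route would look like. This is not validated against a live system; it assumes the partitioned interim table (here called PART) has already been created, and uses ROWID-based redefinition since the demo table has no primary key:

declare
 l_errors pls_integer;
begin
 -- check the table is a candidate for online redefinition
 dbms_redefinition.can_redef_table(user, 'UNPART',
  dbms_redefinition.cons_use_rowid);

 -- start copying rows into the partitioned interim table
 dbms_redefinition.start_redef_table(
  uname        => user,
  orig_table   => 'UNPART',
  int_table    => 'PART',
  options_flag => dbms_redefinition.cons_use_rowid);

 -- clone indexes, constraints, triggers and grants
 dbms_redefinition.copy_table_dependents(
  uname      => user,
  orig_table => 'UNPART',
  int_table  => 'PART',
  num_errors => l_errors);

 -- swap the two tables in the dictionary
 dbms_redefinition.finish_redef_table(user, 'UNPART', 'PART');
end;
/

The rest of this post shows the manual alternative I used instead.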

Poor-man’s DBMS_REDEFINITION

Create a "large" table as the source first, to test with. It isn't actually large at all, but that's beside the point…

create table unpart
tablespace users
as select * from all_objects;

Table created.

SQL> select count(1) from unpart;

  COUNT(1)
----------
      9960

Now let’s put some more data into it:

begin
 for i  in 1..5 loop  
  insert into unpart select * from unpart;
 end loop;
end;
/

SQL> /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> select count(1) from unpart;

  COUNT(1)
----------
    318720

Slightly better. I now need to find the upper bound for the partition about to be created, adding 1 in preparation for the partition exchange that follows shortly. The choice of partition key is very important: unless new rows go into new partitions, there is no point in introducing range partitioning!

Failing to calculate the upper boundary properly will result in ORA-14099 when trying to exchange partitions. As soon as you have determined the maximum value in the table, DML activity should cease, or else the following calculation will be wrong and the operation will fail.

SQL> select max(object_id) +1 from unpart;

MAX(OBJECT_ID)+1
----------------
           10459
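Before attempting the exchange, it is worth double-checking that no row sits at or above the boundary (a minimal sanity check I'd suggest; the literal 10459 comes from the query above):

-- should return 0; any rows at or above the boundary
-- would cause ORA-14099 during the exchange
select count(*) from unpart where object_id >= 10459;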

Now create the partitioned table to hold the data after the move. The whole source table should fit into the first partition.

create table part
partition by range (object_id)
(
 partition p1 values less than (10459),
 partition p2 values less than (20000),
 partition p3 values less than (maxvalue)
)
as select * from unpart where 1 = 0;

SQL> alter table part enable row movement;

Table altered.

A few selects to prove that the table is empty:

SQL> select count(1) from part partition (p1);

  COUNT(1)
----------
         0

SQL> select count(1) from part partition (p2)
  2  /

  COUNT(1)
----------
         0

SQL> select count(1) from part partition (p3);

  COUNT(1)
----------
         0

Now let’s exchange the table with the initial partition:

SQL> alter table part exchange partition p1 with table unpart;

Table altered.

SQL> select count(1) from part partition (p1)
  2  /

  COUNT(1)
----------
    318720

SQL> select count(1) from part partition (p2);

  COUNT(1)
----------
         0

SQL> select count(1) from part partition (p3);

  COUNT(1)
----------
         0

In my case object_id is a monotonically increasing value: each new object gets an object ID greater than the previous one. In other words, although all existing rows are stored in the same partition, new rows will go into partition p2 going forward. Regular partition maintenance must be carried out to prevent partition p3 from accumulating all the rows.
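Note that with a maxvalue partition in place you cannot simply add partitions at the top end; the catch-all partition has to be split instead. A sketch of that maintenance step (the boundary 30000 and partition name p4 are made up for illustration):

-- carve a new partition out of the maxvalue partition so
-- future rows land in p4 rather than piling up in p3
alter table part split partition p3 at (30000)
into (partition p4, partition p3);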

The final step is to rename the tables and to ensure that all constraints are in place, indexes are usable, and all the rest of it.
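The renames and a check for broken indexes might look like this (a sketch; the _OLD suffix is just my convention, and any unusable indexes flagged by the dictionary query would need an ALTER INDEX … REBUILD):

alter table unpart rename to unpart_old;
alter table part rename to unpart;

-- any index left UNUSABLE by the operation needs a rebuild
select index_name, status
  from user_indexes
 where table_name = 'UNPART'
   and status = 'UNUSABLE';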
