Little things worth knowing: keeping enq: TM enqueue at bay during direct path inserts

Direct path inserts are commonly found in processing where data are shifted from one source to another. There are many permutations of the theme; this post details the way SQL*Loader (sqlldr) behaves.

I have previously written about sqlldr and concurrent direct path inserts. The behaviour for releases up to and including 12.1 is described in the first post; the most welcome changes in 12.2 went into the second. Since the fastest way of doing something is not to do it at all, I thought it might be worth demonstrating a useful technique to keep the dreaded TM enqueue at bay. Please note that this does not only apply to sqlldr: in my experience, inserts with the append hint behave the same way.
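
To illustrate the point, here is a minimal sketch of the collision. The table and staging names are made up for this example (they are not part of the Swingbench schema), but the locking behaviour is the one described above:

-- session 1: direct path insert, transaction deliberately left open
insert /*+ append */ into sales_copy select * from sales_staging_q1;

-- session 2, started before session 1 commits:
insert /*+ append */ into sales_copy select * from sales_staging_q2;
-- session 2 blocks on "enq: TM - contention" until session 1 ends its transaction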

I used Oracle 19.4.0 on Oracle Linux 7 for this post; the content should apply to older database releases, too. The data model is once more provided by Swingbench, this time using the SH benchmark. As I said on Twitter yesterday, @TanelPoder’s scripts are simply too good to justify writing my own, so I’m using them in this post where possible.

Please note that I’m using partitioning in this post, which is a cost option on top of Enterprise Edition. Always ensure your system is properly licensed for features you use.

Preparations

Let’s assume for a moment that you would like to load another batch of records into the (Swingbench) sh.sales table. The table is range partitioned on time_id and split into quarterly partitions – at least for the later ones. On my system I count 68 partitions, with the latest containing records up to January 1st, 2014. I added a couple of partitions for the next two quarters to allow for the new data load.
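
I no longer have the exact DDL, but the statements to add them would have looked roughly like this (a sketch, with boundaries matching the HIGH_VALUE output below):

alter table sales add partition sales_q1_2014
  values less than (to_date('2014-04-01','YYYY-MM-DD'));

alter table sales add partition sales_q2_2014
  values less than (to_date('2014-07-01','YYYY-MM-DD'));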

SQL> select partition_name, partition_position,high_value 
  2  from user_tab_partitions where table_name = 'SALES' 
  3   and partition_position >= 68;

PARTITION_NAME     PARTITION_POSITION HIGH_VALUE
----------------- ------------------- -----------------------------------------------------------------------------------
SALES_Q4_2013                      68 TO_DATE(' 2014-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')   
SALES_Q1_2014                      69 TO_DATE(' 2014-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')   
SALES_Q2_2014                      70 TO_DATE(' 2014-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')   

I created CSV files of the data to be loaded by simply adding 3 and 6 months to the time_id of the rows in the last populated partition.

spool /u01/app/oracle/admin/ORCL/sqlldr/sales_q1_2014.csv
select /*csv*/ 
 prod_id,
 cust_id,
 add_months(time_id,3) time_id,
 channel_id,
 promo_id,
 quantity_sold,
 seller,
 fulfillment_center,
 courier_org,
 tax_country,
 tax_region,
 amount_sold 
from sales partition (sales_q4_2013);

spool off
spool  /u01/app/oracle/admin/ORCL/sqlldr/sales_q2_2014.csv

select /*csv*/ 
 prod_id,
 cust_id,
 add_months(time_id,6) time_id,
 channel_id,
 promo_id,
 quantity_sold,
 seller,
 fulfillment_center,
 courier_org,
 tax_country,
 tax_region,
 amount_sold 
from sales partition (sales_q4_2013);

spool off
exit
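
As an aside, instead of the per-statement /*csv*/ comment you can also switch the output format for the entire SQLcl session; either way should produce the same CSV output:

-- SQLcl alternative to the /*csv*/ comment: set the format session-wide
set sqlformat csv
select * from sales partition (sales_q4_2013) where rownum <= 5;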

Using the csv “hint” and executing this little script in SQLcl allows me to spool the files into the desired format (I made sure I had plenty of space available in /u01/app/oracle/admin). Now let’s try loading the data. As always, I use a control file, although there might be better ways of doing this. It contains the following instructions:

options (direct=true,skip=1)
load data
append into table sales
fields terminated by ',' optionally enclosed by '"'
date format "dd-mon-yy"
trailing nullcols
(
  PROD_ID,                                                                        
  CUST_ID,                                                                        
  TIME_ID,                                                                        
  CHANNEL_ID,                                                                     
  PROMO_ID,                                                                       
  QUANTITY_SOLD,                                                                  
  SELLER,                                                                         
  FULFILLMENT_CENTER,                                                             
  COURIER_ORG,                                                                    
  TAX_COUNTRY,                                                                    
  TAX_REGION,                                                                     
  AMOUNT_SOLD
)

Test #1

Now let’s load the data! Since I’m not the most patient person, I launched two sessions in parallel from a screen(1) session:

$ sqlldr /@sh control=load_sales.ctl data=sales_q1_2014.csv log=load_q1.log bad=load_q1.bad direct_path_lock_wait=true

$ sqlldr /@sh control=load_sales.ctl data=sales_q2_2014.csv log=load_q2.log bad=load_q2.bad direct_path_lock_wait=true

While monitoring the load process I was surprised to see session 2 waiting (direct_path_lock_wait=true instructs sqlldr to wait for the enqueue rather than abort straight away):

SQL> select sid,seq#,event,state from v$session where program like '%sqlldr%';

       SID       SEQ# EVENT                                                            STATE
---------- ---------- ---------------------------------------------------------------- -------------------
      1228        578 SQL*Net more data from client                                    WAITED SHORT TIME
      1470         97 enq: TM - contention                                             WAITING

2 rows selected.

Each process should insert data into separate partitions, so why is there a wait? Maybe there is a clue in v$session_wait:

SQL> @sw 1470

    SID STATE   EVENT                                          SEQ# SEC_IN_WAIT P1                  P2                  P3                  P1TRANSL
------- ------- ---------------------------------------- ---------- ----------- ------------------- ------------------- ------------------- ------------------------------------------
   1470 WAITING enq: TM - contention                             98           6 name|mode=          object #= 78758     table/partition= 0  0x544D0006: TM mode 6
                                                                                0x00000000544D0006


1 row selected.

I noticed that P3 refers to a table/partition flag, which piqued my interest. After a little research I found section 12.3.2, Direct Path Load of a Partitioned or Subpartitioned Table, in the documentation, which provides some clues. Object# 78758 is the table in my database:

SQL> @oid 78758

owner                     object_name                    object_type        SUBOBJECT_NAME                 CREATED             LAST_DDL_TIME       status    DATA_OBJECT_ID
------------------------- ------------------------------ ------------------ ------------------------------ ------------------- ------------------- --------- --------------
SH                        SALES                          TABLE                                             2019-08-29 15:14:53 2019-08-29 23:06:27 VALID
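
If you don’t have Tanel’s scripts at hand, a plain query against the dictionary returns the same information:

select owner, object_name, object_type, subobject_name
  from dba_objects
 where object_id = 78758;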

What if I tried to insert into a partition straight away?

Test #2

Let’s try that with the following two control files. The main differences are a) referencing the input file directly in the control file and b) specifying the target partition in the append clause:

$ cat load_sales_q1_2014.ctl
options (skip=1,direct=true)
load data
infile  /u01/app/oracle/admin/ORCL/sqlldr/sales_q1_2014.csv
append into table sales partition (sales_q1_2014)
fields terminated by ',' optionally enclosed by '"'
date format "dd-mon-yy"
trailing nullcols
( 
  PROD_ID,
  CUST_ID,
  TIME_ID,
  CHANNEL_ID,
  PROMO_ID,
  QUANTITY_SOLD,
  SELLER,
  FULFILLMENT_CENTER,
  COURIER_ORG,
  TAX_COUNTRY,
  TAX_REGION,
  AMOUNT_SOLD
)

$ cat load_sales_q2_2014.ctl
options (skip=1,direct=true)
load data
infile  /u01/app/oracle/admin/ORCL/sqlldr/sales_q2_2014.csv
append into table sales partition (sales_q2_2014)
fields terminated by ',' optionally enclosed by '"'
date format "dd-mon-yy"
trailing nullcols
( 
  PROD_ID,
  CUST_ID,
  TIME_ID,
  CHANNEL_ID,
  PROMO_ID,
  QUANTITY_SOLD,
  SELLER,
  FULFILLMENT_CENTER,
  COURIER_ORG,
  TAX_COUNTRY,
  TAX_REGION,
  AMOUNT_SOLD
)

The slightly adjusted commands for sqlldr are as follows:

$ sqlldr /@sh control=load_sales_q1_2014.ctl log=load_q1.log bad=load_q1.bad direct_path_lock_wait=true

$ sqlldr /@sh control=load_sales_q2_2014.ctl log=load_q2.log bad=load_q2.bad direct_path_lock_wait=true

With the change in place I didn’t notice any TM enqueues when running these in parallel:

SQL> select sid,seq#,event,state from v$session where program like '%sqlldr%';

no rows selected

SQL>
SQL> r
  1* select sid,seq#,event,state from v$session where program like '%sqlldr%'

       SID       SEQ# EVENT                                                            STATE
---------- ---------- ---------------------------------------------------------------- -------------------
      1251        627 direct path write temp                                           WAITED KNOWN TIME
      1488        475 PGA memory operation                                             WAITED SHORT TIME

SQL> r
  1* select sid,seq#,event,state from v$session where program like '%sqlldr%'

       SID       SEQ# EVENT                                                            STATE
---------- ---------- ---------------------------------------------------------------- -------------------
      1251        819 direct path write temp                                           WAITING
      1488        664 direct path write temp                                           WAITING

SQL> r
  1* select sid,seq#,event,state from v$session where program like '%sqlldr%'

       SID       SEQ# EVENT                                                            STATE
---------- ---------- ---------------------------------------------------------------- -------------------
      1251       1070 SQL*Net more data from client                                    WAITED SHORT TIME
      1488        906 direct path write temp                                           WAITED KNOWN TIME

SQL> r
  1* select sid,seq#,event,state from v$session where program like '%sqlldr%'

       SID       SEQ# EVENT                                                            STATE
---------- ---------- ---------------------------------------------------------------- -------------------
      1251       1332 db file sequential read                                          WAITED SHORT TIME
      1488       1143 library cache lock                                               WAITING

SQL> r
  1* select sid,seq#,event,state from v$session where program like '%sqlldr%'

       SID       SEQ# EVENT                                                            STATE
---------- ---------- ---------------------------------------------------------------- -------------------
      1251       1824 db file sequential read                                          WAITED SHORT TIME
      1488       1372 db file sequential read                                          WAITING

That looks OK, and switching back through both screen sessions I can see that both invocations of sqlldr completed successfully:

$ sqlldr /@sh control=load_sales_q1_2014.ctl log=load_q1.log bad=load_q1.bad direct_path_lock_wait=true

SQL*Loader: Release 19.0.0.0.0 - Production on Thu Aug 29 22:44:53 2019
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 652437.

Table SALES, partition SALES_Q1_2014:
  652437 Rows successfully loaded.

Check the log file:
  load_q1.log
for more information about the load.

$ sqlldr /@sh control=load_sales_q2_2014.ctl log=load_q2.log bad=load_q2.bad direct_path_lock_wait=true

SQL*Loader: Release 19.0.0.0.0 - Production on Thu Aug 29 22:44:54 2019
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 652437.

Table SALES, partition SALES_Q2_2014:
  652437 Rows successfully loaded.

Check the log file:
  load_q2.log
for more information about the load.

Summary

When you are trying to load data into a table concurrently using direct path inserts, you will have to wait on a TM enqueue unless your table is partitioned and each process specifies a different partition as its target. More generally speaking, Oracle holds a TM enqueue on the segment you are loading into, so two processes attempting a direct path insert into the same (sub)partition will equally have to serialise.
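
If you want to watch the enqueues yourself while the loads are running, a quick look at v$lock does the trick; lmode 6 indicates an exclusive TM enqueue, and a non-zero value in the request column identifies a waiter:

-- TM enqueues held or requested by the loading sessions
select sid, type, id1 as object_id, lmode, request, block
  from v$lock
 where type = 'TM'
 order by sid;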

NB: if you are using direct path mode for sqlldr and your target segment is indexed, make sure to read chapter 12 in the Database Utilities Guide to learn about the implications of direct path load and index maintenance.

Linux large pages and non-uniform memory distribution

In my last post about large pages in 11.2.0.3 I promised a little more background information on how large pages and NUMA are related.

Background and some history about processor architecture

For quite some time now, the CPUs you get from AMD and Intel have been NUMA designs, or more precisely cache-coherent NUMA. Each CPU has its own “local” memory directly attached to it; in other words, memory is not distributed uniformly across all CPUs. This isn’t really new: Sequent pioneered the concept on x86 a long time ago, albeit in a different context. You really should read Scaling Oracle 8i by James Morle, which has a lot of excellent content related to NUMA in it, with contributions from Kevin Closson. It doesn’t matter that it says “8i” on the cover; most of it is as relevant today as it was then.

So what is the big deal about NUMA architecture anyway? To explain NUMA and why it matters to all of us, a little more background information is in order.

Some time ago, processor designers and architects of industry-standard hardware could no longer ignore the fact that the front side bus (FSB) had become a bottleneck. There were two reasons for this: it was a) too slow and b) carrying too much data. As a direct consequence, DRAM is now attached directly to the CPUs. AMD did this first with its Opteron processors in the AMD64 micro-architecture, followed by Intel’s Nehalem micro-architecture. By removing the requirement for data retrieved from DRAM to travel across a slow shared bus, latencies could be reduced significantly.

Now imagine that every processor has a number of memory channels to which DDR3 SDRAM is attached (DDR4 could arrive soon!). In a dual-socket system, each socket is responsible for half the memory of the system. To allow one socket to access the other’s half of the memory, some kind of interconnect between the processors is needed: Intel has opted for the QuickPath Interconnect, while AMD (and IBM for the p-Series) use HyperTransport. This is comparatively simple when you have few sockets – with up to four, each socket can connect directly to every other one without any tricks. With eight sockets it becomes more difficult. If every socket can communicate directly with its peers, the system is said to be glue-less, which is beneficial. The last production glue-less system Intel released was based on the Westmere architecture; Sandy Bridge (current until approximately Q3/2013) didn’t have an eight-way glue-less variant, and this is exactly why you get Westmere-EX in the X3-8, and not Sandy Bridge as in the X3-2.

Anyway, your system will have local and remote memory. Most of us are not going to notice this at all, since there is little point in enabling NUMA on systems with two sockets. Oracle still recommends enabling NUMA only on eight-way systems, and this is probably the reason the oracle-validated and preinstall RPMs add “numa=off” to the kernel command line in your GRUB boot loader.


Introduce Partitioning to a large table

Recently I was asked to find a way to partition a very big table residing on very slow storage. “Very large” translates into 600 GB, and “slow storage” into an HP MSA 1000. The available time window wasn’t big enough to run the whole operation in one go, and the insert activity on the table required the work to be done at less busy times.

DBMS_REDEFINITION wasn’t an option, so I decided on a multi-step approach: if I couldn’t neatly use dbms_redefinition, then at least I wanted the power of the partitioning commands. Here’s the proof of concept.

WARNING
As always, don’t simply copy and paste – this works for me, but that doesn’t mean it also works for you. Test, test, test before doing this in production.

Create a “large” table as the source first, to test with. Actually it’s not large at all, but that’s beside the point…

create table unpart
tablespace users
as select * from dba_objects;

Table created.

09:53:05 SQL> select count(1) from unpart;

  COUNT(1)
----------
      9960

Now let’s put some more data into it:

begin
 for i  in 1..5 loop  
  insert into unpart select * from unpart;
 end loop;
end;
/

09:53:26 SQL> /

PL/SQL procedure successfully completed.

09:53:27 SQL> commit;

Commit complete.

09:53:30 SQL> select count(1) from unpart;

  COUNT(1)
----------
    318720

Slightly better. I now need to find the upper bound for the partition about to be created. This is the point where you shouldn’t insert data into the table…
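
If you want to be certain that nobody sneaks a row in while you determine the boundary, you could lock the table before running the query below. This is my own addition, it wasn’t part of the original approach:

-- optional safeguard: blocks concurrent DML until this transaction ends
lock table unpart in exclusive mode;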

SQL> select max(object_id) from unpart;

MAX(OBJECT_ID)
--------------
         10459

Now create the partitioned table to hold the data after the move. The whole source table should fit into the first partition. (Note that values less than is exclusive, so strictly speaking the boundary would need to be max(object_id) + 1 for rows with the maximum object_id to fit into p1.)

create table part
partition by range (object_id)
(
 partition p1 values less than (10459),
 partition p2 values less than (20000),
 partition p3 values less than (maxvalue)
)
as select * from unpart where 1 = 0;

SQL> alter table part enable row movement;

Table altered.

A few selects to prove that the table is empty:

SQL> select count(1) from part partition (p1);

  COUNT(1)
----------
         0

SQL> select count(1) from part partition (p2)
  2  /

  COUNT(1)
----------
         0

SQL> select count(1) from part partition (p3);

  COUNT(1)
----------
         0

Now let’s exchange the table with the initial partition:

SQL> alter table part exchange partition p1 with table unpart;

Table altered.

SQL> select count(1) from part partition (p1)
  2  /

  COUNT(1)
----------
    318688

SQL> select count(1) from part partition (p2);

  COUNT(1)
----------
         0

-- Now we can partition the rest!

09:56:22 SQL> alter table part split partition p1 at (5000) into (partition p_del, partition p1) 
           2  update global indexes parallel 4;

Table altered.

09:56:35 SQL> select count(1) from part partition (p_del);

  COUNT(1)
----------
    157312

09:56:46 SQL> select count(1) from part partition (p1);

  COUNT(1)
----------
    161376

09:56:50 SQL> select 157312+161376 from dual;

157312+161376
-------------
       318688

So we’ve got the initial number of rows, evenly split across partitions. This is great! Unfortunately there is a slight caveat with the split partition command – it’s very heavy on resources (remember the multi-step approach?). Each row in the source partition is copied to the destination partitions (new segments), causing a large number of I/Os.

09:57:33 SQL> alter table part split partition p1 at (8500) into (partition p1, partition p1bis)
09:57:37   2   update global indexes parallel 4;

Dropping partitions is easy too:

SQL> alter table part drop partition p_del;

Table altered.
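
As a final sanity check (my addition, not part of the original runbook), you can list the remaining partitions to confirm the reorganisation did what you expected:

select partition_name, partition_position, high_value
  from user_tab_partitions
 where table_name = 'PART'
 order by partition_position;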