
Printing all table preferences affecting dbms_stats.gather_table_stats

Oracle 11g introduced the ability to control the behaviour of the dbms_stats package by setting preferences at the database, schema, and table level. These affect the way dbms_stats goes about doing its work. This feature has been extensively documented; I found the post by Maria Colgan exceptionally good at explaining the mechanism.

I often have to check table preferences when statistics look a little suspicious, but there isn’t an Oracle-supplied way that I’m aware of to print all table preferences, so I came up with a little script to do so. It currently supports Oracle 19c, but it’s super easy to update the code for other releases. The table, passed as a parameter, needs to be in the user’s schema:

$ cat table_prefs.sql 
set serveroutput on verify off

prompt
prompt getting table prefs for &1
prompt ----------------------------------------

declare 
        v_version varchar2(100);
        v_compat  varchar2(100);

        type prefs_t is table of varchar2(100);

        v_prefs_19c prefs_t := prefs_t(
                'APPROXIMATE_NDV_ALGORITHM',
                'AUTO_STAT_EXTENSIONS',
                'AUTO_TASK_STATUS',
                'AUTO_TASK_MAX_RUN_TIME',
                'AUTO_TASK_INTERVAL',
                'CASCADE',
                'CONCURRENT',
                'DEGREE',
                'ESTIMATE_PERCENT',
                'GLOBAL_TEMP_TABLE_STATS',
                'GRANULARITY',
                'INCREMENTAL',
                'INCREMENTAL_STALENESS',
                'INCREMENTAL_LEVEL',
                'METHOD_OPT',
                'NO_INVALIDATE',
                'OPTIONS',
                'PREFERENCE_OVERRIDES_PARAMETER',
                'PUBLISH',
                'STALE_PERCENT',
                'STAT_CATEGORY',
                'TABLE_CACHED_BLOCKS');

        procedure print_prefs(pi_prefs prefs_t) as
                v_value varchar2(100);
        begin   
                for i in pi_prefs.first .. pi_prefs.last loop
                        v_value := sys.dbms_stats.get_prefs(
                                pname => pi_prefs(i),
                                ownname => user,
                                tabname => sys.dbms_assert.sql_object_name('&1'));

                sys.dbms_output.put_line(rpad(pi_prefs(i), 50) || ': ' || v_value);
                end loop;
        end;

begin   
        sys.dbms_utility.db_version(v_version, v_compat);

        if v_version = '19.0.0.0.0' then
                print_prefs(v_prefs_19c);
        else
                raise_application_error(-20001, 'Oracle ' || v_version || ' not yet supported');
        end if;

end;
/ 

Extending the code is very straightforward. Simply create a new variable such as v_prefs_12102 and copy/paste all the possible values from the 12c documentation for dbms_stats.get_prefs(). Then add a branch for your release and off you go.

Happy troubleshooting!

Initialising PL/SQL associative arrays in 18c and later

I can never remember how to initialise PL/SQL associative arrays and thought I’d write a short post about it. This is primarily based on an article on Oracle’s Ask Tom site, plus a little extra detail from Steven Feuerstein. Associative arrays were previously known as index-by tables, by the way.

Associative arrays before 18c

Prior to Oracle 18c, you had to initialise an associative array in a slightly cumbersome way, like so:

DECLARE
    TYPE array_t IS
        TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;
    v_the_array   array_t;
    v_index       PLS_INTEGER;
BEGIN
    v_the_array(1) := 'one';
    v_the_array(2) := 'two';
    v_the_array(3) := 'three';
    v_the_array(9) := 'nine';
    v_index := v_the_array.first;
    WHILE ( v_index IS NOT NULL ) LOOP
        dbms_output.put_line('v_the_array(' || v_index || '): ' || v_the_array(v_index));
        v_index := v_the_array.next(v_index);
    END LOOP;
END;
/

There are slight variations on the theme, as explained in example 5-3 of the Oracle Database PL/SQL Language Reference, which you might find interesting. You end up having to repeat yourself a lot, as you can see immediately below the BEGIN keyword.

Oracle 18c simplifies this task

With 18c the syntax reminds me a little of Perl: you define the type first, then declare a variable of that type in your anonymous block and initialise it in the same step:

DECLARE
    TYPE array_t IS
        TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;

    -- this is new for 18c, see Steven Feuerstein's article here:
    -- https://blogs.oracle.com/oraclemagazine/easy-initializing-for-records-and-arrays
    v_the_array   array_t := array_t(
        1 => 'one', 
        2 => 'two', 
        3 => 'three',    -- note gap here ;)
        9 => 'nine');

    v_index       PLS_INTEGER;
BEGIN
    v_index := v_the_array.first;

    WHILE ( v_index IS NOT NULL ) LOOP
        dbms_output.put_line('v_the_array(' || v_index || '): ' || v_the_array(v_index));
        v_index := v_the_array.next(v_index);
    END LOOP;

END;
/ 

This way you can define the array in a much nicer-looking way and with less code. I also find it more readable.
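As a comparison, and purely as an analogy with nothing Oracle-specific to it, the same idea, a keyed collection initialised in one step with a gap in the keys, looks like this in Python:

```python
# Initialise the collection in one step, including the gap
# between keys 3 and 9, much like the 18c qualified expression.
the_array = {1: "one", 2: "two", 3: "three", 9: "nine"}

# Walking the keys in order mimics the .first/.next loop.
for index in sorted(the_array):
    print(f"the_array({index}): {the_array[index]}")
```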

You are of course not limited to using PLS_INTEGER for indexing the array; you can index differently:

DECLARE
    TYPE capitals_t IS
        TABLE OF VARCHAR2(100) INDEX BY VARCHAR2(100);
    v_capitals   capitals_t := capitals_t(
        'France' => 'Paris', 
        'Belgium' => 'Brussels', 
        'Austria' => 'Vienna');

    v_index      VARCHAR2(100);
BEGIN
    v_index := v_capitals.first;
    WHILE ( v_index IS NOT NULL ) LOOP
        dbms_output.put_line('The capital of ' || v_index || ' is ' 
                             || v_capitals(v_index));

        v_index := v_capitals.next(v_index);
    END LOOP;

END;
/ 

Once I wrapped my head around this it all made sense, and I think I’ll use this syntax from now on.

Dead Connection Detection (DCD) and the Oracle database

Dead Connection Detection is a useful feature of the Oracle database: it allows for the cleanup of “dead” sessions so they don’t linger around consuming memory and other system resources. The idea is simple: if the database detects that a client process is no longer connected to its server process, it cleans up. This can happen in many ways; in most cases this kind of problem is triggered by an end user.

A dead connection shouldn’t be confused with an idle connection: an idle connection still maintains the network link between client and server process, there just isn’t any activity on it. Idle connections aren’t maintained/controlled via DCD; there are other tools in the database handling such cases.

As a by-product, DCD can also help with overly eager firewalls forcibly removing seemingly idle network connections; the posts I found on the subject, and the references therein, were very useful.

With Oracle 12c, Oracle changed the way DCD works: instead of relying on its network layer, it pushes the functionality into the TCP stack on platforms that support it. This change in behaviour is also explained in an Oracle white paper from 2014.
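As an illustration of what pushing keepalive handling into the TCP stack means, here is a little Python sketch. This is an analogy, not Oracle’s actual implementation, and the interval values are made up; it simply shows how any process can arm TCP keepalive on a socket on Linux:

```python
import socket

# Create a TCP socket and enable keepalive, similar to what a server
# process does when DCD is handled by the TCP stack.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: start probing after 60s of idle time,
# probe every 10s, declare the peer dead after 3 unanswered probes.
# (Values are for illustration only.)
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)

# The keepalive flag can be read back off the socket, which is
# essentially what ss(8) does when it reports the keepalive timer.
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
sock.close()
```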

For the first part of this post I set sqlnet.expire_time to 1, as per a MOS note I found; your value is probably different. The parameter is documented in the Net Services reference, please use that to work out the best value for you. As others have pointed out, this parameter has to go into the RDBMS home, more specifically $ORACLE_HOME/network/admin/sqlnet.ora. I am using Oracle 19.4 on Oracle Linux 7.7 for this blog post; I have seen the same behaviour in 12.1.0.2 in my lab as well.
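For reference, this is roughly what the entry looks like in $ORACLE_HOME/network/admin/sqlnet.ora (the value of 1 minute is what I used for this test, not a general recommendation):

```
# probe the client every minute to check it is still alive
sqlnet.expire_time = 1
```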

Background

In addition to the proven methods of checking whether TCP_KEEPALIVE is enabled for a given session, I wanted to show another one. Using the ss(8) utility it is possible to show socket options. I also tried lsof, but on my system I couldn’t get it to print the options:

SQL> select spid from v$process where addr = (select paddr from v$session where username = 'MARTIN');

SPID
------------------------
13656

SQL> exit 

...

[root@server2 ~]# lsof -nP -p 13656 -T f
lsof: unsupported TCP/TPI info selection: f
lsof 4.87

Although the man-page for lsof reads:

       -T [t]   controls the reporting of some TCP/TPI information,  also  reported  by  net‐
                stat(1),  following  the network addresses.  In normal output the information
                appears in parentheses, each item except TCP or TPI state name identified  by
                a keyword, followed by `=', separated from others by a single space:

                     <TCP or TPI state name>
                     QR=<read queue length>
                     QS=<send queue length>
                     SO=<socket options and values>
                     SS=<socket states>
                     TF=<TCP flags and values>
                     WR=<window read length>
                     WW=<window write length>

                Not all values are reported for all UNIX dialects.  Items values (when avail‐
                able) are reported after the item name and '='.

                When the field output mode is in effect  (See  OUTPUT  FOR  OTHER  PROGRAMS.)
                each item appears as a field with a `T' leading character.

                -T with no following key characters disables TCP/TPI information reporting.

                -T with following characters selects the reporting of specific TCP/TPI infor‐
                mation:

                     f    selects reporting of socket options,
                          states and values, and TCP flags and
                          values.

So let’s try something else: ss(8), another utility to investigate sockets.

Revealing socket options

I have used ss(8) in the past when I didn’t have netstat available, which is more and more common now that netstat is deprecated and its designated successor is ss :)

As far as I know you can’t limit ss to showing information just for a PID, so I use grep to limit the output. The output is in fact very wide, which is why this might not look pretty on the blog, depending on whether the renderer decides to wrap it or not.

[root@server2 ~]# ss -nop | egrep 'NetidState|13656'
NetidState Recv-Q Send-Q                                    Local Address:Port                                      Peer Address:Port                                                                                                           
tcp  ESTAB 0      0                               [::ffff:192.168.100.22]:1521                           [::ffff:192.168.100.21]:25780                           users:(("oracle_13656_si",pid=13656,fd=18)) timer:(keepalive,4.412ms,0)        
[root@server2 ~]# 

I used the following options:

  • -n for “Do not try to resolve service names”
  • -o for “Show timer information” and finally
  • -p to “Show process using socket”.

The main option here is -o. As per the man page:

       -o, --options
              Show timer information. For tcp protocol, the output format is:

              timer:(<timer_name>,<expire_time>,<retrans>)

              <timer_name>
                     the name of the timer, there are five kind of timer names:

                     on: means one of these timers: tcp retrans timer, tcp early retrans timer and tail loss probe timer
                     keepalive: tcp keep alive timer
                     timewait: timewait stage timer
                     persist: zero window probe timer
                     unknown: none of the above timers

              <expire_time>
                     how long time the timer will expire

              <retrans>
                     how many times the retran occurs

With a little shell loop I can show how that timer is decrementing:

[root@server2 ~]# for i in $(seq 1 5); do ss -nop | grep 13656; sleep 1 ; done
tcp  ESTAB 0      0                               [::ffff:192.168.100.22]:1521                           [::ffff:192.168.100.21]:25780                           users:(("oracle_13656_si",pid=13656,fd=18)) timer:(keepalive,20sec,0)          
tcp  ESTAB 0      0                               [::ffff:192.168.100.22]:1521                           [::ffff:192.168.100.21]:25780                           users:(("oracle_13656_si",pid=13656,fd=18)) timer:(keepalive,19sec,0)          
tcp  ESTAB 0      0                               [::ffff:192.168.100.22]:1521                           [::ffff:192.168.100.21]:25780                           users:(("oracle_13656_si",pid=13656,fd=18)) timer:(keepalive,18sec,0)          
tcp  ESTAB 0      0                               [::ffff:192.168.100.22]:1521                           [::ffff:192.168.100.21]:25780                           users:(("oracle_13656_si",pid=13656,fd=18)) timer:(keepalive,17sec,0)          
tcp  ESTAB 0      0                               [::ffff:192.168.100.22]:1521                           [::ffff:192.168.100.21]:25780                           users:(("oracle_13656_si",pid=13656,fd=18)) timer:(keepalive,16sec,0)          
[root@server2 ~]# 

Summary

Using the ss utility it is possible to check if a keepalive timer is implemented as a means to support DCD with 12.1 and later releases. Invoking ss(8) hasn’t caused any problems on my system, but as with every such tool you need to ensure it’s safe to use before attempting to look at an important system.

sqlldr, direct path loads and concurrency in 12.2 and later

In my previous post I showed you that Oracle’s SQL loader (sqlldr) utility has a built-in timeout of 30 seconds waiting for locked resources before returning SQL*Loader-951/ORA-604/ORA-54 errors and failing to load data. This can cause quite some trouble! Before showing you the enhancement in 12.2 and later, here is the gist of the previous post.

Concurrency in Oracle sqlldr 12.1 and earlier

To show you how sqlldr times out I need to simulate an exclusive lock on the table, for example in sqlplus. That’s quite simple:

SQL> set time on
10:17:00 SQL> lock table t2 in exclusive mode;

Table(s) Locked.

10:17:07 SQL>

Next I started a sqlldr process in another session. Please refer to the previous post for details, or take my word that I’m using a direct path insert strategy. The only difference is the size of the input file – I had to inflate it considerably to buy some time running standard diagnostic tools:

$ date; sqlldr /@loader control=t2_2.ctl ; date
Tue 23 Jul 10:22:35 BST 2019

SQL*Loader: Release 19.0.0.0.0 - Production on Tue Jul 23 10:22:35 2019
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct
SQL*Loader-951: Error calling once/load initialization
ORA-00604: error occurred at recursive SQL level 1
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
Tue 23 Jul 10:23:06 BST 2019

The “date” commands reveal a timeout of 30 seconds. Setting ddl_lock_timeout has no bearing on this timeout: the database is waiting on the “kpodplck wait before retrying ORA-54” event:

10:22:36 SQL> r
  1* select event, state from v$session where program like 'sqlldr%'

EVENT                                                            STATE
---------------------------------------------------------------- -------------------
kpodplck wait before retrying ORA-54                             WAITING

1 row selected.

This was where I left off with the previous post until I noticed there is another option!

Oracle 12.2 and later

In 12.2 and later you can instruct sqlldr to wait until the lock is released. There is a new command line parameter named direct_path_lock_wait:

$ sqlldr | egrep 'Version|direct_path'
Version 19.4.0.0.0
direct_path_lock_wait -- wait for access to table when currently locked  (Default FALSE)

Interestingly there are no hits for direct_path_lock_wait in My Oracle Support’s knowledgebase. There are merely a few documentation references. So what does this parameter do? While the table is still locked in exclusive mode, let’s start the sqlldr process with the new option:

$ date; sqlldr /@loader control=t2_2.ctl direct_path_lock_wait=true; date
Fri 26 Jul 10:13:54 BST 2019

SQL*Loader: Release 19.0.0.0.0 - Production on Fri Jul 26 10:13:54 2019
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct
[ ... waiting ... ]

The process now sits there and waits … and it does so for more than 30 seconds. And instead of “kpodplck wait before retrying ORA-54” it waits on … drum roll please: “enq: TM - contention”!

10:20:11 SQL> select seq#, event, state, round(seconds_in_wait/60, 2) mins_waiting
10:20:21   2  from v$session where program like 'sqlldr%';

      SEQ# EVENT                    STATE               MINS_WAITING
---------- ------------------------ ------------------- ------------
       119 enq: TM - contention     WAITING                     6.53

10:20:26 SQL> 

This is great news if your sqlldr processes compete for TM enqueues and your load process takes a little longer than the previously hard coded timeout of 30 seconds. The process eventually completed successfully after the enqueue was released:

$ date; sqlldr /@loader control=t2_2.ctl direct_path_lock_wait=true; date
Fri 26 Jul 10:13:54 BST 2019

SQL*Loader: Release 19.0.0.0.0 - Production on Fri Jul 26 10:13:54 2019
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Path used:      Direct

Load completed - logical record count 950936.

Table T2:
  950924 Rows successfully loaded.

Check the log file:
  t2_2.log
for more information about the load.
Fri 26 Jul 10:21:32 BST 2019

Caveats

As with every pro there is usually a con. The downside to waiting (potentially for a very long time) is that you might not notice load processes beginning to stack up unless proper instrumentation and monitoring are in place. Waiting too long for data to be loaded is just as bad as not loading it at all, because the end result is identical. As with many features in the database, Oracle gives you plenty of options, and it’s up to the developers and architects to make the correct decisions on how to use them.

Summary

Beginning with sqlldr 12.2 Oracle introduced the option to wait for enqueues on segments to be released instead of aborting after 30 seconds.

In the next post I’ll write about another possibility to prevent exclusive table locks in the first place when running multiple concurrent sqlldr sessions.

JSON_TABLE() and date/time columns in Oracle 19c

While researching the use of JSON in Oracle 19c I came across some interesting behaviour that wasn’t immediately obvious (to me). With this post I am hoping to save you a couple of minutes of head-scratching when working with JSON_TABLE(). This is Oracle 19.3.0 on Linux, and I’m connecting to it using SQLcl 19.1.

Some background

As part of my JSON-support-in-Oracle research I had a good look at JSON_TABLE. Although complex at first sight, it is a lot less intimidating if you know how to use XMLTABLE :) My goal for this post is to convert a JSON document to a relational structure.

To convert JSON to (relational) rows and columns, I believe I need to use the JSON_value_column from the list of available options in JSON_TABLE‘s JSON_column_definition. Interestingly, and different from the syntax diagram for XMLTABLE(), I’m allowed to specify a JSON_value_return_type. There are quite a few of them, and I guess the most commonly used ones are going to be varchar2, date and timestamp. This post is about the latter two, since parsing out a text string is pretty simple.

The Test Case

I created a very simple table to test the functionality:

SQL> create table t1 (
   2   id number not null,
   3   json_txt clob not null,
   4   constraint pk_t1 primary key (id),
   5   constraint c_is_json check (json_txt is json)
   6  );

 Table created.

Essentially a table to hold a JSON column plus a check constraint to make sure that I’m not inserting data that isn’t JSON. Here’s the initial JSON document I started with:

SQL> select json_query(json_txt, '$' pretty error on error) 
  2   from t1 where id = 1;

JSON_QUERY(JSON_TXT,'$'PRETTYERRORONERROR)                  
------------------------------------------------------------
{                                                           
  "results" :                                               
  [                                                         
    {                                                       
      "resultID" : 1,                                       
      "resultDate" : "13-JUN-19 08:10:00",                  
      "details" : "none"                                    
    }                                                       
  ]                                                         
}                 

Thanks to my check constraint I know this is valid JSON. I specified the pretty keyword to make the output more readable. When developing using new(-ish) features I prefer Oracle to raise an error so I’m going with ERROR ON ERROR.

These are my NLS settings, by the way; this is a system installed with the default locale set to en_US. I have added these settings for reference, and they will play a role later on.

SQL> select * from nls_session_parameters
  2* where regexp_like(parameter,'(DATE|TIME)');
PARAMETER                      VALUE                         
------------------------------ ------------------------------
NLS_DATE_FORMAT                DD-MON-RR                     
NLS_DATE_LANGUAGE              AMERICAN                      
NLS_TIME_FORMAT                HH24.MI.SSXFF                 
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH24.MI.SSXFF       
NLS_TIME_TZ_FORMAT             HH24.MI.SSXFF TZR             
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH24.MI.SSXFF TZR   

6 rows selected. 

I am now ready to start testing.

First Test

So let’s get started with the testing. My query is shown here:

SELECT
    jt.*
FROM
    t1,
    JSON_TABLE ( t1.json_txt, '$.results[*]' ERROR ON ERROR
            COLUMNS (
                resID   NUMBER   PATH '$.resultID',
                resDATE DATE     PATH '$.resultDate'
            )
        )
    AS jt
WHERE
    t1.id = 1;

Not too dissimilar to using XMLTABLE, I need to pass the column containing my JSON document to the function, followed by the path to the data within the JSON document I want to parse. I could have stored multiple results in the array (and I did initially, but considered that a little too confusing), hence the reference to all of them.

The columns() clause (of type JSON_value_column) allows me to define columns, their respective data types, and the path relative to the second argument to JSON_TABLE. In my case these are ‘$.resultID’ and ‘$.resultDate’, relative to ‘$.results[*]’. The two are mapped to resID and resDATE in the output. I am limiting the query to the first row in t1.

When I execute this query against the first record in the table, I get an error:

SQL> SELECT
  2      jt.*
  3  FROM
  4      t1,
  5      JSON_TABLE ( t1.json_txt, '$.results[*]' ERROR ON ERROR
  6              COLUMNS (
  7                  resID   NUMBER   PATH '$.resultID',
  8                  resDATE DATE     PATH '$.resultDate'
  9              )
 10          )
 11      AS jt
 12  WHERE
 13      t1.id = 1;

Error starting at line : 1 in command -
SELECT
    jt.*
FROM
    t1,
    JSON_TABLE ( t1.json_txt, '$.results[*]' ERROR ON ERROR
            COLUMNS (
                resid   NUMBER   PATH '$.resultID',
                resdate DATE     PATH '$.resultDate'
            )
        )
    AS jt
WHERE
    t1.id = 1
Error report -
ORA-01861: literal does not match format string

This was kind of surprising; my suspicion at the time was that something was wrong with the NLS_DATE_FORMAT settings. But whatever I tried, I always got the same error. Searching MOS and other resources on the net turned up blank, which might well be down to my inadequate search efforts. Tracing didn’t really help either.

Second Test

I tried a few other things until, with the help of an educated guess, I got lucky: I removed the time portion from resultDate and formatted it like the output you get from selecting sysdate from dual, as shown here:

SQL> select json_query(json_txt, '$.results' pretty error on error) 
  2  from t1 where id = 2; 

JSON_QUERY(JSON_TXT,'$.RESULTS'PRETTYERRORONERROR)          
------------------------------------------------------------
[                                                           
  {                                                         
    "resultID" : 2,                                         
    "resultDate" : "13-JUN-19",                             
    "details" : "none"                                      
  }                                                         
]                                                           

This time around, the error message was different:

SQL> SELECT
  2      jt.*
  3  FROM
  4      t1,
  5      JSON_TABLE ( t1.json_txt, '$.results[*]' ERROR ON ERROR
  6              COLUMNS (
  7                  resID   NUMBER    PATH '$.resultID',
  8                  resDATE DATE      PATH '$.resultDate'
  9              )
 10          )
 11      AS jt
 12  WHERE
 13      t1.id = 2;

Error starting at line : 1 in command -
SELECT
    jt.*
FROM
    t1,
    JSON_TABLE ( t1.json_txt, '$.results[*]' ERROR ON ERROR
            COLUMNS (
                resID   NUMBER    PATH  '$.resultID',
                resDATE DATE      PATH '$.resultDate'
            )
        )
    AS jt
WHERE
    t1.id = 2
Error report -
ORA-40688: date or time stamp string value does not conform
to ISO 8601 format

Aha! That’s much better than “ORA-01861: literal does not match format string”. So it would appear as if the date the system expects to find in the JSON document must be ISO 8601 compliant. Ummm, what exactly is an ISO 8601 date format? The Internet knew!
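To see what such a value looks like without firing up a database, here is a quick Python illustration (nothing Oracle-specific, just the format):

```python
from datetime import datetime, timezone

# An ISO 8601 timestamp: date, a literal 'T', the time, and a UTC offset.
ts = datetime(2019, 6, 13, 16, 39, 0, tzinfo=timezone.utc)
print(ts.isoformat())  # 2019-06-13T16:39:00+00:00
```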

Third Test

So maybe using an ISO 8601 time format in the JSON document is the answer? Worth a try, the JSON document now looks like this:

SQL> select json_query(json_txt, '$.results' pretty error on error) 
  2  from t1 where id = 3;

JSON_QUERY(JSON_TXT,'$.RESULTS'PRETTYERRORONERROR)          
------------------------------------------------------------
[                                                           
  {                                                         
    "resultID" : 3,                                         
    "resultDate" : "2019-06-13T16:39:00+00:00",             
    "details" : "none"                                      
  }                                                         
]     

It seems that ISO 8601 requires a slightly more complex time expression, much more like a timestamp in Oracle. Let’s see if this works:

SQL> SELECT
  2      jt.*
  3  FROM
  4      t1,
  5      JSON_TABLE ( t1.json_txt, '$.results[*]' ERROR ON ERROR
  6              COLUMNS (
  7                  resID   NUMBER    PATH  '$.resultID',
  8                  resDATE DATE      PATH '$.resultDate'
  9              )
 10          )
 11      AS jt
 12  WHERE
 13      t1.id = 3;

     RESID RESDATE  
---------- ---------
         3 13-JUN-19

Hurray, no errors, but result data for a change!

Caveat

But wait, that’s not the whole story. I have stored a time component in the input document, so how do I get that? I’d normally use to_char() for that purpose, so let’s give that a go:

SQL> SELECT
  2      to_char(jt.resDATE, 'dd.mm.yyyy hh24:mi:ss') as formattedDate
  3  FROM
  4      t1,
  5      JSON_TABLE ( t1.json_txt, '$.results[*]' ERROR ON ERROR
  6              COLUMNS (
  7                  resID   NUMBER    PATH  '$.resultID',
  8                  resDATE DATE      PATH '$.resultDate'
  9              )
 10          )
 11      AS jt
 12  WHERE
 13      t1.id = 3;

FORMATTEDDATE      
-------------------
13.06.2019 00:00:00

Hmmm, that’s midnight, and not 16:39:00+00:00 … Unless I have missed something important, it seems as if the DATE column type in the columns() clause strips the input of its time component. Not to worry though, there’s a timestamp type. Maybe that can help?

SQL> SELECT
  2      jt.*
  3  FROM
  4      t1,
  5      JSON_TABLE ( t1.json_txt, '$.results[*]' ERROR ON ERROR
  6              COLUMNS (
  7                  resID   NUMBER    PATH '$.resultID',
  8                  resDATE TIMESTAMP PATH '$.resultDate'
  9              )
 10          )
 11      AS jt
 12  WHERE
 13      t1.id = 3;

     RESID RESDATE                     
---------- ----------------------------
         3 13-JUN-19 16.39.00.000000000

Voila! I’ve got a date and time.

Summary

From my very limited testing it appears as if the input date in the JSON document has to be provided in ISO 8601 format, otherwise Oracle raises an error. Unlike in sqlldr’s control file, there appears to be no option to tell Oracle about the formatting. Changing the NLS_DATE_FORMAT (not shown here) affected only the output of the query; it didn’t appear to be applied when parsing the input.