Category Archives: Oracle

Oracle (Database and Middleware) related posts

Installation of Oracle REST Data Services 20.4 without using SYSDBA

It’s really hard to come up with good, concise, and short twitter-compatible blog titles, so let’s try with a few more words. What I’d like to share is how to install Oracle REST Data Services (ORDS) v20.4 without having to connect to the database as SYSDBA. There are good reasons not to connect as SYSDBA, and I’m glad Oracle made it possible to grant the necessary privileges for an ORDS installation to a regular database account. It’s not a new feature at all; the option has been around for a while, but I haven’t had time to write about it until now.

Some background before getting started

The installation of Oracle REST Data Services (ORDS) consists of two parts:

  1. Deploying the ORDS distribution to a Docker container or virtual machine
  2. Connecting to and configuring its database

Initially it was necessary to connect with the highly privileged SYSDBA role to complete step 2. Thankfully this is no longer the case. This post describes how to install and configure ORDS 20.4 against an Oracle 19c (non-container) database.

Deploying ORDS

The first step consists of downloading ORDS from Oracle’s website. I am planning on deploying it in one of my Oracle Linux 8 Vagrant boxes. There is more to deploying ORDS than configuration, which I’m leaving for a later post. More specifically, I won’t concern myself with integrating ORDS into a Docker container or Tomcat 9.x, in order to keep this post simple.

The Ansible playbook I use creates an oracle account out of habit. It also creates the location I want ORDS to use – /opt/ords – and changes the directory’s ownership to oracle:oinstall. Finally, it unzips the ORDS distribution.
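For reference, here is a minimal sketch of what such a playbook could look like. The task names, the location of the downloaded zip file, and the module parameters are my assumptions, not the exact playbook I use:

```yaml
# Hedged sketch of the provisioning steps described above; the zip file
# location is a hypothetical download path.
- name: create the oracle account
  ansible.builtin.user:
    name: oracle
    group: oinstall

- name: create the ORDS directory
  ansible.builtin.file:
    path: /opt/ords
    state: directory
    owner: oracle
    group: oinstall

- name: unzip the ORDS distribution
  ansible.builtin.unarchive:
    src: /tmp/ords-20.4.3.zip
    dest: /opt/ords
    owner: oracle
    group: oinstall
    remote_src: yes
```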

Configuring ORDS

ORDS stores its configuration in a directory of your choice. Since I’m lazy I use /opt/ords/config for this purpose. Once the directory is created on the file system you tell ORDS where to find its configuration:

[oracle@ords ords]$ java -jar /opt/ords/ords.war configdir /opt/ords/config
2021-04-21T18:48:23.156Z INFO        Set config.dir to /opt/ords/config in: /opt/ords/ords.war

Once this step is complete it’s time to configure ORDS and its database connection. This step is #2 in the above list and referred to as “installation” in the documentation.

Creating the less-privileged user in the database

As per the Install Guide you need to run a script granting a non-privileged user the rights to configure ORDS. The script doesn’t create the user, so you have to ensure it exists first. The user doesn’t seem to require any elevated privileges beyond CREATE SESSION. I created the installation account as follows:

SQL> create user ordsinstall identified by values '...';

User created.

SQL> grant create session to ordsinstall;

Grant succeeded.

With the user created I could run the minimum-privileges script:

SQL> @installer/ords_installer_privileges ordsinstall

The script takes one argument: the account you intend to use for the installation (ordsinstall).

Installing ORDS in the database

The final step is to install ORDS in the database. This can be done in many ways; to keep it simple I went with the interactive installation.

Have a look at the screen output; it should be self-explanatory for the most part. When prompted for the administrator username, provide the account just created (ordsinstall in my case). Since I wanted to try SQL Developer Web, I chose that option. Your mileage may vary.

[oracle@ords ords]$ java -jar ords.war install advanced
Specify the database connection type to use.
Enter number for [1] Basic  [2] TNS  [3] Custom URL [1]:1
Enter the name of the database server [localhost]:server3
Enter the database listen port [1521]:
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:1
Enter the database service name:ORCL
Enter 1 if you want to verify/install Oracle REST Data Services schema or 2 to skip this step [1]:
Enter the database password for ORDS_PUBLIC_USER:
Confirm password:
Requires to login with administrator privileges to verify Oracle REST Data Services schema.

Enter the administrator username:ordsinstall
Enter the database password for ordsinstall:
Confirm password:
Connecting to database user: ordsinstall url: jdbc:oracle:thin:@//server3:1521/ORCL

Retrieving information.
Enter the default tablespace for ORDS_METADATA [SYSAUX]:
Enter the temporary tablespace for ORDS_METADATA [TEMP]:
Enter the default tablespace for ORDS_PUBLIC_USER [SYSAUX]:
Enter the temporary tablespace for ORDS_PUBLIC_USER [TEMP]:
Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step.
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:2
Enter a number to select a feature to enable:
   [1] SQL Developer Web  (Enables all features)
   [2] REST Enabled SQL
   [3] Database API
   [4] REST Enabled SQL and Database API
   [5] None
Choose [1]:1
2021-04-21T19:38:18.012Z INFO        reloaded pools: []
Installing Oracle REST Data Services version 20.4.3.r0501904
... Log file written to /home/oracle/ords_install_core_2021-04-21_193818_00414.log
... Verified database prerequisites
... Created Oracle REST Data Services proxy user
Warning: Nashorn engine is planned to be removed from a future JDK release
... Created Oracle REST Data Services schema
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to /home/oracle/ords_install_datamodel_2021-04-21_193842_00226.log
... Log file written to /home/oracle/ords_install_apex_2021-04-21_193846_00491.log
Completed installation for Oracle REST Data Services version 20.4.3.r0501904. Elapsed time: 00:00:32.177 

Enter 1 if you wish to start in standalone mode or 2 to exit [1]:2
[oracle@ords ords]$

That’s it! ORDS has been configured in the database, and I didn’t have to connect as SYSDBA. I think that’s a big step forward, and I had been meaning to write about this topic for a while.

Have fun!

Connecting to a database using SQLcl, a wallet and the thin driver

I have previously written about using SQLcl with an Oracle wallet (aka secure external password store). In my previous article I used the JDBC OCI driver. This is perfectly fine, but as it’s based on Oracle’s Call Interface, it requires client libraries to be present. This time I wanted to use the thin driver. A cursory search didn’t reveal any recent information about the topic, so I decided to write this short post.

Creating the Wallet

The wallet is created exactly the same way as described in an earlier article of mine. Scroll down to “Preparing the External Password Store” and have a look at the steps required. You probably don’t have a client installation on the SQLcl host, so you need to prepare the wallet on a machine that has one. When providing input to mkstore and when creating tnsnames.ora it still seems best to ensure all identifiers are in lower case, even if your db_name is in upper case like mine (“ORCL”).
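If you want to be certain about the lower-case rule, you can normalise the identifiers in the shell before feeding them to mkstore or pasting them into tnsnames.ora. A tiny sketch (the helper function is mine, not part of any Oracle tooling):

```shell
# Normalise an identifier to lower case; "ORCL" is the upper-case db_name
# mentioned above.
lower() {
    printf '%s' "$1" | tr '[:upper:]' '[:lower:]'
}

ALIAS=$(lower "ORCL")
printf '%s\n' "$ALIAS"
```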

Once the wallet is created and tested, transfer it to the sqlcl-host in a secure manner.

Deploying SQLcl

SQLcl – among other things – allows you to connect to an Oracle database without a client installation. This is quite useful in many cases. I downloaded SQLcl 21c (build 21.1.0.104.1544) from Oracle’s website and deployed it in ~/sql.

Testing the Wallet

I dropped the wallet and associated configuration files created earlier into ~/tns on my sqlcl-host. There is no need to set any environment variables at all. For reference, the following files were present on this vagrant box:

[vagrant@ords ~]$ ls -l ~/tns
total 20
-rw-------. 1 vagrant vagrant 581 Apr 20 19:29 cwallet.sso
-rw-------. 1 vagrant vagrant   0 Apr 20 19:28 cwallet.sso.lck
-rw-------. 1 vagrant vagrant 536 Apr 20 19:29 ewallet.p12
-rw-------. 1 vagrant vagrant   0 Apr 20 19:28 ewallet.p12.lck
-rw-r--r--. 1 vagrant vagrant  89 Apr 20 19:30 ojdbc.properties
-rw-r--r--. 1 vagrant vagrant 120 Apr 20 19:30 tnsnames.ora
[vagrant@ords ~]$  

With the files in place you connect to the database as follows:

[vagrant@ords bin]$ ./sql /@jdbc:oracle:thin:/@orcl?TNS_ADMIN=/home/vagrant/tns

SQLcl: Release 21.1 Production on Tue Apr 20 20:17:12 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 20 2021 20:17:15 +00:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0


SQL> select host_name from v$instance;

HOST_NAME 
-------------
server3      

SQL>   

Note the leading forward slashes in the connection string; they are absolutely mandatory. The beauty of the EZConnect Plus syntax is that I don’t need to set TNS_ADMIN as an environment variable anymore. EZConnect Plus can do many more things; feel free to browse the white paper I linked to.
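For clarity, the connect string can be assembled from its parts. A quick shell sketch using the alias and wallet directory from the example above:

```shell
# Build the SQLcl thin-driver connect string: the leading "/" means "use
# the wallet credentials", the second "/@" introduces the TNS alias, and
# TNS_ADMIN points at the directory holding the wallet and tnsnames.ora.
TNS_DIR=/home/vagrant/tns
ALIAS=orcl

URL="/@jdbc:oracle:thin:/@${ALIAS}?TNS_ADMIN=${TNS_DIR}"
printf '%s\n' "$URL"
```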

Have fun!

New initialisation parameters for Transparent Data Encryption beginning in 19c

This blog post is a short summary of my attempt to configure Transparent Data Encryption (TDE) in a single-instance Oracle 19c Container Database (CDB). Reading the documentation I noticed that the way I used to configure TDE has changed: there are two new initialisation parameters to use since SQLNET.ENCRYPTION_WALLET_LOCATION was deprecated. They are:

  • wallet_root
  • tde_configuration

This article assumes prior knowledge of TDE; if you don’t have it, I suggest you head over to the Advanced Security Guide for Oracle 19c. I won’t be so bold as to claim my configuration is perfect or even 100% correct; it does seem to work though. Please make sure you get sign-off from your security team once you have completed your TDE configuration.

If you are interested in Transparent Data Encryption please ensure your database is appropriately licensed to use the feature!

Reference

I used the following chapters in the Oracle Advanced Security Guide as reference:

  • Chapter 3: Configuring Transparent Data Encryption
  • Chapter 5: Managing Keystores and TDE Master Encryption Keys in United Mode

Current Environment

Key properties of the environment I’m creating in the lab:

  • I’ll use a software keystore
  • Oracle Restart 19c Enterprise Edition
  • Separation of duties in place in an Oracle Restart configuration
  • I use a Container Database with a single PDB (pdb1). The database has just been created by dbca
  • Configuration of “united mode” where all PDBs share the same keystore with the CDB$ROOT
  • I would like to use an auto-login keystore, trading security for convenience

I would have liked to use a local auto-login keystore; however, I’m planning on writing another article discussing Data Guard, and I don’t think a local auto-login keystore would have worked as – per the docs – you can only open it on the node where it was created.

In united mode, you create the keystore and TDE master encryption key for the CDB and all PDBs residing in the same keystore. This might not be a good choice for you; make sure your implementation is consistent with your security team’s requirements.

Configure location and keystore type

This step is significantly different from previous releases and, from what I understand, is the main new thing in 19c.

[oracle@dgnode1 ~]$ mkdir /u01/app/oracle/admin/SITEA/wallets || echo "the directory exists, check your configuration"

Connect to CDB$ROOT and set wallet_root and tde_configuration

[oracle@dgnode1 ~]$ sq

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Feb 16 09:14:05 2021
Version 19.8.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL> show parameter wallet_root

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
wallet_root                          string

SQL> alter system set wallet_root='/u01/app/oracle/admin/SITEA/wallets' scope=spfile;

System altered.

Unfortunately it is necessary to bounce the instance, as wallet_root is a static parameter, and you have to set wallet_root before you can change tde_configuration. So it’s time to bounce the database now. Once it’s back up, the finishing touches can be applied.

SQL> alter system set tde_configuration="keystore_configuration=file" scope=both;

System altered.

SQL> select name, value from v$parameter
  2  where name in ('wallet_root', 'tde_configuration');

NAME                 VALUE
-------------------- ------------------------------------------------------
wallet_root          /u01/app/oracle/admin/SITEA/wallets
tde_configuration    keystore_configuration=file

To avoid bouncing the instance you could have set the parameters via dbca when creating the database.

Create the software keystore

I was surprised when I noticed that Oracle appended “/tde” to the location specified by wallet_root. According to the documentation this is correct.

SQL> select wrl_parameter, status, wallet_type, keystore_mode, con_id from v$encryption_wallet
  2  /

WRL_PARAMETER                                      STATUS                         WALLET_TYPE          KEYSTORE     CON_ID
-------------------------------------------------- ------------------------------ -------------------- -------- ----------
/u01/app/oracle/admin/SITEA/wallets/tde/           NOT_AVAILABLE                  UNKNOWN              NONE              1
                                                   NOT_AVAILABLE                  UNKNOWN              UNITED            2
                                                   NOT_AVAILABLE                  UNKNOWN              UNITED            3

The value of NONE in KEYSTORE_MODE for CON_ID 1 is expected, as is the absence of wallet locations for the PDBs: CDB$ROOT owns the keystore for all PDBs.

Let’s create an auto-login software keystore. This might not be the right keystore type for you; make sure to read the documentation about keystore types and their pros and cons.

Create the necessary password-protected keystore

Before creating the auto login keystore, a password protected keystore has to be created. This is what I did:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> show parameter wallet_root

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------------------------------
wallet_root                          string      /u01/app/oracle/admin/SITEA/wallets

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> set verify off

SQL> administer key management create keystore identified by &password;

keystore altered.

SQL> select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

    CON_ID WRL_PARAMETER                                      STATUS                         WALLET_TYPE          KEYSTORE
---------- -------------------------------------------------- ------------------------------ -------------------- --------
         1 /u01/app/oracle/admin/SITEA/wallets/tde/           CLOSED                         UNKNOWN              NONE
         2                                                    CLOSED                         UNKNOWN              UNITED
         3                                                    CLOSED                         UNKNOWN              UNITED

The keystore is created, but it’s closed. Here I had to choose between keeping the keystore password-protected and converting it to an auto-login keystore. The higher degree of security in the first case is offset by the necessity to enter the keystore password when starting the database. In my lab environment I wanted to be able to start the database without manual intervention, so an auto-login keystore it is. As you read earlier, I would have preferred a local auto-login keystore, but I decided against it since I will create a physical standby for this database. The Advanced Security Guide doesn’t provide specifics about the keystore type to be used with Data Guard.

Deciding between an auto login keystore and a password protected keystore isn’t a decision for the DBA to make. This is one for the security team!

Convert to auto-login keystore

Straightforward, but the keystore location cannot be omitted this time. The docs instruct us to use the location shown in wrl_parameter in v$encryption_wallet. And yes, please use $WALLET_ROOT/tde for this.

SQL> administer key management create auto_login keystore
  2  from keystore '/u01/app/oracle/admin/SITEA/wallets/tde'
  3  identified by &password;

keystore altered.

SQL> select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

    CON_ID WRL_PARAMETER                                      STATUS                         WALLET_TYPE          KEYSTORE
---------- -------------------------------------------------- ------------------------------ -------------------- --------
         1 /u01/app/oracle/admin/SITEA/wallets/tde/           OPEN_NO_MASTER_KEY             AUTOLOGIN            NONE
         2                                                    OPEN_NO_MASTER_KEY             AUTOLOGIN            UNITED
         3                                                    OPEN_NO_MASTER_KEY             AUTOLOGIN            UNITED

This command created the following files:

SQL> !ls -laR /u01/app/oracle/admin/SITEA/wallets
/u01/app/oracle/admin/SITEA/wallets:
total 0
drwxr-xr-x. 3 oracle oinstall 17 Feb 17 13:16 .
drwxr-x---. 7 oracle oinstall 79 Feb 17 13:12 ..
drwxr-x---. 2 oracle asmadmin 44 Feb 17 13:16 tde

/u01/app/oracle/admin/SITEA/wallets/tde:
total 8
drwxr-x---. 2 oracle asmadmin   44 Feb 17 13:16 .
drwxr-xr-x. 3 oracle oinstall   17 Feb 17 13:16 ..
-rw-------. 1 oracle asmadmin 2600 Feb 17 13:16 cwallet.sso
-rw-------. 1 oracle asmadmin 2555 Feb 17 13:16 ewallet.p12

Since this is an auto-login keystore it is not necessary to open it manually. A master key is still missing though; it needs to be created.

Create the TDE master encryption key

Important: before running this command ensure all the PDBs in your CDB are open read-write!

SQL> administer key management set key force keystore identified by &password with backup container=all;

keystore altered.

It was necessary to use the “force keystore” option due to this error:

SQL> administer key management set key identified by &password with backup container = all;
administer key management set key identified by ... with backup
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

When clearly it was open! The keystore status column read OPEN_NO_MASTER_KEY, see above.

It was also necessary to specify container = all, otherwise the PDB wouldn’t have been assigned a key. This was the output of my first attempt, before I specified container = all:

SQL> administer key management set key force keystore identified by &password with backup;

keystore altered.

SQL> select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

    CON_ID WRL_PARAMETER                                 STATUS                         WALLET_TYPE          KEYSTORE
---------- --------------------------------------------- ------------------------------ -------------------- --------
         1 /u01/app/oracle/admin/SITEA/wallets/tde/      OPEN                           LOCAL_AUTOLOGIN      NONE
         2                                               OPEN                           LOCAL_AUTOLOGIN      UNITED
         3                                               OPEN_NO_MASTER_KEY             LOCAL_AUTOLOGIN      UNITED 

Note how CON_ID 3 (my PDB) doesn’t have a master key. According to the documentation it shouldn’t be necessary to specify container = all when using united mode, but apparently it is.

Validating the configuration

The documentation provides a query to check if the master key is enabled:

SQL> select con_id, masterkey_activated from v$database_key_info;

    CON_ID MAS
---------- ---
         1 YES
         2 NO
         3 YES

So both my CDB$ROOT and PDB1 have the master key activated. I found the query against v$encryption_wallet useful, too.

SQL> select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

    CON_ID WRL_PARAMETER                                      STATUS                         WALLET_TYPE          KEYSTORE
---------- -------------------------------------------------- ------------------------------ -------------------- --------
         1 /u01/app/oracle/admin/SITEA/wallets/tde/           OPEN                           AUTOLOGIN            NONE
         2                                                    OPEN                           AUTOLOGIN            UNITED
         3                                                    OPEN                           AUTOLOGIN            UNITED

The autologin wallet is now open and ready for use.

Encrypt data

Connect to the PDB and create an encrypted tablespace using the default encryption algorithm. Be advised you can specify which encryption algorithm to use in the create tablespace command. I’m just mucking around with the feature and thus am perfectly happy with the default.

SQL> alter session set container = pdb1;

Session altered.

SQL> create tablespace soe_tbs datafile size 2g autoextend on next 1g maxsize 5g 
  2  encryption encrypt;

Tablespace created.

SQL> select ets.ts#, ets.encryptionalg, ets.key_version, ets.status, ts.name
  2  from v$encrypted_tablespaces ets join v$tablespace ts on (ets.ts# = ts.ts#);

       TS# ENCRYPT KEY_VERSION STATUS     NAME
---------- ------- ----------- ---------- ------------------------------
         6 AES128            0 NORMAL     SOE_TBS

Now let’s load some data… As always, I use Swingbench for this purpose. I’m pointing oewizard to the existing (TDE) tablespace; that’s an important aspect!

martin@client:~/java/swingbench/bin$ 
>./oewizard -cl -create ... -scale 1 -tc 4 -ts SOE_TBS -u soe  

While the data creation job is running you can see the number of encrypted blocks increase in the encrypted tablespace:

SQL> select ets.ts#, ets.encryptionalg, ets.key_version, ets.status, 
  2     ets.blocks_encrypted, ets.blocks_decrypted, ts.name
  3    from v$encrypted_tablespaces ets 
  4     join v$tablespace ts on (ets.ts# = ts.ts#);

       TS# ENCRYPT KEY_VERSION STATUS     BLOCKS_ENCRYPTED BLOCKS_DECRYPTED NAME
---------- ------- ----------- ---------- ---------------- ---------------- ------------------------------
         6 AES128            0 NORMAL                28587                0 SOE_TBS

Similarly, you can see the number of blocks decrypted increase when you query data from the SOE schema.

13:26:06 SQL> select ets.ts#, ets.encryptionalg, ets.key_version, 
  2     ets.status, ets.blocks_encrypted, ets.blocks_decrypted, ts.name
  3      from v$encrypted_tablespaces ets 
  4      join v$tablespace ts on (ets.ts# = ts.ts#);

       TS# ENCRYPT KEY_VERSION STATUS     BLOCKS_ENCRYPTED BLOCKS_DECRYPTED NAME
---------- ------- ----------- ---------- ---------------- ---------------- ------------------------------
         6 AES128            0 NORMAL                69901                0 SOE_TBS

13:26:07 SQL> /

       TS# ENCRYPT KEY_VERSION STATUS     BLOCKS_ENCRYPTED BLOCKS_DECRYPTED NAME
---------- ------- ----------- ---------- ---------------- ---------------- ------------------------------
         6 AES128            0 NORMAL               155725           191459 SOE_TBS

13:26:57 SQL> 

You can also notice decrypted blocks. As I haven’t run charbench yet, I assume these are caused by the table scans performed while building the indexes.

Summary

This concludes the article on implementing TDE for an Oracle single-instance database. So far a lot of complexities have been omitted: no RAC, no Data Guard, no additional features requiring wallets. I am planning on extending this article to Data Guard setups, and eventually discuss a RAC setup.

Create a custom service in single instance Oracle 19c

This post demonstrates how to create a custom service in a single-instance Oracle 19c database. As per the Oracle 19c PL/SQL Packages and Types Reference, this is the only Oracle database deployment option where you are allowed to use this technique. Anything to do with high availability rules this approach out straight away: that includes databases managed by Clusterware (both Oracle Real Application Clusters and Oracle Restart) and by Global Data Services (GDS).

Furthermore, the service_name parameter to DBMS_SERVICE.CREATE_SERVICE() is deprecated, so I wouldn’t recommend running the code in this blog post on anything newer than Oracle 19c.

If you have a 19c single-instance database where the use of DBMS_SERVICE.CREATE_SERVICE is permitted, you might find this post useful. As always, my example deals with Swingbench. I created a CDB containing a PDB named swingbench1. In addition to the default service I would like to start a custom service, swingbench1_svc. This little SQL*Plus script should do the trick.

To add a little bit of a safety net I added a short prompt reminding you that you can’t use DBMS_SERVICE.START_SERVICE in combination with RAC/Oracle Restart/Global Data Services. The script requires you to be logged in as SYS, but that’s easy enough to change if you prefer not to.

whenever sqlerror exit

set verify off

define v_service=swingbench1_svc

-- let's try to prevent problems with RAC/Oracle Restart and GDS before we start

prompt This script cannot be run if your database is managed by Clusterware (RAC/Oracle Restart)
prompt or Global Data Services (GDS). 
prompt
accept ok_to_run prompt 'Is this environment a single-instance database [y/n] '

BEGIN
    IF upper('&ok_to_run') != 'Y' THEN
        raise_application_error(-20001, 'you must not use this script with RAC/Oracle Restart/GDS');
    END IF;
END;
/


DECLARE
    v_parameters dbms_service.svc_parameter_array;
    service_exists EXCEPTION;
    service_running EXCEPTION;
    PRAGMA exception_init ( service_exists, -44303 );
    PRAGMA exception_init ( service_running, -44305);
    v_version VARCHAR2(100);
    v_compatibility VARCHAR2(100);
    
BEGIN
    -- must be connected as SYS to a non-CDB or PDB
    IF
        sys_context('userenv', 'cdb_name') IS NOT NULL
        AND sys_context('userenv', 'con_id') <= 2
    THEN
        raise_application_error(-20002, 'you must be connected to a PDB');
    END IF;

    IF sys_context('userenv', 'session_user') != 'SYS' THEN
        raise_application_error(-20002, 'you must by logged in as SYS to run this code');
    END IF;

    -- make sure this is 19c
    dbms_utility.db_version(v_version, v_compatibility);
    if v_version != '19.0.0.0.0' then
        raise_application_error(-20003, 'you must run this script in a 19c database');
    end if;    

    -- create the service, there is no need to provide any parameters
    -- for a single instance database. Ignore the error should the service exist
    BEGIN
        dbms_service.create_service(
            service_name => '&v_service', 
            network_name => '&v_service',
            parameter_array => v_parameters);
    EXCEPTION
        WHEN service_exists THEN
            NULL;
        WHEN others THEN
            raise;
    END;
            
    -- and start it. Ignore an error in case it's running
    BEGIN
        dbms_service.start_service('&v_service');
    EXCEPTION
        WHEN service_running THEN
            NULL;
        WHEN others THEN
            raise;
    END;
END;
/

-- make sure the service starts when the database opens
CREATE OR REPLACE TRIGGER SYS.SERVICES_TRIG
AFTER STARTUP ON DATABASE 
BEGIN
    IF sys_context('userenv','database_role') = 'PRIMARY' THEN
        dbms_service.start_service('&v_service');
    END IF;
END;
/

Once the code is deployed, the service will start with the PDB:

SQL> select name, open_mode, con_id from v$pdbs;

NAME                           OPEN_MODE      CON_ID
------------------------------ ---------- ----------
PDB$SEED                       READ ONLY           2
SWINGBENCH1                    MOUNTED             4

SQL> select con_id, name from v$active_services order by 1,2;

    CON_ID NAME
---------- ------------------------------
         1 CDB
         1 CDBXDB
         1 SYS$BACKGROUND
         1 SYS$USERS
         4 swingbench1

SQL> alter pluggable database SWINGBENCH1 open;

Pluggable database altered.

SQL> select con_id, name from v$active_services order by 1,2;

    CON_ID NAME
---------- ------------------------------
         1 CDB
         1 CDBXDB
         1 SYS$BACKGROUND
         1 SYS$USERS
         4 swingbench1
         4 swingbench1_svc

6 rows selected.

That should do it: if I can’t rely on Clusterware or Global Data Services to manage services for my database, I can use this approach to create an “application” or custom service in my single instance database. And it works:

SQL> conn martin@(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = server1.example.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = swingbench1_svc)))
Enter password: 
Connected.
SQL> select sys_context('userenv','service_name') from dual;

SYS_CONTEXT('USERENV','SERVICE_NAME')
----------------------------------------------------------
swingbench1_svc

I like it when a plan comes together.

Swingbench: creating benchmark schemas with strong passwords

This post describes my experience creating the SOE user account in an Oracle 21 database provided by Oracle’s Database Cloud Service (DBCS). The same principle should apply for earlier releases as well, at the end of the day it boils down to the use of Oracle’s password verify functions. To avoid any ambiguity: the use of password verification functions isn’t a cloud-only feature ;)

Any security conscious administrator should enforce strong passwords and good password hygiene. There is a lot to be said about the topic, too much even for me :) Please refer to the Database Security Guide for your release for an introduction to the topic in the context of the Oracle database and take it from there.

Password Verify Functions

Quite recently I wanted to install Swingbench in my Oracle 21 system, hosted in Oracle’s Cloud Infrastructure (OCI). Unsurprisingly the database enforces strong passwords. Connected to my PDB (“demo_pdb”) I noticed the following in dba_profiles:

SQL> select profile, resource_name, limit from dba_profiles 
  2  where resource_name = 'PASSWORD_VERIFY_FUNCTION';

PROFILE 		       RESOURCE_NAME			LIMIT
------------------------------ -------------------------------- --------------------------------------------------
DEFAULT 		       PASSWORD_VERIFY_FUNCTION 	ORA12C_STRONG_VERIFY_FUNCTION
ORA_CIS_PROFILE 	       PASSWORD_VERIFY_FUNCTION 	ORA12C_VERIFY_FUNCTION
ORA_STIG_PROFILE	       PASSWORD_VERIFY_FUNCTION 	ORA12C_STIG_VERIFY_FUNCTION

SQL> 

Unsurprisingly, a weak password will not work in these circumstances:

SQL> create user martin identified by superWeakPassword;
create user martin identified by superWeakPassword
*
ERROR at line 1:
ORA-28003: password verification for the specified password failed
ORA-20000: password must contain 2 or more digits

In other words, you need to provide a strong password to Swingbench’s schema creation wizards as well.

Calling OEWizard

I commonly use the command line to create Swingbench’s benchmark schemas. This time around I wanted to create the Swingbench Order Entry schema. To adhere to the password-complexity rules I have to provide Oracle with a password containing special characters. As per the CREATE USER SQL command, passwords containing special characters need to be enclosed in double quotes. Now the trick is to read the bash(1) man page correctly, namely the section on quoting.

To keep it short: you need to wrap the actual password in a single-quote/double-quote combination, like so:

./oewizard -allindexes ... -u soe -p '"superSafeTempPassword"' ...

This way the password makes it all the way to oewizard, allowing it to create the user successfully.
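The effect of the nested quotes can be demonstrated with a quick shell experiment (the password below is just a made-up example): the shell consumes the outer single quotes, and the inner double quotes survive as part of the argument.

```shell
#!/usr/bin/env bash
# The outer single quotes are removed by the shell; the embedded double
# quotes are passed through untouched and reach the program (here printf,
# in the real case oewizard) as part of the argument.
password='"superSafeTempPassword"'

printf '%s\n' "$password"
# prints: "superSafeTempPassword"
```

Had you used plain double quotes only, the shell itself would have stripped them and Oracle would never see the quoting it needs.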

Change those passwords immediately

Unfortunately there is no way around providing passwords to oewizard on the command line. They will be visible to other users on the system, so make sure to change them as soon as the wizard has finished with the schema creation.

Password complexity rules

By the way, the Oracle-provided password verification functions and their password complexity rules are explained in chapter 3 of the Database Security Guide. If you get ORA-28003/ORA-20000 in OEWizard, or later in SQL*Plus when changing the password, your new password doesn’t adhere to the complexity rules.

Happy benchmarking!

A mostly academic post about startup triggers and when they fire in Real Application Clusters

Out of curiosity I wanted to figure out when and how often startup triggers fire in RAC 19c. Do they fire only once, when the database opens, or once for each instance? A little experiment reveals the behaviour. As the title says, this is mostly academic; I hope it’ll save me 5 minutes some time in the future. Maybe it saves you 5 minutes right now?

I am using Oracle 19.8.0 RAC in a two node configuration, with the database created as a container database (CDB). To figure out what happens with the startup trigger I created a little bit of infrastructure, like so:

CREATE TABLE c##martin.logtab (
    id         NUMBER GENERATED ALWAYS AS IDENTITY NOT NULL,
    ts         TIMESTAMP(6) NOT NULL,
    inst_id    NUMBER NOT NULL,
    inst_name  VARCHAR2(200) NOT NULL,
    CONSTRAINT pk_logtab PRIMARY KEY ( id )
)
TABLESPACE users;

Normally I’d be more careful about the sequence’s properties, especially in RAC, but not with a startup trigger …

With the logging table in place I can create a stored procedure to log the data:

CREATE OR REPLACE PROCEDURE c##martin.logproc AS
BEGIN
    INSERT INTO c##martin.logtab (
        ts,
        inst_id,
        inst_name
    ) VALUES (
        systimestamp,
        sys_context('userenv', 'instance'),
        sys_context('userenv', 'instance_name')
    );

    COMMIT;
END;
/  

This way I can see what’s going on. Now the only part missing is the actual startup trigger. It has to be created by SYS.

CREATE OR REPLACE TRIGGER sys.startup_trig 
AFTER STARTUP ON DATABASE 
BEGIN
    c##martin.logproc;
END;
/  

Restart the database

Now that everything is in place I can restart the database to see what happens.

[oracle@rac19pri1 ~]$ srvctl stop database -db RCDB ; srvctl start database -db RCDB
[oracle@rac19pri1 ~]$ srvctl status database -d RCDB -verbose
Instance RCDB1 is running on node rac19pri1. Instance status: Open.
Instance RCDB2 is running on node rac19pri2. Instance status: Open.
[oracle@rac19pri1 ~]$  

Looking into the logging table after the database came up, I can see the trigger fired twice:

        ID TS                                INST_ID INST_NAME
---------- ------------------------------ ---------- ------------------------------
         1 11-JAN-21 07.08.04.205358 PM            2 RCDB2
        21 11-JAN-21 07.08.04.465076 PM            1 RCDB1 

This proves that startup triggers fire once per instance. Which makes total sense now that I think about it, but it’s nice to know for sure. Well, for some people it might be.

Summary

In a RAC environment a startup trigger fires once for each instance as it opens, irrespective of how many instances there are.

You should probably know that triggers have somewhat fallen out of fashion and should only be used with great care and after understanding their advantages as well as disadvantages. Reading the 19c Database PL/SQL Language Reference is a first step towards understanding trigger concepts.

Device name persistence in the cloud: OCI

Device name persistence is an important concept for everyone deploying the Oracle database. In this little series I’ll show how you can achieve device name persistence with Oracle Cloud Infrastructure (OCI) and block storage. I am hoping to share future parts for Azure and AWS.

In the example I’m going to prepare a cloud VM for the installation of Oracle Grid Infrastructure 19.9.0. To do so I have created a number of block devices in addition to the boot volume:

  • One block volume to contain the Oracle binaries
  • Two block volumes to be used as +DATA
  • Two more block volumes for +RECO

This is going to be a playground environment; the block volume sizes are unrealistically small. You will certainly need larger block devices for a production environment. Additionally, there is most likely a cost associated with creating these resources, so be careful!

Block devices

The following block devices have been created previously, and are waiting to be attached to the VM:

cloudshell:~ (eu-frankfurt-1)$ oci bv volume list -c $C \
 --query "data [?contains(\"display-name\", 'dbinst1')].{AD:\"availability-domain\",name:\"display-name\"}" \
 --output table
+--------------------------+--------------+
| AD                       | name         |
+--------------------------+--------------+
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv01 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv02 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv03 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv04 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv05 |
+--------------------------+--------------+ 

These now need to be attached to my VM, called dbinst1. You may have guessed ;)

Block device attachment

Once the block devices are created, they need to be attached to the VM. There are many ways to do so, but since I’m using a script in Cloud Shell I went with the OCI Command Line Interface (CLI). For example:

oci compute volume-attachment attach-paravirtualized-volume \
--instance-id ocid1.instance.oc1.eu-frankfurt-1.a...a \
--volume-id ocid1.volume.oc1.eu-frankfurt-1.a...q \
--device "/dev/oracleoci/oraclevdf" 

This command attached the 5th block volume to the VM as /dev/oracleoci/oraclevdf. I have other volumes attached as /dev/oracleoci/oraclevd[a-e] already. Note that I opted to add the block volumes using the paravirtualised option. This is fine for my playground VM where I don’t really expect or need the last bit of I/O performance. If you need performance, you need to go with the iSCSI attachment type.
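When attaching all data volumes, a script can simply iterate over the device suffixes to keep the persistent names deterministic. The sketch below only prints the CLI invocations rather than running them; the OCIDs are placeholders you would normally look up via oci bv volume list.

```shell
#!/usr/bin/env bash
# Generate one attach command per block volume so each volume ends up
# with a predictable persistent name (oraclevdb .. oraclevdf).
# The OCIDs are placeholders - substitute the real ones for your tenancy.
instance_ocid="ocid1.instance.oc1.eu-frankfurt-1.placeholder"
suffixes=(b c d e f)

for i in "${!suffixes[@]}"; do
  printf 'oci compute volume-attachment attach-paravirtualized-volume --instance-id %s --volume-id %s --device /dev/oracleoci/oraclevd%s\n' \
    "${instance_ocid}" "ocid1.volume.oc1.eu-frankfurt-1.placeholder-$((i + 1))" "${suffixes[$i]}"
done
```

Piping the generated commands into a review step before executing them is a cheap way to double-check the volume-to-device mapping.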

Block device use

And this is all there is to it: the para-virtualised block devices are immediately visible on dbinst1:

[opc@dbinst1 ~]$ lsscsi
[2:0:0:1]    disk    ORACLE   BlockVolume      1.0   /dev/sde 
[2:0:0:2]    disk    ORACLE   BlockVolume      1.0   /dev/sda 
[2:0:0:3]    disk    ORACLE   BlockVolume      1.0   /dev/sdb 
[2:0:0:4]    disk    ORACLE   BlockVolume      1.0   /dev/sdd 
[2:0:0:5]    disk    ORACLE   BlockVolume      1.0   /dev/sdc 
[2:0:0:6]    disk    ORACLE   BlockVolume      1.0   /dev/sdf 
[opc@dbinst1 ~]$  

The only thing to be aware of is that you shouldn’t use the native block device names. Instead, use the device name you assigned when attaching the block device:

[opc@dbinst1 ~]$ ls -l /dev/oracleoci/*
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevda -> ../sde
lrwxrwxrwx. 1 root root 7 Nov 24 06:38 /dev/oracleoci/oraclevda1 -> ../sde1
lrwxrwxrwx. 1 root root 7 Nov 24 06:38 /dev/oracleoci/oraclevda2 -> ../sde2
lrwxrwxrwx. 1 root root 7 Nov 24 06:38 /dev/oracleoci/oraclevda3 -> ../sde3
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevdb -> ../sda
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevdc -> ../sdb
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevdd -> ../sdd
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevde -> ../sdc
lrwxrwxrwx. 1 root root 6 Nov 24 07:18 /dev/oracleoci/oraclevdf -> ../sdf
[opc@dbinst1 ~]$  

My Ansible playbooks reference /dev/oracleoci/oraclevd*, thereby ensuring device name persistence across reboots. Happy automating!
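A quick way to review the mapping, for example at the start of a provisioning script, is to resolve the symlinks. This is only a sketch: /dev/oracleoci/* exists on OCI instances, so on any other host the loop simply reports the links as absent.

```shell
#!/usr/bin/env bash
# Print the persistent-name -> kernel-name mapping for the OCI block
# devices. /dev/oracleoci/* are symlinks maintained on OCI instances;
# elsewhere the loop reports them as not present.
for link in /dev/oracleoci/oraclevd{a..f}; do
  if [ -L "${link}" ]; then
    printf '%s -> %s\n' "${link}" "$(readlink -f "${link}")"
  else
    printf '%s not present on this host\n' "${link}"
  fi
done
```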

Enforcing a re-parse of a cursor in Autonomous Database. Using a hammer

While setting up a demo environment in my Autonomous Database I found that one of my tuning techniques – evicting a cursor from the shared pool – isn’t quite as straightforward as it could be. In fact, at the time of writing it wasn’t possible to surgically purge a single cursor in Autonomous Database (to the best of my knowledge).

The problem

Executing my flush.sql script failed, and from what I can tell, dbms_shared_pool is not exposed to users in the Autonomous Transaction Processing database at the moment:

SQL> show user
USER is "ADMIN"
SQL> desc sys.dbms_shared_pool
ERROR:
ORA-04043: object sys.dbms_shared_pool does not exist

If dbms_shared_pool isn’t accessible, it makes it pretty hard to purge a cursor with the surgical precision this package offers.
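For comparison, here is a sketch of what the surgical approach looks like on a database where dbms_shared_pool is available. The script only assembles the statements (nothing is executed here); you would run them in SQL*Plus after substituting the address and hash value returned by the first query.

```shell
#!/usr/bin/env bash
# Assemble the "surgical" purge statements for a given SQL ID. On a
# database where dbms_shared_pool is exposed you would run these via
# SQL*Plus; the 'C' flag purges a cursor identified by 'address,hash_value'.
sql_id="1cw6t95y0jcsm"

cat <<EOF
select address, hash_value from v\$sqlarea where sql_id = '${sql_id}';
-- substitute the values from the query above:
exec sys.dbms_shared_pool.purge('<address>,<hash_value>', 'C')
EOF
```

The address/hash-value placeholders are deliberate; dbms_shared_pool.purge expects the two values concatenated with a comma as its first argument.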

A potential workaround exists

There is a workaround, however it affects all cursors referencing a table. It’s neither granular nor very elegant, and it has the potential to do more harm than good due to the potentially high number of cursor invalidations! I’d say the workaround sits somewhere between dbms_shared_pool.purge and flushing the shared pool altogether.

Due to its broad scope, be very careful applying this workaround: exhaust all established tuning methods first, and refrain from using it unless you are absolutely sure about what you are about to do. Consider yourself warned.

Setting the scene

As always, I’m using Swingbench for this example. I created the SOE schema in the Autonomous Transaction Processing database; it’s quite straightforward to do and documented on Dominic Giles’ blog. I had to massage the data model a little after creating a rather large number of rows to reproduce my classic bind variable issue.

Consider the ORDERS table, and more specifically, ORDER_MODE_EXT:

SOE> select count(*), order_mode_ext from orders group by order_mode_ext;

  COUNT(*) ORDER_MOD
---------- ---------
  12625080 online
    113920 undefined
    231820 direct

3 rows selected. 

I created an index and frequency histogram on order_mode_ext, and I’m triggering an index-based access path by using a SQL statement with a bind variable. You will notice that I’m running SQL scripts; these are of course not mine, they are Tanel Poder’s.

SQL> var om varchar2(20)
SQL> exec :om := 'undefined' 

PL/SQL procedure successfully completed.

SQL> select /* atp-test */ /*+ gather_plan_statistics */ count(*) from orders where order_mode_ext = :om;

  COUNT(*)
----------
    113920

1 row selected.

SQL_ID: 1cw6t95y0jcsm 

By the way, as you can see, modern SQL*Plus can print the SQL ID if you enable “set feedback on sql_id”. Let’s have a look at the execution plan:

SQL> select plan_hash_value, child_number, invalidations, executions, is_bind_aware, is_bind_sensitive 
 2 from v$sql where sql_id = '1cw6t95y0jcsm';

PLAN_HASH_VALUE CHILD_NUMBER INVALIDATIONS EXECUTIONS I I
--------------- ------------ ------------- ---------- - -
      545586833            0             0          1 N Y

SQL> @xi 1cw6t95y0jcsm %
eXplain the execution plan for sqlid 1cw6t95y0jcsm child %...

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID  1cw6t95y0jcsm, child number 0
-------------------------------------
select /* atp-test */ /*+ gather_plan_statistics */ count(*) from
orders where order_mode_ext = :om

Plan hash value: 545586833

-----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                |      1 |        |   311 (100)|      1 |00:00:00.01 |     358 |
|   1 |  SORT AGGREGATE   |                |      1 |      1 |            |      1 |00:00:00.01 |     358 |
|*  2 |   INDEX RANGE SCAN| I_ORDERS_OMEXT |      1 |    113K|   311   (1)|    113K|00:00:00.01 |     358 |
-----------------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): 'undefined'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ORDER_MODE_EXT"=:OM)


25 rows selected. 

Tanel’s scripts prove the expected index-based access path was chosen for my bind value. Index-based access is fine for this particular bind value (‘undefined’) because it matches only a small subset of the table. But what about the other extreme, ‘online’ orders? Let’s try passing a different bind value:

SQL> exec :om := 'online'

PL/SQL procedure successfully completed.

SQL> select /* atp-test */ /*+ gather_plan_statistics */ count(*) from orders where order_mode_ext = :om;

  COUNT(*)
----------
  12625080

1 row selected. 
 
SQL_ID: 1cw6t95y0jcsm

Again, let’s have a look at the shared pool:

SQL>@xi 1cw6t95y0jcsm %
eXplain the execution plan for sqlid 1cw6t95y0jcsm child %...

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID  1cw6t95y0jcsm, child number 0
-------------------------------------
select /* atp-test */ /*+ gather_plan_statistics */ count(*) from
orders where order_mode_ext = :om

Plan hash value: 545586833

-----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                |      1 |        |   311 (100)|      1 |00:00:01.01 |   33760 |
|   1 |  SORT AGGREGATE   |                |      1 |      1 |            |      1 |00:00:01.01 |   33760 |
|*  2 |   INDEX RANGE SCAN| I_ORDERS_OMEXT |      1 |    113K|   311   (1)|     12M|00:00:01.36 |   33760 |
-----------------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): 'undefined'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ORDER_MODE_EXT"=:OM)


25 rows selected.

SQL>select plan_hash_value, child_number, invalidations, executions, is_bind_aware, is_bind_sensitive
 2 from v$sql where sql_id = '1cw6t95y0jcsm';

PLAN_HASH_VALUE CHILD_NUMBER INVALIDATIONS EXECUTIONS I I
--------------- ------------ ------------- ---------- - -
      545586833            0             0          2 N Y  

No change after the second execution (I know, this isn’t the full story).

Triggering the re-parse

Let’s assume further that PHV 545586833 (the index-based plan) causes immense problems for the application and you need the database to re-parse quickly since you know the “wrong” bind value has been passed to the cursor as a one-off. You also realise that you forgot to implement plan stability measures to prevent this problem from occurring in the first place.

Keeping with the story, you don’t want to wait for Adaptive Cursor Sharing to help you, and need to take action. In the past, you could have used dbms_shared_pool to remedy the situation, but again, it’s not available at the time of writing.

There is another option though, but it comes with caveats.

From what I can tell the process you are about to read about is the only option to force a re-parse of a cursor referencing the table, but it’s a blunt and harsh method: it will force a re-parse of all cursors referencing the table. Only use it if there is no other option! You may end up fixing one problem and causing many others.

A table-DDL operation causes Oracle to re-parse all SQL statements referring to the table. After the most careful consideration of the side-effects on other SQL statements you have been given the green light by management to go ahead. The potential gain is considered worth the pain.

Before I made any changes, here’s the state of play:

SQL> select object_name, object_type, sysdate as now, to_char(last_ddl_time,'dd.mm.yyyy hh24:mi:ss')
 2 from dba_objects where object_name = 'ORDERS' and owner = 'SOE';

OBJECT_NAME                    OBJECT_TYPE             NOW                 TO_CHAR(LAST_DDL_TI
------------------------------ ----------------------- ------------------- -------------------
ORDERS                         TABLE                   2020-10-25 19:43:36 25.10.2020 19:06:34

The degree of parallelism on my table is 1 (noparallel), so let’s try a no-op DDL operation.

SQL> select degree from dba_tables where owner = 'SOE' and table_name = 'ORDERS';

DEGREE
----------------------------------------
         1

SQL> alter table soe.orders noparallel;

Table altered.

SQL> select degree from dba_tables where owner = 'SOE' and table_name = 'ORDERS';

DEGREE
----------------------------------------
         1

SQL> select object_name, object_type, sysdate as now, to_char(last_ddl_time,'dd.mm.yyyy hh24:mi:ss')
 2 from dba_objects where object_name = 'ORDERS' and owner = 'SOE';

OBJECT_NAME                    OBJECT_TYPE             NOW                 TO_CHAR(LAST_DDL_TI
------------------------------ ----------------------- ------------------- -------------------
ORDERS                         TABLE                   2020-10-25 19:44:24 25.10.2020 19:44:12

No change was made to the table’s DOP, but the last DDL time changed. This will force ALL cursors referencing this table to be invalidated.

Now let’s run the “problem” query again, and check the execution statistics:

SQL> select /* atp-test */ /*+ gather_plan_statistics */ count(*) from orders where order_mode_ext = :om; 

  COUNT(*)
----------
  12625080

1 row selected.

SQL_ID: 1cw6t95y0jcsm

SQL>@xi 1cw6t95y0jcsm %
eXplain the execution plan for sqlid 1cw6t95y0jcsm child %...

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------
SQL_ID  1cw6t95y0jcsm, child number 0
-------------------------------------
select /* atp-test */ /*+ gather_plan_statistics */ count(*) from
orders where order_mode_ext = :om

Plan hash value: 3862103574

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name           | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                |      1 |        |   594 (100)|      1 |00:00:01.10 |   34845 |
|   1 |  SORT AGGREGATE               |                |      1 |      1 |            |      1 |00:00:01.10 |   34845 |
|*  2 |   INDEX STORAGE FAST FULL SCAN| I_ORDERS_OMEXT |      1 |     12M|   594  (31)|     12M|00:00:01.44 |   34845 |
-----------------------------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - :1 (VARCHAR2(30), CSID=873): 'online'

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("ORDER_MODE_EXT"=:OM)


25 rows selected.

SQL>select plan_hash_value, child_number, invalidations, executions, is_bind_aware, is_bind_sensitive
 2 from v$sql where sql_id = '1cw6t95y0jcsm';

PLAN_HASH_VALUE CHILD_NUMBER INVALIDATIONS EXECUTIONS I I
--------------- ------------ ------------- ---------- - -
     3862103574            0             1          1 N Y   

As a direct consequence of the DDL operation against the orders table a new plan has been chosen for SQL ID 1cw6t95y0jcsm (and all other statements referencing the table!). As you can see there isn’t a new child cursor for SQL ID 1cw6t95y0jcsm; the old cursor has been evicted completely.

Remember that a DDL operation such as the one shown in this post is a blunt weapon and only to be used as a last resort.

JDBC & the Oracle Database: if you want Transparent Application Failover you need the OCI driver

This is the second article in the series of JDBC articles I’m about to publish. It covers an old technology that’s surprisingly often found in use: Transparent Application Failover (TAF). It’s a client side feature for clustered Oracle databases allowing sessions (and to some extent, select statements) to fail over to a healthy node from a crashed instance.

I would wager that you probably don’t want to use Transparent Application Failover in (new) Java code. There are many better ways to write code these days. More posts to follow with my suggestions ;)

Well then, why bother writing this post? Simple! There is a common misconception about the requirements: since Transparent Application Failover relies on the Oracle client libraries, you cannot use it with the thin driver. The little tool I have written demonstrates exactly that. And besides, I had the code more or less ready, so why not publish it?

Prerequisites for running the demo code

My Java code has been updated to work with Oracle 19c. I am also using an Oracle 19c RAC database as the back-end.

Preparing the client

Since I am going to use the Secure External Password Store again you need to prepare the client as per my previous article. The only difference this time is that I need a sqlnet.ora file in my client’s tns directory. Continuing the previous example I created the file in /home/martin/tns, and it contains the following information:

WALLET_LOCATION =
  (SOURCE =(METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /home/martin/tns)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE  

When you are creating yours, make sure to update the path according to your wallet location.

Since I’m connecting to a RAC database I need to change the entry in tnsnames.ora as well. This requires the application-specific service to be created and started, a bit of a chicken-and-egg problem. The setup of the database service is explained in the next section. Here is my tnsnames.ora entry:

taf_svc =
 (DESCRIPTION = 
  (ADDRESS = (PROTOCOL = tcp)(HOST = rac19pri-scan.example.com)(PORT = 1521))
  (CONNECT_DATA=
    (SERVICE_NAME = taf_svc) 
     (FAILOVER_MODE=(TYPE=select)(METHOD=basic)))
  )

Note that setting FAILOVER_MODE in the connect descriptor isn’t the preferred way to set TAF properties. It’s better to do that at the service level, see below.

Preparing the database service

Oracle strongly discourages the use of the default service name except for DBA tasks. As I’m a good citizen I’ll create a separate service for my little TAF application.

You need to connect to the database server and use srvctl add service to create a service. I used the following properties:

[oracle@rac19pri1]$ srvctl add service -db NCDB -service taf_svc \
-role primary -policy automatic -clbgoal long \
-failovermethod basic -failovertype session \
-preferred "NCDB1,NCDB2"

You have to set at least preferred nodes and the connect time load balancing goal. If you want to ensure anyone connecting to the TAF services actually makes use of it regardless of the tnsnames setting, you also need to set failovertype and failovermethod.

Don’t forget to start the service after creating it! Once the service is created and running, let’s try to use it to see if all TAF properties are available. To do so, I connected to taf_svc in my first session. I then checked the status after connecting as SYSTEM in a second session:

SQL> select inst_id, failover_method, failover_type, failed_over, service_name
  2  from gv$session where username = 'MARTIN'
  3  /

   INST_ID FAILOVER_M FAILOVER_TYPE FAI SERVICE_NAME
---------- ---------- ------------- --- ---------------
         1 BASIC      SESSION       NO  taf_svc

SQL> show user
USER is "SYSTEM"

Running the code

The complete code is available on github in my java-blogposts repository. After downloading it to your machine, change into the taf-demo-1 directory and trigger the compile target using mvn compile.

With the code built, you can run it easily on the command line. First off, try the thin driver.

JDBC Thin Driver

I used this command to start the execution using the thin driver:

java -cp /home/martin/java/libs/ojdbc10.jar:/home/martin/java/libs/oraclepki.jar:/home/martin/java/libs/osdt_cert.jar:/home/martin/java/libs/osdt_core.jar:target/taf-example-1-0.0.1-SNAPSHOT.jar de.martin.tafDemo.App thin

This should connect you to the database, but not with the desired effect.

About to start a demonstration using Transparent Application Failover
Driver Name: Oracle JDBC driver
Driver Version: 19.7.0.0.0
Connection established as MARTIN


inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver 

As you can easily spot there isn’t any trace of TAF in the output. Not surprisingly, the code crashes as soon as instance 2 fails:

inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00049 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00049 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
SQLException while trying to get the session information: java.sql.SQLRecoverableException: No more data to read from socket
[martin@appserver taf-demo-1]$

There are potentially ways around that, but I have yet to see an application implement them. In other words, in most cases the following equation holds: instance crash = application crash.

JDBC OCI driver

Running the same code using the OCI driver should solve that problem. You will need an Oracle 19.7.0 client installation for this to work, and you have to set LD_LIBRARY_PATH as well as TNS_ADMIN in the shell:

$ export TNS_ADMIN=/home/martin/tns
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/client/installation

Once these are set, start the application:

$ java -cp /home/martin/java/libs/ojdbc10.jar:/home/martin/java/libs/oraclepki.jar:/home/martin/java/libs/osdt_cert.jar:/home/martin/java/libs/osdt_core.jar:target/taf-example-1-0.0.1-SNAPSHOT.jar de.martin.tafDemo.App oci
About to start a demonstration using Transparent Application Failover
Driver Name: Oracle JDBC driver
Driver Version: 19.7.0.0.0
Connection established as MARTIN


inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 2 sid: 00275 failover_type: SELECT     failover_method: BASIC      failed_over: YES   module: java@appserver (TNS V1-V3) action: null
inst_id: 2 sid: 00275 failover_type: SELECT     failover_method: BASIC      failed_over: YES   module: java@appserver (TNS V1-V3) action: null
inst_id: 2 sid: 00275 failover_type: SELECT     failover_method: BASIC      failed_over: YES   module: java@appserver (TNS V1-V3) action: null
^C[martin@appserver taf-demo-1]$ 

You should notice a seamless transition from node 1 to node 2. As you can imagine this is the simplest example, but it should convey the message as intended. For more details about TAF and RAC, including the use of “select” failover and DML support I suggest you have a look at Pro Oracle 11g RAC on Linux, chapter 11.

Summary

Contrary to what one might think, using TAF with the JDBC thin driver doesn't protect a session from instance failure. The only way to protect a (fat client) session is to make use of the Oracle Call Interface.

Then again, TAF is a very mature solution and there might be better ways of working with RAC these days. Connection pools based on Oracle's own Universal Connection Pool look like the way forward, and newer technologies such as (Transparent) Application Continuity are better suited to meet today's requirements.

JDBC & the Oracle Database: using Maven Central

Over the years I have written a few blog posts about Java and the Oracle database. Most of these posts contained code in some form or another, and I have to admit that I haven't been following my own advice: I have run into all sorts of issues with the code, simply as a result of not storing it centrally and not keeping a close eye on versioning. I think it's about time I correct these issues and re-write my examples. This is going to be a bit of a journey, and it's entirely my fault for accumulating a mountain of technical debt.

At the end of my code journey I am hoping to have a code repository on GitHub with all these problems a thing of the past. There were a few other issues I wanted to correct with the code, and this looks like a great opportunity to address them.

A little more security

I really don't like JDBC code examples where username and password are exposed in the code. It is all too easy for a developer to simply copy and paste the example into their own application. Once it's in there, it doesn't take much for sensitive credentials to end up on GitHub, visible to everyone. This is Proper Bad.

So rather than providing username and password in clear text in the Java code, I will make use of the external password store.

Compiling the code

Previously I used Apache Ant as the build tool. This time around I have taken the opportunity to switch to Apache Maven 3.6.x. Maven is one of the most popular build tools in the Java space, from what I can tell. Furthermore, it is now possible to pull the necessary JDBC libraries from Maven Central and connect to my single-instance Oracle 19c database. Reason enough for me to give it a try.

Making basic use of Maven is straightforward, at least for my needs. I started off with a very basic example:

$ mvn archetype:generate -DgroupId=de.martin.demo01 -DartifactId=demo01 \
> -DarchetypeArtifactId=maven-archetype-quickstart \
> -DarchetypeVersion=1.4 -DinteractiveMode=false

This creates the necessary project structure. Before I can use the Oracle JDBC drivers I have to update the dependencies in pom.xml. Following Oracle's Maven Central guide I added the following dependency:

<!-- Oracle JDBC as per https://www.oracle.com/database/technologies/maven-central-guide.html -->
  <dependency>
    <groupId>com.oracle.database.jdbc</groupId>
    <artifactId>ojdbc10-production</artifactId>
    <version>19.7.0.0</version>
    <type>pom</type>
  </dependency>

This should pull in all required JARs. As Oracle's Maven Central guide explains, this is only one of two ways of integrating the necessary libraries; have a look at the link in the code comment above for the full story.

Preparing the External Password Store

Using the external password store allows me to keep username and password in an external entity called a wallet. There is a slight caveat: creating a wallet requires access to an Oracle client installation, as the necessary mkstore utility isn't part of any Instant Client download yet, as far as I know. Using the external password store with the JDBC thin driver is a little tricky, but I like challenges :)

The requirement

I would like to connect as “martin” to my single instance database “ORCL” running on a VM with a primary IP of 192.168.56.122 without having to provide either a username or a password. The listener receives incoming connections on port 1521. The Java code will be executed on a workstation with an IP address of 192.168.56.1.

Creating the wallet

In my case the creation of the wallet happened on the database server. To keep things simple I'll create it in $ORACLE_BASE/admin/$ORACLE_SID/authn. This wallet has nothing to do with other Oracle technology such as Transparent Data Encryption: it is purely used to authenticate my application to the database.
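Sketched as a small script, the directory preparation might look like this. The ORACLE_BASE and ORACLE_SID defaults below are placeholders; substitute the values for your server:

```shell
# Sketch: create a private directory for the authentication wallet.
# The ORACLE_BASE/ORACLE_SID defaults are assumptions -- adjust to your
# environment (this post uses /u01/app/oracle and ORCL on the server).
ORACLE_BASE="${ORACLE_BASE:-$HOME/oracle-base}"
ORACLE_SID="${ORACLE_SID:-ORCL}"
WALLET_DIR="$ORACLE_BASE/admin/$ORACLE_SID/authn"

mkdir -p "$WALLET_DIR"
chmod 700 "$WALLET_DIR"   # the wallet stores credentials; keep it private
```

Restricting the directory to the oracle account is worth the extra line: anyone who can read the auto-login wallet can connect as the stored user.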

[oracle@server1 authn]$ which mkstore
/u01/app/oracle/product/19.0.0/dbhome_1/bin/mkstore
[oracle@server1 authn]$ 
[oracle@server1 authn]$ mkstore -wrl $(pwd) -create
Oracle Secret Store Tool Release 20.0.0.0.0 - Production
Version 21.0.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.

Enter password:   
Enter password again:   

The next step is to create a credential. To do so I must provide the connect string to the database, matching the entry in tnsnames.ora in lower case. This is the crucial bit: whenever I tried to use an upper case net*8 alias in tnsnames.ora or the wallet, the connection failed with an “invalid username/password” error.

The second parameter to provide is the username I want to connect as. For example:

[oracle@server1 authn]$ mkstore -wrl $(pwd) -createCredential orcl martin
Oracle Secret Store Tool Release 20.0.0.0.0 - Production
Version 21.0.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line 
Enter your secret/Password:   
Re-enter your secret/Password:   
Enter wallet password:   
[oracle@server1 authn]$ 

I usually create an ORACLE_SID in upper case for nostalgic reasons, hence it took me a little while to figure out the need for lower-case identifiers.

Preparing the client

After the wallet has been created, it needs to be securely transferred to the workstation where it’s about to be used. The designated location on the client is /home/martin/tns. After the wallet has been transferred to the client I have the following files in said directory:

[martin@client1 ~]$ ls -1 ~/tns
cwallet.sso
cwallet.sso.lck
ewallet.p12
ewallet.p12.lck
[martin@client1 ~]$  

The next step is to add a tnsnames.ora file. Remember to create the net*8 alias in lower case. Here is my file in case you need to use something as a reference:

[martin@client1 tns]$ cat tnsnames.ora 
orcl =
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.122)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = ORCL)))
[martin@client1 tns]$  

Finally you need a file to point the thin driver to the wallet location. The way to do this is not immediately obvious; however, I found it documented for Autonomous Database. Long story short, you need to create another file named ojdbc.properties with the following contents:

oracle.net.wallet_location=(source=(method=file)(method_data=(directory=${TNS_ADMIN})))

This is a big step forward compared to earlier JDBC versions: instead of having to provide system properties to the JVM, it is now possible to omit them. Oracle introduced EZConnect Plus in 18c and I am totally going to make use of it. Thanks to @martinberx for spotting a copy/paste mistake in the number of closing brackets; the Oracle documentation misses one ;)
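If you prefer to script the client setup, both files can be created in one go. This is a sketch using the directory, host, and alias from this post; adjust them to your environment:

```shell
# Sketch: write the two client-side configuration files.
# Directory, host/port, and alias are the values used in this post.
TNS_DIR="$HOME/tns"
mkdir -p "$TNS_DIR"

# lower-case alias -- upper case led to invalid username/password errors here
cat > "$TNS_DIR/tnsnames.ora" <<'EOF'
orcl =
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.122)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = ORCL)))
EOF

# point the thin driver at the wallet; ${TNS_ADMIN} is resolved by the driver,
# hence the quoted heredoc so the shell leaves it alone
cat > "$TNS_DIR/ojdbc.properties" <<'EOF'
oracle.net.wallet_location=(source=(method=file)(method_data=(directory=${TNS_ADMIN})))
EOF
```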

With all the files in place the client is ready! In case you wondered, ~/tns now contains the following files:

[martin@client1 tns]$ ls -1
cwallet.sso
cwallet.sso.lck
ewallet.p12
ewallet.p12.lck
ojdbc.properties
tnsnames.ora
[martin@client1 tns]$  

Let there be code!

It turns out the preparations were the hard part. Now for some code to connect to the database. This example is loosely modelled on a Java class in Oracle's GitHub repository (DataSourceSample.java).

Whilst this works, I believe it's not the best way for a non-library developer to access the "raw" JDBC interface. From what I read, the consensus seems to be not to code against the JDBC interface directly, but rather to use a framework.

Nevertheless, I decided to share the class, after all I needed some code to test authentication via the wallet.

package de.martin.demo01;

import java.sql.SQLException;
import oracle.jdbc.pool.OracleDataSource;
import oracle.jdbc.OracleConnection;
import java.sql.DatabaseMetaData;

/**
 * The equivalent of a "hello world" type application for connecting to the 
 * Oracle database 
 */
public class App 
{

    static final String ezConnectPlus = "jdbc:oracle:thin:/@orcl?TNS_ADMIN=/home/martin/tns";

    public static void main( String[] args ) throws SQLException
    {
        System.out.println( "Getting ready to connect to the database" );

        OracleDataSource ods = new OracleDataSource();
        ods.setURL(ezConnectPlus);

        try (OracleConnection connection = (OracleConnection) ods.getConnection()) {

            // Get the JDBC driver name and version
            DatabaseMetaData dbmd = connection.getMetaData();
            System.out.println("Driver Name: " + dbmd.getDriverName());
            System.out.println("Driver Version: " + dbmd.getDriverVersion());

            // Print some connection properties
            System.out.println("Default Row Prefetch Value is: "
                + connection.getDefaultRowPrefetch());
            System.out.println("Database Username is: " + connection.getUserName());
            System.out.println();

            // no explicit close() needed: try-with-resources closes the connection
        }
    }
}

You may have noticed the absence of references to setting username and password. The connection string is now making use of EZConnect Plus:

jdbc:oracle:thin:/@orcl?TNS_ADMIN=/home/martin/tns

If you have used the JDBC thin driver before this JDBC connection string will without a doubt look slightly odd. Oracle 18c introduced the ability to use an extended EZ Connect Syntax, and I combined the syntax from Autonomous Database and the JDBC developer’s guide to end up with this one.

Running mvn package instructs Maven to compile the code and package it.

Running the code

Running the code requires the following JAR files to be in the classpath:

  • ojdbc10.jar
  • oraclepki.jar
  • osdt_cert.jar
  • osdt_core.jar
  • And of course the JAR with the application code created by Maven

As that's a lot of typing, I created symbolic links from my local Maven repository in ~/.m2/repository to ~/java/libs. With that done, I can run the code after a call to mvn package:
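Creating those links can be scripted as well. The sketch below assumes the standard Maven repository layout and the 19.7.0.0 artifact versions used in this post; double-check the paths against your local ~/.m2 before relying on them:

```shell
# Sketch: link the Oracle JARs out of the local Maven repository.
# Paths follow the standard Maven repository layout for these coordinates;
# the 19.7.0.0 version matches this post -- adjust if yours differs.
M2="$HOME/.m2/repository"
LIBS="$HOME/java/libs"
mkdir -p "$LIBS"

ln -sf "$M2/com/oracle/database/jdbc/ojdbc10/19.7.0.0/ojdbc10-19.7.0.0.jar"         "$LIBS/ojdbc10.jar"
ln -sf "$M2/com/oracle/database/security/oraclepki/19.7.0.0/oraclepki-19.7.0.0.jar" "$LIBS/oraclepki.jar"
ln -sf "$M2/com/oracle/database/security/osdt_cert/19.7.0.0/osdt_cert-19.7.0.0.jar" "$LIBS/osdt_cert.jar"
ln -sf "$M2/com/oracle/database/security/osdt_core/19.7.0.0/osdt_core-19.7.0.0.jar" "$LIBS/osdt_core.jar"
```

Using ln -sf keeps the script idempotent: re-running it after a version bump simply repoints the links.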

[martin@client1 maven-example-1]$ java -cp /home/martin/java/libs/ojdbc10.jar:/home/martin/java/libs/oraclepki.jar:/home/martin/java/libs/osdt_cert.jar:/home/martin/java/libs/osdt_core.jar:target/maven-example-1-1.0-SNAPSHOT.jar de.martin.demo01.App
Getting ready to connect to the database
Driver Name: Oracle JDBC driver
Driver Version: 19.7.0.0.0
Default Row Prefetch Value is: 10
Database Username is: MARTIN
[martin@client1 maven-example-1]$

Result! I connected to the database without ever providing a username or password. And credentials that never appear in the code cannot accidentally be committed to a source code repository. I hope this makes for better, somewhat more secure code.

Summary

This is hopefully the first post in a series of Java-related articles allowing me to better maintain my code. In the first article I shared how to create the Secure External Password Store (aka wallet) to store connection credentials to avoid supplying them in application code. The other change I introduced was the use of Maven as my build tool. I am conscious of the fact that Maven is integrated into many Java development IDEs, yet I still preferred to show its usage on the command line so that everyone can follow along without having to learn a particular IDE’s workflow.

Last but not least I would like to thank @gustavojimenez for peer-reviewing this article.