Author Archives: Martin Bach

About Martin Bach

Oracle DBA and Linux enthusiast, part-time author and presenter.

Installation of Oracle REST Data Services 20.4 without using SYSDBA

It’s really hard to come up with good, concise, twitter-compatible blog titles, so let’s try with a few more words. What I’d like to share is how to install Oracle REST Data Services (ORDS) v20.4 without having to connect to the database as SYSDBA. There are good reasons not to connect as SYSDBA, and I’m glad Oracle made it possible to grant the necessary privileges for an ORDS installation to a regular database account. It isn’t a new feature at all; the option has been around for a while, but I hadn’t found the time to write about it until now.

Some background before getting started

The installation of Oracle REST Data Services (ORDS) consists of two parts:

  1. Deploying the ORDS distribution to a Docker container or virtual machine
  2. Connecting to and configuring its database

Initially it was necessary to connect as SYSDBA to complete step 2. Thankfully this is no longer the case. This post describes how to install and configure ORDS 20.4 against an Oracle 19c (non-container) database.

Deploying ORDS

The first step consists of downloading ORDS from Oracle’s website. I am planning to deploy it in one of my Oracle Linux 8 Vagrant boxes. There is more to deploying ORDS than configuration, which I’m leaving to a later post. More specifically, I won’t concern myself with integrating ORDS into a Docker container or even Tomcat 9.x, to keep this post simple.

The Ansible playbook I use creates an Oracle account out of habit. It also creates the location I want it to use – /opt/ords – and changes the directory’s ownership to oracle:oinstall. Finally, it unzips ORDS.
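The tasks the playbook performs can be sketched as follows. This is a minimal reconstruction, not the actual playbook; the ZIP file name and the staging path under /tmp are assumptions:

```yaml
# Sketch of the deployment steps described above (hypothetical file names).
- hosts: ords
  become: true
  tasks:
    - name: create the oracle account
      user:
        name: oracle
        group: oinstall

    - name: create the ORDS base directory with the right ownership
      file:
        path: /opt/ords
        state: directory
        owner: oracle
        group: oinstall

    - name: unzip the ORDS distribution
      unarchive:
        src: /tmp/ords-20.4.3.050.1904.zip
        dest: /opt/ords
        owner: oracle
        group: oinstall
        remote_src: true
```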

Configuring ORDS

ORDS stores its configuration in a directory of your choice. Since I’m lazy I use /opt/ords/config for this purpose. Once the directory is created on the file system you tell ORDS where to find its configuration:

[oracle@ords ords]$ java -jar /opt/ords/ords.war configdir /opt/ords/config
2021-04-21T18:48:23.156Z INFO        Set config.dir to /opt/ords/config in: /opt/ords/ords.war

Once this step is complete it’s time to configure ORDS and its database connection. This step is #2 in the above list and referred to as “installation” in the documentation.

Creating the less-privileged user in the database

As per the Install Guide you need to run a script granting a non-privileged user the rights to configure ORDS. The script doesn’t create the user, so you have to ensure it exists beforehand. The user doesn’t seem to require any elevated privileges. I got creative and created the installation account:

SQL> create user ordsinstall identified by values '...';

User created.

SQL> grant create session to ordsinstall;

Grant succeeded.

With the user created I could start the minimum privilege script:

SQL> @installer/ords_installer_privileges ordsinstall

The script takes one argument: the account you intend to use for the installation (ordsinstall).
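If you are curious what the script granted, a quick sanity check against the data dictionary shows the system privileges the installation account now holds; the exact list depends on the ORDS version:

```sql
SQL> select privilege from dba_sys_privs
  2  where grantee = 'ORDSINSTALL' order by 1;
```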

Installing ORDS in the database

The final step is to install ORDS in the database. This can be done in many ways; trying to keep it simple, I went with the interactive installation.

Have a look at the screen output, it should be self-explanatory for the most part. When prompted for the administrator username you provide the account just created (ordsinstall in my case). Since I wanted to try SQL*Developer Web, I chose that option. Your mileage may vary.

[oracle@ords ords]$ java -jar ords.war install advanced
Specify the database connection type to use.
Enter number for [1] Basic  [2] TNS  [3] Custom URL [1]:1
Enter the name of the database server [localhost]:server3
Enter the database listen port [1521]:
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:1
Enter the database service name:ORCL
Enter 1 if you want to verify/install Oracle REST Data Services schema or 2 to skip this step [1]:
Enter the database password for ORDS_PUBLIC_USER:
Confirm password:
Requires to login with administrator privileges to verify Oracle REST Data Services schema.

Enter the administrator username:ordsinstall
Enter the database password for ordsinstall:
Confirm password:
Connecting to database user: ordsinstall url: jdbc:oracle:thin:@//server3:1521/ORCL

Retrieving information.
Enter the default tablespace for ORDS_METADATA [SYSAUX]:
Enter the temporary tablespace for ORDS_METADATA [TEMP]:
Enter the default tablespace for ORDS_PUBLIC_USER [SYSAUX]:
Enter the temporary tablespace for ORDS_PUBLIC_USER [TEMP]:
Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step.
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:2
Enter a number to select a feature to enable:
   [1] SQL Developer Web  (Enables all features)
   [2] REST Enabled SQL
   [3] Database API
   [4] REST Enabled SQL and Database API
   [5] None
Choose [1]:1
2021-04-21T19:38:18.012Z INFO        reloaded pools: []
Installing Oracle REST Data Services version 20.4.3.r0501904
... Log file written to /home/oracle/ords_install_core_2021-04-21_193818_00414.log
... Verified database prerequisites
... Created Oracle REST Data Services proxy user
Warning: Nashorn engine is planned to be removed from a future JDK release
... Created Oracle REST Data Services schema
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to /home/oracle/ords_install_datamodel_2021-04-21_193842_00226.log
... Log file written to /home/oracle/ords_install_apex_2021-04-21_193846_00491.log
Completed installation for Oracle REST Data Services version 20.4.3.r0501904. Elapsed time: 00:00:32.177 

Enter 1 if you wish to start in standalone mode or 2 to exit [1]:2
[oracle@ords ords]$

That’s it! ORDS has been configured in the database, and I didn’t have to connect as SYSDBA. I think that’s a big step forward, and I had been meaning to write about it for a while.
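Since I answered 2 at the final prompt, ORDS itself isn’t serving requests yet. When you want to start it later, standalone mode is the quickest option; a sketch (on first start it prompts for a few settings such as the HTTP port):

```
[oracle@ords ords]$ java -jar /opt/ords/ords.war standalone
```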

Have fun!

Connecting to a database using SQLcl, a wallet and the thin driver

I have previously written about the use of SQLcl with an Oracle wallet (aka secure external password store). In my previous article I used the JDBC OCI driver. This is perfectly fine, but as it’s based on Oracle’s Call Interface, it requires client libraries to be present. This time I wanted to use the thin driver. A cursory search didn’t reveal any recent information about the topic, so I decided to write this short post.

Creating the Wallet

The wallet is created exactly the same way as described in an earlier article of mine. Scroll down to “Preparing the External Password Store” and have a look at the steps required. You probably don’t have a client installation on the SQLcl host, so you need to prepare the wallet on a machine that has one. When providing input to mkstore and when creating tnsnames.ora it still seems best to keep all identifiers in lower case, even if your db_name is in upper case like mine (“ORCL”).
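For reference, the wallet creation boils down to two mkstore calls; the username and password here are placeholders, and the credential name (orcl) must match an alias in your tnsnames.ora, in lower case as discussed above:

```
mkstore -wrl /home/oracle/tns -create
mkstore -wrl /home/oracle/tns -createCredential orcl martin aPassword
```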

Once the wallet is created and tested, transfer it to the sqlcl-host in a secure manner.

Deploying SQLcl

SQLcl – among other things – allows you to connect to an Oracle database without a client installation. This is quite useful in many cases. I downloaded SQLcl 21c (build 21.1.0.104.1544) from Oracle’s website and deployed it in ~/sql.

Testing the Wallet

I dropped the wallet and associated configuration files created earlier into ~/tns on my sqlcl-host. There is no need to set any environment variables at all. For reference, the following files were present on this vagrant box:

[vagrant@ords ~]$ ls -l ~/tns
total 20
-rw-------. 1 vagrant vagrant 581 Apr 20 19:29 cwallet.sso
-rw-------. 1 vagrant vagrant   0 Apr 20 19:28 cwallet.sso.lck
-rw-------. 1 vagrant vagrant 536 Apr 20 19:29 ewallet.p12
-rw-------. 1 vagrant vagrant   0 Apr 20 19:28 ewallet.p12.lck
-rw-r--r--. 1 vagrant vagrant  89 Apr 20 19:30 ojdbc.properties
-rw-r--r--. 1 vagrant vagrant 120 Apr 20 19:30 tnsnames.ora
[vagrant@ords ~]$  

With the files in place you connect to the database as follows:

[vagrant@ords bin]$ ./sql /@jdbc:oracle:thin:/@orcl?TNS_ADMIN=/home/vagrant/tns

SQLcl: Release 21.1 Production on Tue Apr 20 20:17:12 2021

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Last Successful login time: Tue Apr 20 2021 20:17:15 +00:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0


SQL> select host_name from v$instance;

HOST_NAME 
-------------
server3      

SQL>   

Note the leading forward slashes in the connection string; they are absolutely mandatory. The beauty of the EZConnect Plus syntax is that I don’t need to set TNS_ADMIN as an environment variable anymore. EZConnect Plus can do many more things; feel free to browse the white paper I linked to.
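To make the anatomy of that connection string explicit, here is how it assembles from its parts (alias and path as used above):

```shell
# /@                      : use the credentials stored in the wallet
# jdbc:oracle:thin:/@orcl : thin-driver JDBC URL referencing the tnsnames alias
# ?TNS_ADMIN=...          : EZConnect Plus parameter locating wallet + tnsnames.ora
tns_alias="orcl"
tns_admin="/home/vagrant/tns"
conn="/@jdbc:oracle:thin:/@${tns_alias}?TNS_ADMIN=${tns_admin}"
echo "${conn}"   # prints /@jdbc:oracle:thin:/@orcl?TNS_ADMIN=/home/vagrant/tns
```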

Have fun!

New initialisation parameters for Transparent Data Encryption beginning in 19c

This blog post is a short summary of my attempt to configure Transparent Data Encryption (TDE) in a single-instance Oracle 19c Container Database (CDB). Reading the documentation I noticed that the way I used to configure TDE has changed: there are two new initialisation parameters to use now that SQLNET.ENCRYPTION_WALLET_LOCATION has been deprecated. They are:

  • wallet_root
  • tde_configuration

This article assumes prior knowledge of TDE; if you are new to the topic, I suggest you head over to the Advanced Security Guide for Oracle 19c. I won’t be so bold as to claim my configuration is perfect or even 100% correct; it does seem to work though. Please make sure you get sign-off from your security team once you have completed your TDE configuration.

If you are interested in Transparent Data Encryption please ensure your database is appropriately licensed to use the feature!

Reference

I used the following chapters in the Oracle Advanced Security Guide as reference:

  • Chapter 3: Configuring Transparent Data Encryption
  • Chapter 5: Managing Keystores and TDE Master Encryption Keys in United Mode

Current Environment

Key properties of the environment I’m creating in the lab:

  • I’ll use a software keystore
  • Oracle Restart 19c Enterprise Edition
  • Separation of duties in place in an Oracle Restart configuration
  • I use a Container Database with a single PDB (pdb1). The database has just been created by dbca
  • Configuration of “united mode” where all PDBs share the same keystore with the CDB$ROOT
  • I would like to use an auto-login keystore, trading security for convenience

I would have liked to use a local auto-login keystore. However, I’m planning on writing another article discussing Data Guard, and I don’t think a local auto-login keystore would have worked, as per the docs you can only open it on the node where it was created.

In united mode, you create a single keystore shared by the CDB and all its PDBs; each container’s TDE master encryption key resides in that keystore. This might not be the right choice for you, so make sure your implementation is consistent with your security team’s requirements.

Configure location and keystore type

This step is significantly different from previous releases, and from what I understand it is the main new thing in 19c.

[oracle@dgnode1 ~]$ mkdir /u01/app/oracle/admin/SITEA/wallets || echo "the directory exists, check your configuration"

Connect to CDB$ROOT and set wallet_root and tde_configuration:

[oracle@dgnode1 ~]$ sq

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Feb 16 09:14:05 2021
Version 19.8.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL> show parameter wallet_root

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
wallet_root                          string

SQL> alter system set wallet_root='/u01/app/oracle/admin/SITEA/wallets' scope=spfile;

System altered.

Unfortunately it is necessary to bounce the instance: wallet_root is a static parameter, and it must be set before you can change tde_configuration. Once the database is back up, the finishing touches can be applied.
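In an Oracle Restart configuration such as this one, the bounce is best done via srvctl; the database unique name SITEA is assumed from the wallet path:

```
[oracle@dgnode1 ~]$ srvctl stop database -db SITEA
[oracle@dgnode1 ~]$ srvctl start database -db SITEA
```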

SQL> alter system set tde_configuration="keystore_configuration=file" scope=both;

System altered.

SQL> select name, value from v$parameter
  2  where name in ('wallet_root', 'tde_configuration');

NAME                 VALUE
-------------------- ------------------------------------------------------
wallet_root          /u01/app/oracle/admin/SITEA/wallets
tde_configuration    keystore_configuration=file

To avoid bouncing the instance you could have set the parameters via dbca when creating the database.
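A heavily hedged sketch of the dbca alternative: its -initParams flag accepts a comma-separated list of name=value pairs at creation time (all other flags elided here; check the quoting of the tde_configuration value against your dbca version before relying on it):

```
dbca -silent -createDatabase ... \
  -initParams "wallet_root=/u01/app/oracle/admin/SITEA/wallets,tde_configuration='keystore_configuration=file'"
```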

Create the software keystore

I was surprised when I noticed that Oracle appended “/tde” to the location specified by wallet_root. According to the documentation this is correct.

SQL> select wrl_parameter, status, wallet_type, keystore_mode, con_id from v$encryption_wallet
  2  /

WRL_PARAMETER                                      STATUS                         WALLET_TYPE          KEYSTORE     CON_ID
-------------------------------------------------- ------------------------------ -------------------- -------- ----------
/u01/app/oracle/admin/SITEA/wallets/tde/           NOT_AVAILABLE                  UNKNOWN              NONE              1
                                                   NOT_AVAILABLE                  UNKNOWN              UNITED            2
                                                   NOT_AVAILABLE                  UNKNOWN              UNITED            3

The value of NONE in KEYSTORE_MODE for CON_ID 1 is expected, as is the absence of wallet locations for the PDBs: CDB$ROOT owns the keystore on behalf of all PDBs.

Let’s create the software keystore next. The auto-login keystore type I’m going to use might not be the right one for you; make sure to read the documentation about keystore types and their pros and cons.

Create the necessary password-protected keystore

Before creating the auto login keystore, a password protected keystore has to be created. This is what I did:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> show parameter wallet_root

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------------------------------
wallet_root                          string      /u01/app/oracle/admin/SITEA/wallets

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

SQL> set verify off

SQL> administer key management create keystore identified by &password;

keystore altered.

SQL> select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

    CON_ID WRL_PARAMETER                                      STATUS                         WALLET_TYPE          KEYSTORE
---------- -------------------------------------------------- ------------------------------ -------------------- --------
         1 /u01/app/oracle/admin/SITEA/wallets/tde/           CLOSED                         UNKNOWN              NONE
         2                                                    CLOSED                         UNKNOWN              UNITED
         3                                                    CLOSED                         UNKNOWN              UNITED

The keystore is created, but it’s closed. Here I had to make a choice: keep the keystore password protected, or transform it into an auto-login keystore. The higher degree of security in the first case is offset by the necessity to enter the keystore password when starting the database. In my lab environment I wanted to be able to start the database without manual intervention, so an auto-login keystore it is. As you read earlier, I would have preferred a local auto-login keystore, but I will create a physical standby for this database, and a local auto-login keystore can only be opened on the node where it was created. The Advanced Security Guide doesn’t provide specifics about the keystore type to use with Data Guard.

Deciding between an auto login keystore and a password protected keystore isn’t a decision for the DBA to make. This is one for the security team!

Convert to auto-login keystore

Straightforward, but the keystore location cannot be omitted this time. The docs instruct us to use the location shown in wrl_parameter in v$encryption_wallet. And yes, please use $WALLET_ROOT/tde for this.

SQL> administer key management create auto_login keystore
  2  from keystore '/u01/app/oracle/admin/SITEA/wallets/tde'
  3  identified by &password;

keystore altered.

SQL> select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

    CON_ID WRL_PARAMETER                                      STATUS                         WALLET_TYPE          KEYSTORE
---------- -------------------------------------------------- ------------------------------ -------------------- --------
         1 /u01/app/oracle/admin/SITEA/wallets/tde/           OPEN_NO_MASTER_KEY             AUTOLOGIN            NONE
         2                                                    OPEN_NO_MASTER_KEY             AUTOLOGIN            UNITED
         3                                                    OPEN_NO_MASTER_KEY             AUTOLOGIN            UNITED

This command created the following files:

SQL> !ls -laR /u01/app/oracle/admin/SITEA/wallets
/u01/app/oracle/admin/SITEA/wallets:
total 0
drwxr-xr-x. 3 oracle oinstall 17 Feb 17 13:16 .
drwxr-x---. 7 oracle oinstall 79 Feb 17 13:12 ..
drwxr-x---. 2 oracle asmadmin 44 Feb 17 13:16 tde

/u01/app/oracle/admin/SITEA/wallets/tde:
total 8
drwxr-x---. 2 oracle asmadmin   44 Feb 17 13:16 .
drwxr-xr-x. 3 oracle oinstall   17 Feb 17 13:16 ..
-rw-------. 1 oracle asmadmin 2600 Feb 17 13:16 cwallet.sso
-rw-------. 1 oracle asmadmin 2555 Feb 17 13:16 ewallet.p12

Since this is an auto-login keystore it is not necessary to open it. A master key is still missing though; it needs to be created.

Create the TDE master encryption key

Important: before running this command ensure all the PDBs in your CDB are open read-write!

SQL> administer key management set key force keystore identified by &password with backup container=all;

keystore altered.

It was necessary to use the “force keystore” option due to this error:

SQL> administer key management set key identified by &password with backup container = all;
administer key management set key identified by ... with backup
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

When clearly it was open: the keystore STATUS column read OPEN_NO_MASTER_KEY, see above.

It was also necessary to specify container = all, otherwise the PDB wouldn’t have been assigned a key. This was the output of my first attempt, before I specified container = all:

SQL> administer key management set key force keystore identified by &password with backup;

keystore altered.

SQL> select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

    CON_ID WRL_PARAMETER                                 STATUS                         WALLET_TYPE          KEYSTORE
---------- --------------------------------------------- ------------------------------ -------------------- --------
         1 /u01/app/oracle/admin/SITEA/wallets/tde/      OPEN                           LOCAL_AUTOLOGIN      NONE
         2                                               OPEN                           LOCAL_AUTOLOGIN      UNITED
         3                                               OPEN_NO_MASTER_KEY             LOCAL_AUTOLOGIN      UNITED 

Note how CON_ID 3 (my PDB) doesn’t have a master key. According to the documentation it shouldn’t be necessary to specify container = all when using united mode, but apparently it is.

Validating the configuration

The documentation provides a query to check if the master key is enabled:

SQL> select con_id, masterkey_activated from v$database_key_info;

    CON_ID MAS
---------- ---
         1 YES
         2 NO
         3 YES

So both my CDB$ROOT and PDB1 have the master key activated. I found the query against v$encryption_wallet useful, too.

SQL> select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

    CON_ID WRL_PARAMETER                                      STATUS                         WALLET_TYPE          KEYSTORE
---------- -------------------------------------------------- ------------------------------ -------------------- --------
         1 /u01/app/oracle/admin/SITEA/wallets/tde/           OPEN                           AUTOLOGIN            NONE
         2                                                    OPEN                           AUTOLOGIN            UNITED
         3                                                    OPEN                           AUTOLOGIN            UNITED

The autologin wallet is now open and ready for use.

Encrypt data

Connect to the PDB and create an encrypted tablespace using the default encryption algorithm. Be advised you can specify which encryption algorithm to use in the create tablespace command. I’m just mucking around with the feature and thus am perfectly happy with the default.

SQL> alter session set container = pdb1;

Session altered.

SQL> create tablespace soe_tbs datafile size 2g autoextend on next 1g maxsize 5g 
  2  encryption encrypt;

Tablespace created.

SQL> select ets.ts#, ets.encryptionalg, ets.key_version, ets.status, ts.name
  2  from v$encrypted_tablespaces ets join v$tablespace ts on (ets.ts# = ts.ts#);

       TS# ENCRYPT KEY_VERSION STATUS     NAME
---------- ------- ----------- ---------- ------------------------------
         6 AES128            0 NORMAL     SOE_TBS
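As mentioned, you can pick the algorithm yourself rather than relying on the default; the syntax is a small extension of the statement above. AES256 here is just an example, and the tablespace name is hypothetical:

```sql
SQL> create tablespace secure_tbs datafile size 1g
  2  encryption using 'AES256' encrypt;
```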

Now let’s load some data… As always, I use Swingbench for this purpose. I’m pointing oewizard to the existing (TDE) tablespace; that’s an important aspect!

martin@client:~/java/swingbench/bin$ 
>./oewizard -cl -create ... -scale 1 -tc 4 -ts SOE_TBS -u soe  

While the data creation job is running you can see the number of blocks encrypted increase on my encrypted tablespace:

SQL> select ets.ts#, ets.encryptionalg, ets.key_version, ets.status, 
  2     ets.blocks_encrypted, ets.blocks_decrypted, ts.name
  3    from v$encrypted_tablespaces ets 
  4     join v$tablespace ts on (ets.ts# = ts.ts#);

       TS# ENCRYPT KEY_VERSION STATUS     BLOCKS_ENCRYPTED BLOCKS_DECRYPTED NAME
---------- ------- ----------- ---------- ---------------- ---------------- ------------------------------
         6 AES128            0 NORMAL                28587                0 SOE_TBS

Similarly, you can see the number of blocks decrypted increase when you query data from the SOE schema.

13:26:06 SQL> select ets.ts#, ets.encryptionalg, ets.key_version, 
  2     ets.status, ets.blocks_encrypted, ets.blocks_decrypted, ts.name
  3      from v$encrypted_tablespaces ets 
  4      join v$tablespace ts on (ets.ts# = ts.ts#);

       TS# ENCRYPT KEY_VERSION STATUS     BLOCKS_ENCRYPTED BLOCKS_DECRYPTED NAME
---------- ------- ----------- ---------- ---------------- ---------------- ------------------------------
         6 AES128            0 NORMAL                69901                0 SOE_TBS

13:26:07 SQL> /

       TS# ENCRYPT KEY_VERSION STATUS     BLOCKS_ENCRYPTED BLOCKS_DECRYPTED NAME
---------- ------- ----------- ---------- ---------------- ---------------- ------------------------------
         6 AES128            0 NORMAL               155725           191459 SOE_TBS

13:26:57 SQL> 

Note the blocks decrypted as well: since I haven’t run charbench yet, I assume these were caused by the table scans performed while building the indexes.

Summary

This concludes the article on implementing TDE for an Oracle single-instance database. So far a lot of complexities have been omitted: no RAC, no Data Guard, no additional features requiring wallets. I am planning on extending this article to Data Guard setups, and eventually discuss a RAC setup.

Create a custom service in single instance Oracle 19c

This post demonstrates how to create a custom service in a single-instance Oracle 19c database. As per the Oracle 19c PL/SQL Packages and Types guide, this is the only Oracle database deployment option where you are allowed to use this technique. Anything to do with high availability rules this approach out straight away; the same applies to a database managed by Clusterware (both Oracle Real Application Clusters and Oracle Restart) and to Global Data Services (GDS).

Furthermore, the service_name parameter to DBMS_SERVICE.CREATE_SERVICE() is deprecated, so I wouldn’t recommend running the code in this blog post on anything newer than Oracle 19c.

If you have a 19c single-instance database where the use of DBMS_SERVICE.CREATE_SERVICE is permitted, you might find this post useful. As always, my example deals with Swingbench. I created a CDB containing a PDB named swingbench1. In addition to the default service I would like to start a custom service, swingbench1_svc. This little SQL*Plus script should do the trick.

To add a little safety net I added a short prompt reminding you that you can’t use DBMS_SERVICE.START_SERVICE in combination with RAC/Oracle Restart/Global Data Services. The script requires you to be logged in as SYS, but that’s easy enough to change if you prefer not to.

whenever sqlerror exit

set verify off

define v_service=swingbench1_svc

-- let's try to prevent problems with RAC/Oracle Restart and GDS before we start

prompt This script cannot be run if your database is managed by Clusterware (RAC/Oracle Restart)
prompt or Global Data Services (GDS). 
prompt
accept ok_to_run prompt 'Is this environment a single-instance database [y/n] '

BEGIN
    IF upper('&ok_to_run') != 'Y' THEN
        raise_application_error(-20001, 'you must not use this script with RAC/Oracle Restart/GDS');
    END IF;
END;
/


DECLARE
    v_parameters dbms_service.svc_parameter_array;
    service_exists EXCEPTION;
    service_running EXCEPTION;
    PRAGMA exception_init ( service_exists, -44303 );
    PRAGMA exception_init ( service_running, -44305);
    v_version VARCHAR2(100);
    v_compatibility VARCHAR2(100);
    
BEGIN
    -- must be connected as SYS to a non-CDB or PDB
    IF
        sys_context('userenv', 'cdb_name') IS NOT NULL
        AND sys_context('userenv', 'con_id') <= 2
    THEN
        raise_application_error(-20002, 'you must be connected to a PDB');
    END IF;

    IF sys_context('userenv', 'session_user') != 'SYS' THEN
        raise_application_error(-20002, 'you must be logged in as SYS to run this code');
    END IF;

    -- make sure this is 19c
    dbms_utility.db_version(v_version, v_compatibility);
    if v_version != '19.0.0.0.0' then
        raise_application_error(-20003, 'you must run this script in a 19c database');
    end if;    

    -- create the service, there is no need to provide any parameters
    -- for a single instance database. Ignore the error should the service exist
    BEGIN
        dbms_service.create_service(
            service_name => '&v_service', 
            network_name => '&v_service',
            parameter_array => v_parameters);
    EXCEPTION
        WHEN service_exists THEN
            NULL;
        WHEN others THEN
            raise;
    END;
            
    -- and start it. Ignore an error in case it's running
    BEGIN
        dbms_service.start_service('&v_service');
    EXCEPTION
        WHEN service_running THEN
            NULL;
        WHEN others THEN
            raise;
    END;
END;
/

-- make sure the service starts when the database opens
CREATE OR REPLACE TRIGGER SYS.SERVICES_TRIG
AFTER STARTUP ON DATABASE 
BEGIN
    IF sys_context('userenv','database_role') = 'PRIMARY' THEN
        dbms_service.start_service('&v_service');
    END IF;
END;
/

Once the code is deployed, the service will start with the PDB:

SQL> select name, open_mode, con_id from v$pdbs;

NAME                           OPEN_MODE      CON_ID
------------------------------ ---------- ----------
PDB$SEED                       READ ONLY           2
SWINGBENCH1                    MOUNTED             4

SQL> select con_id, name from v$active_services order by 1,2;

    CON_ID NAME
---------- ------------------------------
         1 CDB
         1 CDBXDB
         1 SYS$BACKGROUND
         1 SYS$USERS
         4 swingbench1

SQL> alter pluggable database SWINGBENCH1 open;

Pluggable database altered.

SQL> select con_id, name from v$active_services order by 1,2;

    CON_ID NAME
---------- ------------------------------
         1 CDB
         1 CDBXDB
         1 SYS$BACKGROUND
         1 SYS$USERS
         4 swingbench1
         4 swingbench1_svc

6 rows selected.

That should do it: if I can’t rely on Clusterware or Global Data Services to manage services for my database, I can use this approach to create an “application” or custom service in my single instance database. And it works:

SQL> conn martin@(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = server1.example.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = swingbench1_svc)))
Enter password: 
Connected.
SQL> select sys_context('userenv','service_name') from dual;

SYS_CONTEXT('USERENV','SERVICE_NAME')
----------------------------------------------------------
swingbench1_svc

I like it when a plan comes together.

Swingbench: creating benchmark schemas with strong passwords

This post describes my experience creating the SOE user account in an Oracle 21 database provided by Oracle’s Database Cloud Service (DBCS). The same principle should apply to earlier releases as well; at the end of the day it boils down to the use of Oracle’s password verification functions. To avoid any ambiguity: the use of password verification functions isn’t a cloud-only feature ;)

Any security conscious administrator should enforce strong passwords and good password hygiene. There is a lot to be said about the topic, too much even for me :) Please refer to the Database Security Guide for your release for an introduction to the topic in the context of the Oracle database and take it from there.

Password Verify Functions

Quite recently I wanted to install Swingbench in my Oracle 21 system, hosted in Oracle’s Cloud Infrastructure (OCI). Unsurprisingly the database enforces strong passwords. Connected to my PDB (“demo_pdb”) I noticed the following in dba_profiles:

SQL> select profile, resource_name, limit from dba_profiles 
  2  where resource_name = 'PASSWORD_VERIFY_FUNCTION';

PROFILE 		       RESOURCE_NAME			LIMIT
------------------------------ -------------------------------- --------------------------------------------------
DEFAULT 		       PASSWORD_VERIFY_FUNCTION 	ORA12C_STRONG_VERIFY_FUNCTION
ORA_CIS_PROFILE 	       PASSWORD_VERIFY_FUNCTION 	ORA12C_VERIFY_FUNCTION
ORA_STIG_PROFILE	       PASSWORD_VERIFY_FUNCTION 	ORA12C_STIG_VERIFY_FUNCTION

SQL> 

Unsurprisingly, a weak password will not work in these circumstances:

SQL> create user martin identified by superWeakPassword;
create user martin identified by superWeakPassword
*
ERROR at line 1:
ORA-28003: password verification for the specified password failed
ORA-20000: password must contain 2 or more digits

In other words, you need to provide a strong password for Swingbench’s schema creation wizards as well.

Calling OEWizard

I commonly use the command line to create Swingbench’s benchmark schemas. This time around I wanted to create the Swingbench Order Entry (SOE) schema. To adhere to the password complexity rules I have to provide Oracle with a password containing special characters. As per the CREATE USER SQL command, passwords containing special characters need to be enclosed in double quotes. Now the trick is to read the section on quoting in bash(1) correctly.

To keep it short, you need to wrap the actual password in a single-quote/double-quote combination, like so:

./oewizard -allindexes ... -u soe -p '"superSafeTempPassword"' ...

This way the password makes it all the way to oewizard, allowing it to create the user successfully.
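You can see the effect of the nested quotes in the shell itself. A quick sketch (the password is obviously a placeholder):

```shell
# bash removes the outer single quotes and passes the inner double
# quotes through verbatim - exactly what oewizard needs to receive
printf '%s\n' '"superSafeTempPassword"'
# prints "superSafeTempPassword" (including the double quotes)
```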

Change those passwords immediately

Unfortunately there is no other way than to provide passwords to oewizard on the command line. They will be visible to other users on the system, so make sure to change them as soon as the wizard has finished the schema creation.

Password complexity rules

By the way, the Oracle-provided password verification functions and their password complexity rules are explained in chapter 3 of the Database Security Guide. If you get ORA-28003/ORA-20000 in OEWizard, or later in SQL*Plus when changing the password, your new password doesn’t adhere to the complexity rules.
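If you need inspiration for a compliant temporary password, something along these lines may help. This is only a sketch; adjust the appended characters to the complexity rules of the verify function your profile actually uses:

```shell
# take 12 random alphanumeric characters from /dev/urandom and append
# a special character, upper/lower-case letters and two digits to
# satisfy typical complexity rules (tune this for your verify function)
password="$(head -c 256 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 12)_Xy12"
echo "${#password}"   # 17 characters in total
```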

Happy benchmarking!

Ansible tips’n’tricks: configuring the Ansible Dynamic Inventory for OCI – Oracle Linux 7

I have previously written about the configuration of the Ansible Dynamic Inventory for OCI. The aforementioned article focused on Debian, and I promised an update for Oracle Linux 7. You are reading it now.

The biggest difference between the older post and this one is the ability to use YUM in Oracle Linux 7. Rather than manually installing Ansible, the Python SDK and the OCI collection from Ansible Galaxy you can make use of the package management built into Oracle Linux 7 and Oracle-provided packages.

Warning about the software repositories

All the packages referred to later in the article are either provided by Oracle’s Extra Packages for Enterprise Linux (EPEL) repository or the development repo. Both repositories are listed in a section labelled “Packages for Test and Development” in Oracle’s yum server. As per https://yum.oracle.com/oracle-linux-7.html, these packages come with the following warning:

Note: The contents in the following repositories are for development purposes only. Oracle suggests these not be used in production.

This is really important! Please make sure you understand the implications for your organisation. If this caveat is a show-stopper for you, please refer to the manual installation of the tools in my earlier article for an alternative approach.

I’m ok with the restriction as it’s my lab anyway, with myself as the only user. No one else to blame if things go wrong :)

Installing the software

You need to install a few packages from Oracle’s development repositories if you accept the warning quoted above. One of the components you will need – oci-ansible-collection – requires Python 3, so there is no need to install Ansible with support for Python 2.

The first step is to enable the necessary repositories:

sudo yum-config-manager --enable ol7_developer_EPEL
sudo yum-config-manager --enable ol7_developer

Once that’s done I can install the OCI collection. This package pulls all the other RPMs I need as dependencies.

[opc@dynInv ~]$ sudo yum install oci-ansible-collection

...

--> Finished Dependency Resolution

Dependencies Resolved

==================================================================================
 Package                  Arch     Version             Repository            Size
==================================================================================
Installing:
 oci-ansible-collection   x86_64   2.19.0-1.el7        ol7_developer        6.6 M
Installing for dependencies:
 ansible-python3          noarch   2.9.18-1.el7        ol7_developer_EPEL    16 M
 python3-jmespath         noarch   0.10.0-1.el7        ol7_developer         42 k
 python36-asn1crypto      noarch   0.24.0-7.el7        ol7_developer        179 k
 python36-cryptography    x86_64   2.3-2.el7           ol7_developer        501 k
 python36-idna            noarch   2.10-1.el7          ol7_developer_EPEL    98 k
 python36-jinja2          noarch   2.11.1-1.el7        ol7_developer_EPEL   237 k
 python36-markupsafe      x86_64   0.23-3.0.1.el7      ol7_developer_EPEL    32 k
 python36-paramiko        noarch   2.1.1-0.10.el7      ol7_developer_EPEL   272 k
 python36-pyasn1          noarch   0.4.7-1.el7         ol7_developer        173 k
 python36-pyyaml          x86_64   5.1.2-1.0.2.el7     ol7_developer        198 k
 python36-six             noarch   1.14.0-2.el7        ol7_developer_EPEL    33 k
 sshpass                  x86_64   1.06-1.el7          ol7_developer_EPEL    21 k

Transaction Summary
==================================================================================
Install  1 Package (+12 Dependent packages)

Total download size: 25 M
Installed size: 233 M
Is this ok [y/d/N]: 

Once all packages are installed you should be in a position to test the configuration. The article assumes the OCI Python SDK is already configured. If not, head over to the documentation for instructions on how to do so.

Verifying the installation

Out of habit I run ansible --version once the software is installed to make sure everything works as expected. When I tried this right after the installation, I noticed that Ansible seemingly wasn’t present:

[opc@dyninv ~]$ which ansible
/usr/bin/which: no ansible in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/opc/.local/bin:/home/opc/bin)

It is present though, and it took me a minute to understand the way Oracle packaged Ansible: Ansible/Python3 is found in ansible-python3 instead of ansible. A quick check of the package’s contents revealed that a suffix was added, for example:

[opc@dyninv ~]$ ansible-3 --version
ansible-3 2.9.18
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/opc/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible-3
  python version = 3.6.8 (default, Mar  9 2021, 15:08:44) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)]
[opc@dyninv ~]$ 

An important detail can be found in the last line: the python version is reported to be 3.6.8, at least it was at the time of writing.
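If the -3 suffix doesn’t roll off your fingers, you can create unsuffixed symbolic links in a directory on your PATH. A small, optional convenience, assuming the suffixed binaries live in /usr/bin as shown above:

```shell
# ~/bin is on the PATH as per the "which" output earlier; adjust the
# tool list to the ansible-* commands you actually use
mkdir -p ~/bin
for tool in ansible ansible-playbook ansible-inventory ansible-galaxy; do
  ln -sf "/usr/bin/${tool}-3" ~/bin/"${tool}"
done
```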

Testing the Dynamic Inventory

Before going into details about the dynamic inventory, first I’d like to repeat a warning I had in my older post as well:

Remember that the use of the Dynamic Inventory plugin is a great time saver, but comes with a risk. If you aren’t careful, you can end up running playbooks against far too many hosts. Clever Identity and Access Management (IAM) and the use of filters in the inventory are a must to prevent accidents. And don’t ever use hosts: all in your playbooks! Principle of least privilege is key.

Ansible configuration

With the hard work completed and out of the way it’s time to test the dynamic inventory. First of all I need to tell Ansible to enable the Oracle collection. I’m doing this in ~/.ansible.cfg:

[opc@dyninv ansible]$ cat ~/.ansible.cfg 
[defaults]
stdout_callback = debug

[inventory]
enable_plugins = oracle.oci.oci

The next file to be created is the dynamic inventory file. It needs to be named following the Ansible convention:

filename.oci.yml

You are only allowed to change the first part (“filename”) or else you get an error. The example file contains the following lines, limiting the output to a particular compartment and set of tags, following my own advice from above.

plugin: oracle.oci.oci

hostname_format: "fqdn"

filters:
 defined_tags: { "project": { "name": "simple-app" } } 

regions:
- eu-frankfurt-1

compartments:
- compartment_ocid: "ocid1.compartment.oc1..aaa...a"
  fetch_hosts_from_subcompartments: true

With the setup complete I can graph the inventory:

[opc@dyninv ansible]$ ansible-inventory-3 --inventory dynInv.oci.yml --graph
...
@all:
  |--@IHsr_EU-FRANKFURT-1-AD-2:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@IHsr_EU-FRANKFURT-1-AD-3:
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |--@all_hosts:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@ougdemo-department:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@project#name=simple-app:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@region_eu-frankfurt-1:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@tag_role=appserver:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |--@tag_role=bastionhost:
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@ungrouped:

Happy Automating!

Summary

It’s quite a time saver not having to install all components of the toolchain yourself. By pulling packages from Oracle’s yum repositories I can also count on updates being made available, including security and bug fixes.

Device name persistence in the cloud: OCI + Terraform

This is a really short post (by my standards at least) demonstrating how I ensure device name persistence in Oracle Cloud Infrastructure (OCI). Device name persistence matters for many reasons, not least for my Ansible scripts expecting a given block device to be of a certain size and used for a specific purpose. And I’m too lazy to write discovery code in Ansible; I just want to be able to use /dev/oracleoci/oraclevdb for LVM so that I can install the database.

The goal is to provision a VM with a sufficient number of block devices for use with the Oracle database. I wrote about the basics of device name persistence in December last year. In my earlier post I used the OCI Command Line Interface (CLI). Today I rewrote my code, switching from shell to Terraform.

As always I shall warn you that creating cloud resources as shown in this post will incur cost, so please make sure you are aware of that. You should also be authorised to spend money if you use the code for your own purposes.

Terraform Compute and Block Storage

When creating a VM in OCI, you make use of the oci_core_instance Terraform resource. Amongst the arguments you pass to it are the (operating system) image and the boot volume size. The boot volume is attached to the VM instance without any further input on your behalf.

Let’s assume you have already defined a VM resource named sitea_instance in your Terraform code.

I generally attach 5 block volumes to my VMs unless performance requirements mandate a different approach.

  • Block device number 1 hosts the database binaries
  • Devices number 2 and 3 are used for +DATA
  • The remaining devices (4 and 5) will be used for +RECO

Creating block volumes

The first step is to create block volumes. I know I want five, and I know they need to end up as /dev/oracleoci/oraclevd[b-f]. Since I’m pretty lazy I thought I’d go with some kind of loop instead of hard-coding 5 block devices. It should also allow for more flexibility in the long run.

I tried to use the count meta-argument but failed to get it to work the way I wanted, which might be a PEBKAC issue. The other option in Terraform is to use the for_each meta-argument instead. This sounded a lot better for my purpose. To keep my code flexible I decided to store the future block devices’ names in a variable:

variable "block_volumes" {
    type = list(string)
    default = [ 
        "oraclevdb",
        "oraclevdc",
        "oraclevdd",
        "oraclevde",
        "oraclevdf"
    ]
}
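Thanks to the variable, the number of volumes can be changed per environment without touching the resource code. A hypothetical override in a terraform.tfvars file could look like this:

# terraform.tfvars (hypothetical): a smaller environment with two volumes
block_volumes = [
    "oraclevdb",
    "oraclevdc"
]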

Remember that Oracle assigns /dev/oracleoci/oraclevda to the boot volume. You definitely want to leave that one alone.

Next I’ll use a for_each block to get the block device name. I’m not sure if this is considered good code; all I know is that it does the job. The Terraform resource used to create block volumes is named oci_core_volume:

resource "oci_core_volume" "sitea_block_volume" {
  for_each = toset(var.block_volumes)

  availability_domain  = data.oci_identity_availability_domains.local_ads.availability_domains.0.name
  compartment_id       = var.compartment_ocid
  display_name         = "sitea-${each.value}"
  size_in_gbs          = 50

}

This takes care of creating 5 block volumes. On their own they aren’t very useful yet; they need to be attached to a VM.

Attaching block devices to the VM

In the next step I have to create a block device attachment. This is where the count meta-argument failed me, as I couldn’t find a way to generate the persistent device name. I got around that issue using for_each, as shown here:

resource "oci_core_volume_attachment" "sitea_block_volume_attachement" {
  for_each = toset(var.block_volumes)

  attachment_type = "iscsi"
  instance_id     = oci_core_instance.sitea_instance.id
  volume_id       = oci_core_volume.sitea_block_volume[each.value].id
  device          = "/dev/oracleoci/${each.value}"
}

Using the contents of each.value I can refer to the block volume and also assign a suitable device name. Note that I’m specifying “iscsi” as the attachment type. Instead of the remote-exec provisioner I rely on cloud-init to make my iSCSI devices available to the VM.
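To double-check the mapping before logging on to the VM, a Terraform output can print it at the end of the apply. A sketch based on the resource names above (the device attribute is exported by oci_core_volume_attachment):

# hypothetical output mapping each volume name to its persistent device
output "sitea_device_mapping" {
  value = {
    for name, attachment in oci_core_volume_attachment.sitea_block_volume_attachement :
    name => attachment.device
  }
}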

The result

Once the Terraform script completes, I have a VM with block storage ready for Ansible provisioning scripts.

[opc@sitea ~]$ ls -l /dev/oracleoci/
total 0
lrwxrwxrwx. 1 root root 6 Mar 25 14:47 oraclevda -> ../sda
lrwxrwxrwx. 1 root root 7 Mar 25 14:47 oraclevda1 -> ../sda1
lrwxrwxrwx. 1 root root 7 Mar 25 14:47 oraclevda2 -> ../sda2
lrwxrwxrwx. 1 root root 7 Mar 25 14:47 oraclevda3 -> ../sda3
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdb -> ../sdc
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdc -> ../sdd
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdd -> ../sde
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevde -> ../sdb
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdf -> ../sdf
[opc@sitea ~]$ lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   50G  0 disk 
├─sda1               8:1    0  100M  0 part /boot/efi
├─sda2               8:2    0    1G  0 part /boot
└─sda3               8:3    0 48,9G  0 part 
  ├─ocivolume-root 252:0    0 38,9G  0 lvm  /
  └─ocivolume-oled 252:1    0   10G  0 lvm  /var/oled
sdb                  8:16   0   50G  0 disk 
sdc                  8:32   0   50G  0 disk 
sdd                  8:48   0   50G  0 disk 
sde                  8:64   0   50G  0 disk 
sdf                  8:80   0   50G  0 disk 

Summary

There are many ways to complete tasks, and cloud providers usually offer plenty of them. I previously wrote about ensuring device name persistence using the OCI CLI whereas this post covers Terraform. Looking back and comparing both I have to say that I like the new approach better.

Oracle Database Cloud Service: Create a database from backup using Terraform

A common DBA task is to ensure that a development-type environment is refreshed. In a typical on-premises case a “dev refresh” involves quite a bit of scripting in various programming languages. Whilst that’s a perfectly fine approach, it can be done a lot more simply when you consider the use of the cloud. My example uses Oracle’s Database Cloud Service (DBCS).

I prefix all my cloud posts with a similar warning, and this is no exception. Using cloud services costs money, so please make sure you are authorised to make use of these services. You also need to ensure you are licensed appropriately.

The Scenario

I am recreating a typical scenario: a database backup acts as the source for the “DEV” environment. To keep this post simple-ish, let’s assume I can use the backup as it is. The database backup is located in Oracle Cloud Infrastructure (OCI) Object Storage.

Implementation

Writing a piece of Terraform code with the intention of storing it in version control requires the use of variables, at least in my opinion. Otherwise, any change to the input parameters will result in git marking the file as modified. And you certainly don’t want to store passwords in code, ever.

You’ll see variables used throughout in my example code.

Getting backup details

Backup details for my source database are provided by a database backups data source. My requirement is quite simple: just take the latest backup and use it for the restore operation.

#
# get the database backups for src_db_ocid
#
data "oci_database_backups" "src_bkp" {
  database_id    = var.src_db_ocid
} 

The database backup to grab is element 0 in the resulting list of backups provided by the data source.
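If the backup is referenced more than once, a local value can give element 0 a readable name. A hypothetical sketch; the resource later in this post references the data source directly instead:

# hypothetical: name the newest backup once, use it everywhere
locals {
  latest_backup = data.oci_database_backups.src_bkp.backups.0
}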

Thinking about passwords

Passwords are a tricky affair in OCI. It would be great if we could lift them from (OCI) Vault, but this wasn’t possible at the time of writing. A GitHub issue has been raised but didn’t seem to gain much momentum. There are workarounds though; please refer to this excellent post by Yevgeniy Brikman on the topic. I’ll leave it as an exercise to the reader to work out the best strategy.

Since Terraform v0.14 it is possible to declare a variable to be “sensitive”. That sounds great:

variable "new_admin_pwd" {
  type      = string
  sensitive = true
}

variable "backup_tde_password" {
  type      = string
  sensitive = true
}

Except they aren’t quite there yet: all sensitive information still appears in the state file in plain text :(
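Until then, you can at least keep sensitive values out of version control and out of .tfvars files by supplying them through environment variables. Terraform populates any declared variable from a matching TF_VAR_-prefixed environment variable; the values below are obviously placeholders:

```shell
# Terraform reads var.<name> from TF_VAR_<name>, keeping secrets out
# of version-controlled files (they still end up in the state file,
# as noted above!)
export TF_VAR_new_admin_pwd='change-me'          # placeholder value
export TF_VAR_backup_tde_password='change-me'    # placeholder value
# terraform plan   # would now pick up both variables
```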

Creating the DB System

The final step is to create the database system. In my case, I only need a single resource:

resource "oci_database_db_system" "dev_system" {

  # the AD of the new environment has to match the AD where
  # the backup is stored (a property exported by the data source)
  availability_domain = data.oci_database_backups.src_bkp.backups.0.availability_domain

  # instruction to create the database from a backup
  source = "DB_BACKUP"

  # Some of these properties are hard-coded to suit my use case. 
  # Your requirement is almost certainly different. Make sure you
  # change parameters as required
  compartment_id          = var.compartment_ocid
  database_edition        = "ENTERPRISE_EDITION"
  data_storage_size_in_gb = 256
  hostname                = "dev"
  shape                   = "VM.Standard2.1"
  node_count              = 1
  ssh_public_keys         = [var.ssh_public_key]
  subnet_id               = var.subnet_id
  nsg_ids                 = [var.nsg_id]
  license_model           = "LICENSE_INCLUDED"

  display_name = "development DB system"

  db_home {

    database {
      # the admin password for the _new_ database
      admin_password = var.new_admin_pwd

      # this is from the source backup!
      backup_tde_password = var.backup_tde_password
      backup_id           = data.oci_database_backups.src_bkp.backups.0.id

      db_name = "DEV"
    }

  }

  db_system_options {
    storage_management = "ASM"
  }
}

This is all it takes. The majority of input parameters are provided as variables to make the script a little more portable between environments and easier to check into version control.

A short terraform apply later a new database system is created. Happy Automating!

Creating custom database binaries in Oracle Cloud Infrastructure

Oracle Cloud Infrastructure (OCI) enables users to run the database in many different ways. Starting with Infrastructure as a Service (IaaS) deployments all the way to Autonomous Database (ADB), you can choose the offering that suits you best based on how much control you want to retain (or give up). While researching Oracle’s Database Cloud Service (DBCS) I noticed a nice feature that you might also find worthwhile: custom software images.

Some Background

Depending on your choice of cloud deployment (IaaS – DBCS/ExaCS – ADB), administrators can be freed from some of the more mundane DBA tasks. The rule is simple: the more the service provider does for you, the less influence you have over how things are done.

Making changes to the database binaries in managed cloud environments is often impossible. In OCI it’s possible to create “custom” binaries, which is what this post is about. I was inspired by a recent post from @MikeDietrichDE about recommended patches on top of 19c. Let’s see if I can create a custom image in OCI using the My Oracle Support (MOS) note as the source.

Be advised that following the steps in this short article will cost you money. Please consult the OCI documentation for details.

Creating a custom set of binaries

I’m using the OCI Command Line Interface (CLI) to create the custom software image. To speed things up I ran the CLI commands from within the cloud shell.

Documentation References

The official documentation has more details about custom software images for DBCS systems if you’d like to get the full story. The CLI calls are described in a different section of the documentation. Note that I was using CLI version 2.21.5.

Building the binaries

My goal is to create a custom set of binaries on top of 19.10.0, for example by including patch 123456789 (it obviously doesn’t exist, substitute it with your patch number). I deliberately didn’t run the compatibility check to see what happens. In the real world I strongly advise testing the patch strategy on a VM first to see if it works. The feedback loop is much tighter that way should any problem occur.

The command to create the software image is reasonably straightforward:

$ oci db database-software-image create \
--compartment-id "${CID}" \
--database-version "19.0.0.0" \
--display-name "210322: custom 19.10.0 + 123456789" \
--patch-set "19.10.0.0" \
--database-software-image-one-offs ' [ "123456789" ] '

Make sure not to forget any of the “.0”. According to the CLI you have to provide the one-off patch as a JSON data structure. It looks like a regular array of strings to me:

$ oci db database-software-image create \
--generate-param-json-input database-software-image-one-off-patches
[
  "string",
  "string"
]

Once you fire the command off, the prompt returns immediately and emits a JSON response with more details. The most important piece of information concerns the Oracle Cloud ID (OCID) of the new binaries as well as the corresponding work request ID. The actual work of creating the software home continues in the background.
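Since the OCID is needed later on (for example to launch a DB system from the custom image), it’s worth capturing it straight away. A hedged sketch: the sample response below is trimmed to a single, made-up field; verify the exact structure against your own CLI output:

```shell
# illustrative response trimmed to the one field we care about; in
# real life you'd pipe the output of the create command shown above
response='{"data": {"id": "ocid1.databasesoftwareimage.oc1..example"}}'
CUSTOM_BINARIES_OCID=$(printf '%s' "$response" |
  python3 -c 'import json, sys; print(json.load(sys.stdin)["data"]["id"])')
echo "$CUSTOM_BINARIES_OCID"
```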

Viewing progress is where the GUI comes in handy: simply navigate to Oracle Databases > Bare Metal, VM and Exadata > Database Software Images and you should see the new image along with its status. Otherwise query the state a few times until the lifecycle state changes to “AVAILABLE”:

$ oci db database-software-image list -c "${CID}" \
--query 'data[].{release:"database-version",name:"display-name",state:"lifecycle-state",ru:"patch-set"}' \
--output table
+------------------------------------+----------+-----------+-----------+
| name                               | release  | ru        | state     |
+------------------------------------+----------+-----------+-----------+
| 210322: custom 19.10.0 + 123456789 | 19.0.0.0 | 19.10.0.0 | AVAILABLE |
+------------------------------------+----------+-----------+-----------+

Using the New Binaries

Once the binaries are created, you can specify them when creating a new database system. Still using Cloud Shell, I can create a database system with the new binaries quite easily:

oci db system launch \
--admin-password "${ADMIN_PWD}" \
--availability-domain "${AD}" \
--compartment-id "${CID}" \
--database-edition "ENTERPRISE_EDITION" \
--db-name DUMMY \
--shape "VM.Standard2.1" \
--ssh-authorized-keys-file /path/to/key.pub \
--subnet-id "${DATABASE_SUBNET_OCID}" \
--database-software-image-id "${CUSTOM_BINARIES_OCID}" \
--db-unique-name DUMMY \
--display-name "custom binaries test" \
--initial-data-storage-size-in-gb 256 \
--pdb-name DUMMYPDB \
--hostname demohost \
--cpu-core-count 1 \
--db-version "19.10.0.0" \
--node-count 1

Once the command completes I have a new database home including patch 123456789. When I checked the database home after the system was provisioned I noticed that, in addition to the one-off patch I requested, a few others were present as well. So please make sure you check the exact status using $ORACLE_HOME/OPatch/opatch lspatches after provisioning completes. As with all changes to the underlying infrastructure, make sure rigorous testing follows to find any potential issues there might be.

Summary

Oracle Cloud Infrastructure provides a very convenient way to create “customer-specific” database binary installations. The binaries are a regional resource and can be accessed from any AD in the region. According to the documentation they are stored in Object Storage.

Installing Ansible on Oracle Linux 8 for test and development use

I have previously written about installing Ansible on Oracle Linux 7 for non-production use. A similar approach can be taken to install Ansible on Oracle Linux 8. This is a quick post to show you how I did that in my Vagrant (lab) VM.

As is the case with Oracle Linux 7, the Extra Packages for Enterprise Linux (EPEL) repository is listed in a section labelled “Packages for Test and Development”. As per http://yum.oracle.com/oracle-linux-8.html, these packages come with the following warning:

Note: The contents in the following repositories are for development purposes only. Oracle suggests these not be used in production.

This is really important!

If you are ok with the limitation I just quoted from Oracle’s YUM server, please read on. If not, head back to the official Ansible documentation and use a different installation method instead. I only use Ansible in my own lab and therefore don’t mind.

Enabling the EPEL repository

The first step is to enable the EPEL repository. For quite some time now, Oracle has split the monolithic YUM configuration file into smaller, more manageable pieces. For EPEL, you need to install oracle-epel-release-el8.x86_64:

[vagrant@dev ~]$ sudo dnf info oracle-epel-release-el8.x86_64
Last metadata expiration check: 1:51:09 ago on Wed 10 Feb 2021 09:30:41 UTC.
Installed Packages
Name         : oracle-epel-release-el8
Version      : 1.0
Release      : 2.el8
Architecture : x86_64
Size         : 18 k
Source       : oracle-epel-release-el8-1.0-2.el8.src.rpm
Repository   : @System
From repo    : ol8_baseos_latest
Summary      : Extra Packages for Enterprise Linux (EPEL) yum repository
             : configuration
License      : GPLv2
Description  : This package contains the  Extra Packages for Enterprise Linux
             : (EPEL) yum repository configuration.

[vagrant@dev ~]$  

A quick sudo dnf install oracle-epel-release-el8 will install the package and create the EPEL repository configuration. At this stage the new repository is known, but still disabled. This is what it looked like on my (custom-built) Oracle Linux 8.3 Vagrant box, booted into UEK 6:

[vagrant@dev ~]$ sudo dnf repolist
repo id           repo name
ol8_UEKR6         Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)
ol8_appstream     Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest Oracle Linux 8 BaseOS Latest (x86_64)
[vagrant@dev ~]$  

If you are ok with the caveat mentioned earlier (development purpose, no production use…, see above) you can enable the EPEL repository:

[vagrant@dev ~]$ sudo yum-config-manager --enable ol8_developer_EPEL
[vagrant@dev ~]$ sudo dnf repolist
repo id            repo name
ol8_UEKR6          Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)
ol8_appstream      Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest  Oracle Linux 8 BaseOS Latest (x86_64)
ol8_developer_EPEL Oracle Linux 8 EPEL Packages for Development (x86_64)
[vagrant@dev ~]$   

The output of dnf repolist confirms that EPEL is now enabled.

Installing Ansible

With the repository enabled you can search for Ansible:

[vagrant@dev ~]$ sudo dnf info ansible
Last metadata expiration check: 0:00:10 ago on Wed 10 Feb 2021 11:26:57 UTC.
Available Packages
Name         : ansible
Version      : 2.9.15
Release      : 1.el8
Architecture : noarch
Size         : 17 M
Source       : ansible-2.9.15-1.el8.src.rpm
Repository   : ol8_developer_EPEL
Summary      : SSH-based configuration management, deployment, and task
             : execution system
URL          : http://ansible.com
License      : GPLv3+
Description  : Ansible is a radically simple model-driven configuration
             : management, multi-node deployment, and remote task execution
             : system. Ansible works over SSH and does not require any software
             : or daemons to be installed on remote nodes. Extension modules can
             : be written in any language and are transferred to managed
             : machines automatically.

[...]

[vagrant@dev ~]$  

Mind you, 2.9.15 was the current release at the time of writing. If you hit the blog by means of a search engine, the version will most likely be different. Let’s install Ansible:

[vagrant@dev ~]$ sudo dnf install ansible ansible-doc
Last metadata expiration check: 0:01:06 ago on Wed 10 Feb 2021 11:26:57 UTC.
Dependencies resolved.
================================================================================
 Package            Arch   Version                     Repository          Size
================================================================================
Installing:
 ansible            noarch 2.9.15-1.el8                ol8_developer_EPEL  17 M
 ansible-doc        noarch 2.9.15-1.el8                ol8_developer_EPEL  12 M
Installing dependencies:
 python3-babel      noarch 2.5.1-5.el8                 ol8_appstream      4.8 M
 python3-jinja2     noarch 2.10.1-2.el8_0              ol8_appstream      538 k
 python3-jmespath   noarch 0.9.0-11.el8                ol8_appstream       45 k
 python3-markupsafe x86_64 0.23-19.el8                 ol8_appstream       39 k
 python3-pip        noarch 9.0.3-18.el8                ol8_appstream       20 k
 python3-pytz       noarch 2017.2-9.el8                ol8_appstream       54 k
 python3-pyyaml     x86_64 3.12-12.el8                 ol8_baseos_latest  193 k
 python3-setuptools noarch 39.2.0-6.el8                ol8_baseos_latest  163 k
 python36           x86_64 3.6.8-2.module+el8.3.0+7694+550a8252
                                                       ol8_appstream       19 k
 sshpass            x86_64 1.06-9.el8                  ol8_developer_EPEL  28 k
Enabling module streams:
 python36                  3.6                                                 

Transaction Summary
================================================================================
Install  12 Packages

Total download size: 35 M
Installed size: 459 M
Is this ok [y/N]: y

[...]  

Installed:
  ansible-2.9.15-1.el8.noarch                                                   
  ansible-doc-2.9.15-1.el8.noarch                                               
  python3-babel-2.5.1-5.el8.noarch                                              
  python3-jinja2-2.10.1-2.el8_0.noarch                                          
  python3-jmespath-0.9.0-11.el8.noarch                                          
  python3-markupsafe-0.23-19.el8.x86_64                                         
  python3-pip-9.0.3-18.el8.noarch                                               
  python3-pytz-2017.2-9.el8.noarch                                              
  python3-pyyaml-3.12-12.el8.x86_64                                             
  python3-setuptools-39.2.0-6.el8.noarch                                        
  python36-3.6.8-2.module+el8.3.0+7694+550a8252.x86_64                          
  sshpass-1.06-9.el8.x86_64                                                     

Complete!
[vagrant@dev ~]$   

A quick test reveals the software works as advertised:

[vagrant@dev ~]$ ansible --version
ansible 2.9.15
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Nov  5 2020, 18:03:20) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5.0.1)]
[vagrant@dev ~]$  

It seems that Ansible has been installed successfully.