Author Archives: Martin Bach

About Martin Bach

Oracle DBA and Linux enthusiast, part-time author and presenter.

Silent installation: Oracle Restart 19c, ASM Filter Driver, RHCK edition

As promised in the earlier post, here are my notes about installing Oracle Restart 19c on Oracle Linux 7.7 using the Red Hat Compatible Kernel (RHCK). Please consult the ACFS/ASMFD compatibility matrix (My Oracle Support DocID 1369107.1) for the latest information about ASMFD compatibility with the various kernels.

Why am I starting the series with a seemingly “odd” kernel, at least from the point of view of Oracle Linux? If you try to install the Oracle Restart base release with UEK 5, you get strange error messages back from gridSetup telling you about invalid ASM disks. While that’s probably true, it’s a secondary error. The main cause of the problem is this:

[root@server5 bin]# ./afddriverstate supported
AFD-620: AFD is not supported on this operating system version: '4.14.35-1902.300.11.el7uek.x86_64'
AFD-9201: Not Supported
AFD-9294: updating file /etc/sysconfig/oracledrivers.conf 

This is easy to run into since gridSetup.sh doesn’t validate this for you when running in silent mode. The GUI version of the installer protects you from the mistake, though. Upgrading to the latest UEK 5 doesn’t change this message; you need to check the certification matrix to learn that Oracle Restart 19.4.0 or later is required for UEK 5 if you’d like to use ASMFD (or ACFS for that matter). This scenario will be covered in a later post.

Using the Red Hat Compatible Kernel alleviates this problem for me. Just be aware of the usual caveats when using the Red Hat kernel on Oracle Linux, such as YUM switching the default kernel back during yum upgrade. I’d also like to reiterate that this post isn’t an endorsement of ASM Filter Driver, but since the documentation was a little unclear I thought I’d write up how I got to a working installation. It is up to you to ensure that ASMFD is a workable solution for your environment by following industry best practices.
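
On Oracle Linux 7 the default kernel for newly installed kernel packages is controlled via /etc/sysconfig/kernel. A minimal sketch of how I’d pin RHCK; the DEFAULTKERNEL setting is standard on Oracle Linux 7, but double-check the file on your system before changing it:

[root@server4 ~]# # make RHCK (instead of UEK) the default for newly installed kernels
[root@server4 ~]# sed -i 's/^DEFAULTKERNEL=.*/DEFAULTKERNEL=kernel/' /etc/sysconfig/kernel
[root@server4 ~]# grep DEFAULTKERNEL /etc/sysconfig/kernel
DEFAULTKERNEL=kernel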

Configuration Options

In the post introducing this series I claimed to have identified two options for installing Oracle Restart 19c using ASMFD: the first is to use UDEV to prepare the ASM block devices, the second is to label the ASM disks using asmcmd afd_label.

Huh, UDEV? That hasn’t really been blogged about at all in the context of ASMFD, or at least I didn’t find anyone who did. I’m inferring the possibility of using UDEV from “Configuring Oracle ASM Filter Driver During Installation” (link to documentation):

“If you do not use udev on the system where the Oracle Grid Infrastructure is installed, then you can also complete the following procedure to provision disks for Oracle ASMFD before the installer is launched”
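
For the curious, such a UDEV rule can be as simple as the following sketch. The file name and the device match are my own choices for this KVM environment, so adjust both to your setup:

[root@server4 ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
# grant the grid owner access to the partitions intended for ASM (hypothetical example)
KERNEL=="vd[c-f]1", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@server4 ~]# udevadm control --reload-rules && udevadm trigger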

You actually only have to choose one of them. Let’s start with the more frequently covered approach of labelling disks using asmcmd.

My environment

I have applied all patches available up to March 26th to my lab environment. The Oracle Linux release I’m using is 7.7:

[root@server4 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.7 

The KVM VM I’m using for this blog post uses the latest Red Hat Compatible Kernel at the time of writing (kernel-3.10.0-1062.18.1.el7.x86_64). You will notice that I’m using the virtio driver, leading to “strange” device names: instead of /dev/sd* it’s /dev/vd*. My first two block devices are reserved for the O/S and Oracle; the remaining ones are going to be used for ASM. I have an old (bad?) habit of partitioning block devices for ASM, as you might notice. Most of the Oracle setup is done by the 19c preinstall RPM, which I used.
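
In case you want to do the same, installing the RPM is a single command, assuming the ol7_latest repository is enabled on your system:

[root@server4 ~]# yum install -y oracle-database-preinstall-19c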

I created a grid owner – grid – to own the Oracle Restart installation. Quite a few blog posts I came across showed the grid user’s group membership, and I’d like to do the same:

[root@server4 ~]# id -a grid 
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54328(asmadmin),54327(asmdba) 

The block devices I’m intending to use for ASM are /dev/vdc to /dev/vdf: the first two are intended for +DATA, the other two will become part of +RECO. As you can see, they are partitioned:

[root@server4 ~]# lsblk --ascii
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdf                   251:80   0   10G  0 disk 
`-vdf1                251:81   0   10G  0 part 
vdd                   251:48   0   10G  0 disk 
`-vdd1                251:49   0   10G  0 part 
vdb                   251:16   0   50G  0 disk 
`-vdb1                251:17   0   50G  0 part 
  `-oraclevg-orabinlv 252:2    0   50G  0 lvm  /u01
sr0                    11:0    1 1024M  0 rom  
vde                   251:64   0   10G  0 disk 
`-vde1                251:65   0   10G  0 part 
vdc                   251:32   0   10G  0 disk 
`-vdc1                251:33   0   10G  0 part 
vda                   251:0    0   12G  0 disk 
|-vda2                251:2    0 11.5G  0 part 
| |-rootvg-swaplv     252:1    0  768M  0 lvm  [SWAP]
| `-rootvg-rootlv     252:0    0 10.8G  0 lvm  /
`-vda1                251:1    0  500M  0 part /boot  
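
If you’d like to reproduce the partitioning, something along these lines should work. This is a sketch; double-check the device names in your own VM before running it:

[root@server4 ~]# for disk in /dev/vd{c,d,e,f}; do
>     parted -s ${disk} mklabel gpt mkpart primary 1MiB 100%
> done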

With all that out of the way it is time to cover the installation.

Labeling disks

I’m following the procedure documented in the 19c Administrator’s Guide chapter 20, section “Configuring Oracle ASM Filter Driver During Installation”. I have prepared my environment up to the step where I’d have to launch gridSetup.sh. This is a fairly well known process, and I won’t repeat it here.

Once the 19c install image has been extracted to my future Grid Home, the first step is to check if my system is supported:

[root@server4 ~]# cd /u01/app/grid/product/19.0.0/grid/bin
[root@server4 bin]# ./afddriverstate supported
AFD-9200: Supported 
[root@server4 bin]# uname -r
3.10.0-1062.18.1.el7.x86_64 

“AFD-9200: Supported” tells me that I can start labeling disks. This requires me to be root, and I have to set ORACLE_HOME and ORACLE_BASE. For some reason, the documentation suggests using /tmp as ORACLE_BASE, which I’ll use as well:

[root@server4 bin]# pwd
/u01/app/grid/product/19.0.0/grid/bin
[root@server4 bin]# export ORACLE_BASE=/tmp
[root@server4 bin]# export ORACLE_HOME=/u01/app/grid/product/19.0.0/grid
[root@server4 bin]# ./asmcmd afd_label DATA1 /dev/vdc1 --init
[root@server4 bin]# ./asmcmd afd_label DATA2 /dev/vdd1 --init 

[root@server4 bin]# ./asmcmd afd_lslbl /dev/vdc1
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                 /dev/vdc1

[root@server4 bin]# ./asmcmd afd_lslbl /dev/vdd1
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA2                                 /dev/vdd1  

Note the use of the --init flag. This is only needed if Grid Infrastructure isn’t installed yet.
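
For completeness: once Grid Infrastructure is up and running, the disks intended for +RECO can be labeled without the --init flag. A sketch, executed as the grid owner with the environment set accordingly:

[grid@server4 ~]$ asmcmd afd_label RECO1 /dev/vde1
[grid@server4 ~]$ asmcmd afd_label RECO2 /dev/vdf1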

Labeling the disks did not have an effect on the block devices’ permissions. Right after finishing the two calls to label my two block devices, this is the output from my file system:

[root@server4 bin]# ls -l /dev/vd[c-d]*
brw-rw----. 1 root disk 252, 32 Mar 27 09:46 /dev/vdc
brw-rw----. 1 root disk 252, 33 Mar 27 12:55 /dev/vdc1
brw-rw----. 1 root disk 252, 48 Mar 27 09:46 /dev/vdd
brw-rw----. 1 root disk 252, 49 Mar 27 12:58 /dev/vdd1
[root@server4 bin]#  

The output of afd_lslbl indicated that both of my disks are ready to become part of an ASM disk group, so let’s start the installer.

Call gridSetup.sh

I wasn’t able to make sense of the options in the response file until I started the installer in GUI mode and created a response file based on my choices. To cut a long story short, here is my call to gridSetup.sh:

[grid@server4 ~]$ /u01/app/grid/product/19.0.0/grid/gridSetup.sh -silent \
> INVENTORY_LOCATION=/u01/app/oraInventory \
> SELECTED_LANGUAGES=en \
> ORACLE_BASE=/u01/app/grid \
> ORACLE_HOME_NAME=ASMFD_RHCK \
> -waitforcompletion -ignorePrereqFailure -lenientInstallMode \
> oracle.install.option=HA_CONFIG \
> oracle.install.asm.OSDBA=asmdba \
> oracle.install.asm.OSASM=asmadmin \
> oracle.install.asm.diskGroup.name=DATA \
> oracle.install.asm.diskGroup.disks=/dev/vdc1,/dev/vdd1 \
> oracle.install.asm.diskGroup.diskDiscoveryString=/dev/vd* \
> oracle.install.asm.diskGroup.redundancy=EXTERNAL \
> oracle.install.asm.diskGroup.AUSize=4 \
> oracle.install.asm.configureAFD=true \
> oracle.install.crs.rootconfig.executeRootScript=false \
> oracle.install.asm.SYSASMPassword=thinkOfASuperSecretPassword \
> oracle.install.asm.monitorPassword=thinkOfASuperSecretPassword
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp

You can find the log of this install session at:
 /tmp/GridSetupActions2020-03-27_01-06-14PM/gridSetupActions2020-03-27_01-06-14PM.log

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/grid/product/19.0.0/grid/root.sh

Execute /u01/app/grid/product/19.0.0/grid/root.sh on the following nodes:
[server4]

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
/u01/app/grid/product/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp [-silent]
Note: The required passwords need to be included in the response file.
Moved the install session logs to:
/u01/app/oraInventory/logs/GridSetupActions2020-03-27_01-06-14PM
[grid@server4 ~]$

It took a little while to work out that, despite labeling the disks for ASMFD, I didn’t have to put any reference to AFD into the call to gridSetup.sh. Have a look at the ASM disk string and the block devices: that’s what I’d use if I were using UDEV rules for device name persistence. The syntax might appear counter-intuitive; the key is the “configureAFD” flag, which you need to set to true.

Since this is a lab environment I’m ok with external redundancy. Make sure you pick a redundancy level appropriate for your use case.

Running the configuration tools

The remaining steps are identical to a non-ASMFD setup. First you run orainstRoot.sh, followed by root.sh. The output of the latter showed this for me, indicating success:

[root@server4 ~]# /u01/app/grid/product/19.0.0/grid/root.sh
Check /u01/app/grid/product/19.0.0/grid/install/root_server4_2020-03-27_13-11-05-865019723.log for the output of root script

[root@server4 ~]#
[root@server4 ~]# cat /u01/app/grid/product/19.0.0/grid/install/root_server4_2020-03-27_13-11-05-865019723.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/product/19.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/product/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/server4/crsconfig/roothas_2020-03-27_01-11-06PM.log
2020/03/27 13:11:13 CLSRSC-363: User ignored prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node server4 successfully pinned.
2020/03/27 13:13:55 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

server4     2020/03/27 13:16:59     /u01/app/grid/crsdata/server4/olr/backup_20200327_131659.olr     724960844
2020/03/27 13:17:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
[root@server4 ~]# 

Well, that looks ok. Now on to the final step: configuration! As indicated in the output, you need to update the response file (/u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp) with the required passwords. For me that meant oracle.install.asm.monitorPassword and oracle.install.asm.SYSASMPassword. Once the response file was updated, I called gridSetup.sh once again:

[grid@server4 ~]$ /u01/app/grid/product/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2020-03-27_01-20-47PM

You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2020-03-27_01-20-47PM.log
Successfully Configured Software. 

And that’s it! The software has been configured successfully. Don’t forget to remove the passwords from the response file!
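
A one-liner such as the following sketch does the trick; the parameter names are the two I edited earlier:

[grid@server4 ~]$ sed -i -e 's/^oracle.install.asm.SYSASMPassword=.*/oracle.install.asm.SYSASMPassword=/' \
> -e 's/^oracle.install.asm.monitorPassword=.*/oracle.install.asm.monitorPassword=/' \
> /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp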

Verification

After a little while I have been able to configure Oracle Restart 19c/ASMFD on Oracle Linux 7.7/RHCK. Let’s check what this implies.

I’ll first look at the status of ASM Filter Driver:

[grid@server4 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM
The Oracle base has been set to /u01/app/grid
[grid@server4 ~]$ afddriverstate installed
AFD-9203: AFD device driver installed status: 'true'
[grid@server4 ~]$ afddriverstate loaded
AFD-9205: AFD device driver loaded status: 'true'
[grid@server4 ~]$ afddriverstate version
AFD-9325:     Driver OS kernel version = 3.10.0-862.el7.x86_64.
AFD-9326:     Driver build number = 190222.
AFD-9212:     Driver build version = 19.0.0.0.0.
AFD-9547:     Driver available build number = 190222.
AFD-9548:     Driver available build version = 19.0.0.0.0.
[grid@server4 ~]$  

That’s encouraging: ASMFD is loaded and works on top of kernel-3.10 (RHCK).
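
Another quick check is asmcmd afd_state. Assuming the message format is the same in your environment, the output should look similar to this:

[grid@server4 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'server4'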

I am indeed using the base release (and have to patch now!):

[grid@server4 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded. 

And … I’m also using ASMFD:

SQL> col name for a20
SQL> col path for a10
SQL> col library for a50
SQL> set lines 120
SQL> select name, path, library from v$asm_disk where group_number <> 0;

NAME                 PATH       LIBRARY
-------------------- ---------- --------------------------------------------------
DATA1                AFD:DATA1  AFD Library - Generic , version 3 (KABI_V3)
DATA2                AFD:DATA2  AFD Library - Generic , version 3 (KABI_V3)

SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/vd*, AFD:*
SQL>  

This concludes the setup of my lab environment.

Oracle Restart 19c: silent installation and ASM Filter Driver

Oracle 19c has been getting a lot of traction recently, and I have been researching various aspects around its installation and use. One topic that came up recently was the installation of Oracle Restart 19c using ASM Filter Driver. ASM Filter Driver has been around for a little while, but I never really looked at it closely. I found very little has been written about ASMFD in the context of Oracle 19c either, so I thought I’d buck the trend and write a series of posts about it (maybe I just didn’t find the relevant articles; I didn’t look too closely).

This blog post and all those that follow in the series are by no means an endorsement of the technology! My only goal is to make the documentation more accessible: I found it a little hard to work out the steps and hope to save you some time. As with every new-ish storage technology, it’s imperative to make sure (by means of rigorous testing) that it meets your requirements.

It’s not as simple as it seems

There are actually quite a few nuances to the installation process when trying to install ASM with ASMFD from the beginning, which I’ll detail in the short blog post series to follow. The idea is to install Oracle Restart 19c with ASMFD straight away, no upgrade from ASMLib, no changing from UDEV to ASMFD. Plus it’s a fresh installation, no upgrade from a previous release.

As always I’m using Oracle Linux as the basis for my tests. And since I’m a good citizen I have updated my KVM VMs to the latest and greatest at the time of writing. More details about the environment used can be found in each of the posts in the series.

How do I install ASMFD together with Oracle Restart 19c?

I have studied the documentation, and the way I see it there are essentially two ways of installing Oracle Restart 19c with ASMFD:

  • Using UDEV to change permissions on all future ASM disks
  • Labeling future ASM disks using asmcmd to achieve the same goal

According to the certification matrix (MOS 1369107.1), it also matters which Oracle Linux 7/kernel combination you are using.

The easiest thing to do should be switching Oracle Linux from UEK to the Red Hat Compatible Kernel, and I’m going to write about that first. The simplicity gained by using RHCK is slightly offset by operational caveats such as kernel upgrades. But this is a post about Oracle Restart, not the intricacies of switching from UEK to RHCK …

For quite a while now, UEK 5 has been the default kernel for Oracle Linux 7. If you’d like to install Oracle Restart 19c/ASMFD on UEK 5, you can’t do that out of the box; a little magic is necessary.

The upcoming posts in this series are all about a silent installation of Oracle Restart 19c for use with ASMFD.

Happy installing!

Passing complex data types to Ansible on the command line

Earlier this year I wrote a post about passing JSON files as --extra-vars to ansible-playbook in order to simplify deployments and to make them more flexible. JSON syntax must also be used to pass more complex data types to Ansible playbooks, which is the topic of this post. Unlike last time, though, I’ll pass the arguments directly to the playbook rather than by means of a JSON file. This should cover both methods of passing extra variables.

A lot of what you are about to read depends on Ansible configuration settings. I have used Ansible on Debian 10. When I installed it earlier today I found it to be version 2.7.7. It’s the distribution’s (stock) Ansible version:

vagrant@debian10:~$ lsb_release -a
No LSB modules are available.
Distributor ID:    Debian
Description:       Debian GNU/Linux 10 (buster)
Release:           10
Codename:          buster
vagrant@debian10:~$ ansible-playbook --version
ansible-playbook 2.7.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 3.7.3 (default, Dec 20 2019, 18:57:59) [GCC 8.3.0]
vagrant@debian10:~$  

The only change I consciously made was to set the output to debug:

vagrant@debian10:~$ export ANSIBLE_STDOUT_CALLBACK=debug

However, as with all information you find on the Internet – this post explicitly included – your mileage may vary. Don’t blindly copy/paste. Test everything you deem useful on an unimportant, disposable, lower-tier test system and make sure you understand any code before you even think about using it! Vagrant is a pretty good tool for this purpose by the way.

Having said that, let’s go over some examples.

Dictionaries

Let’s assume you’d like to use a dictionary in your playbook, like so:

---
- hosts: localhost
  connection: local
  vars:
    dictExample:
      propertyA: propertyA-key
      propertyB: propertyB-key

  tasks:
  - name: dump dictExample
    debug:
      var: dictExample 

Unsurprisingly, when invoking the playbook, the output matches the code exactly:

$ ansible-playbook -i localhost, dict-example.yml 

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [dump dictExample] ********************************************************
ok: [localhost] => {
    "dictExample": {
        "propertyA": "propertyA-key",
        "propertyB": "propertyB-key"
    }
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0 

Overriding dictExample on the command line requires the use of JSON, while paying attention to shell expansion at the same time. Here is an example:

$ ansible-playbook -i localhost, dict-example.yml --extra-vars "{
>     "dictExample": {
>       "propertyA": "'property A set on the command line'",
>       "propertyB": "'property B set on the command line'"
>     }
> }"
Using /etc/ansible/ansible.cfg as config file

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [dump dictExample] ********************************************************
ok: [localhost] => {
    "dictExample": {
        "propertyA": "property A set on the command line",
        "propertyB": "property B set on the command line"
    }
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0 

As per the Ansible documentation, variables passed as --extra-vars take precedence over those defined in the playbook.
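
If the nested double quotes in the call above look fragile to you, wrapping the entire JSON document in single quotes is an alternative that also stops the shell from expanding anything inside it. A sketch based on the same playbook:

$ ansible-playbook -i localhost, dict-example.yml --extra-vars '{
>     "dictExample": {
>       "propertyA": "property A set on the command line",
>       "propertyB": "property B set on the command line"
>     }
> }'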

Lists

Similarly it is possible to pass lists to playbooks. Here is an example:

---
- hosts: localhost
  connection: local
  vars:
    listExample:
    - one
    - two
    - three

  tasks:
  - name: dump listExample
    debug:
      var: listExample 

Invoking it with the defaults yields the expected result:

$ ansible-playbook -i localhost, list-example.yml 

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [dump listExample] ********************************************************
ok: [localhost] => {
    "listExample": [
        "one",
        "two",
        "three"
    ]
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0 

You can override listExample as shown here:

$ ansible-playbook -i localhost, list-example.yml --extra-vars "{
>     "listExample": [
>         "'commandline one'",
>         "'commandline two'",
>         "'commandline three'"
>     ]
> }"

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [dump listExample] ********************************************************
ok: [localhost] => {
    "listExample": [
        "commandline one",
        "commandline two",
        "commandline three"
    ]
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0 

Combinations

If you have worked with Perl, you probably used a hash of hashes (HoH), as it’s a very powerful data structure. Something similar is also possible in Ansible. Here is the example playbook:

---
- hosts: localhost
  connection: local
  vars:
    complexExample:
      propertyA:
      - a_one
      - a_two
      - a_three
      propertyB:
      - b_one
      - b_two
      - b_three

  tasks:
  - name: dump complexExample
    debug:
      var: complexExample 

By now you are probably tired of seeing the result of the call to the playbook, so I’ll skip that and move on to an example where I’m overriding the variable:

$ ansible-playbook -i localhost, complex-example.yml --extra-vars "{
    "complexExample": {
        "propertyA": [
            "a_one_changed",
            "a_two_changed",
            "a_three_changed"
        ],
        "propertyB": [
            "b_one_changed",
            "b_two_changed",
            "b_three_changed"
        ]
    }
}"

PLAY [localhost] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [dump complexExample] *****************************************************
ok: [localhost] => {
    "complexExample": {
        "propertyA": [
            "a_one_changed",
            "a_two_changed",
            "a_three_changed"
        ],
        "propertyB": [
            "b_one_changed",
            "b_two_changed",
            "b_three_changed"
        ]
    }
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0 

Summary

Passing variables to Ansible playbooks is a powerful way of working with automation. Apart from the simple variables covered in the previous post on the topic, you can pass dictionaries, lists, and combinations thereof using JSON notation. Happy automating!

Printing all table preferences affecting dbms_stats.gather_table_stats

Oracle 11g introduced the ability to control the behaviour of the dbms_stats package by setting preferences at the database, schema, and table level. These affect the way dbms_stats goes about doing its work. This feature has been extensively documented; I found the post by Maria Colgan exceptionally good at explaining the mechanism.

I often have to check table preferences when statistics look a little suspicious, but there isn’t an Oracle-supplied way that I’m aware of to print all table preferences, so I came up with a little script to do so. It currently supports Oracle 19c, but it’s super easy to update the code for other releases. The table, passed as a parameter, needs to be in the user’s schema:

$ cat table_prefs.sql 
set serveroutput on verify off

prompt
prompt getting table prefs for &1
prompt ----------------------------------------

declare 
        v_version varchar2(100);
        v_compat  varchar2(100);

        type prefs_t is table of varchar2(100);

        v_prefs_19c prefs_t := prefs_t(
                'APPROXIMATE_NDV_ALGORITHM',
                'AUTO_STAT_EXTENSIONS',
                'AUTO_TASK_STATUS',
                'AUTO_TASK_MAX_RUN_TIME',
                'AUTO_TASK_INTERVAL',
                'CASCADE',
                'CONCURRENT',
                'DEGREE',
                'ESTIMATE_PERCENT',
                'GLOBAL_TEMP_TABLE_STATS',
                'GRANULARITY',
                'INCREMENTAL',
                'INCREMENTAL_STALENESS',
                'INCREMENTAL_LEVEL',
                'METHOD_OPT',
                'NO_INVALIDATE',
                'OPTIONS',
                'PREFERENCE_OVERRIDES_PARAMETER',
                'PUBLISH',
                'STALE_PERCENT',
                'STAT_CATEGORY',
                'TABLE_CACHED_BLOCKS');

        procedure print_prefs(pi_prefs prefs_t) as
                v_value varchar2(100);
        begin   
                for i in pi_prefs.first .. pi_prefs.last loop
                        v_value := sys.dbms_stats.get_prefs(
                                pname => pi_prefs(i),
                                ownname => user,
                                tabname => sys.dbms_assert.sql_object_name('&1'));

                        sys.dbms_output.put_line(rpad(pi_prefs(i), 50) || ': ' || v_value);
                end loop;
        end;

begin   
        sys.dbms_utility.db_version(v_version, v_compat);

        if v_version = '19.0.0.0.0' then
                print_prefs(v_prefs_19c);
        else
                raise_application_error(-20001, 'Oracle ' || v_version || ' not yet supported');
        end if;

end;
/ 
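
Invoking the script is a matter of passing the table name as its only argument. EMPLOYEES below is a hypothetical table in the current schema, and the output should look similar to this (truncated):

SQL> @table_prefs.sql EMPLOYEES

getting table prefs for EMPLOYEES
----------------------------------------
...
CASCADE                                           : DBMS_STATS.AUTO_CASCADE
ESTIMATE_PERCENT                                  : DBMS_STATS.AUTO_SAMPLE_SIZE
...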

Extending the code is very straightforward. Simply create a new variable like v_prefs_12102 and copy/paste all the possible values from the 12c documentation for dbms_stats.get_prefs(). Then add a branch for your release, as sketched below, and off you go.
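
The branch itself is a small change to the main block; v_prefs_12102 is the hypothetical collection you’d populate from the documentation:

        if v_version = '19.0.0.0.0' then
                print_prefs(v_prefs_19c);
        elsif v_version = '12.1.0.2.0' then
                -- new branch: preference names valid in 12.1.0.2
                print_prefs(v_prefs_12102);
        else
                raise_application_error(-20001, 'Oracle ' || v_version || ' not yet supported');
        end if;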

Happy troubleshooting!

Ansible tips’n’tricks: defining --extra-vars as JSON

While I’m continuing to learn more about Ansible I noticed a nifty little thing I wanted to share: it is possible to specify --extra-vars for an Ansible playbook as a JSON document, in addition to the space-separated list of key=value pairs I have used so often. This can come in handy if you have many parameters in your play and want to test changing them without having to modify the defaults stored in group_vars/*.yml or wherever else you keep them. If you do change your global variables, you can almost be certain that your version control system notices the change and wants to commit it next time. That might not be exactly what you had in mind.

For later reference, this article was composed using Ubuntu 18.04.4 LTS with all updates up to February 3rd, 2020.

The documentation reference for this article can be found in Docs – User Guide – Working With Playbooks – Using Variables. It links to the “latest” Ansible version though, so you might have to go to your specific Ansible version’s documentation in case stuff changes.

The Playbook

Let’s assume the following, simple playbook:

---
- hosts: localhost
  connection: local
  vars:
    var1: "var1 set in playbook"
    var2: "var2 set in playbook"
    var3: "var3 set in playbook"
    var4: "var4 set in playbook"
    var5: "var5 set in playbook"
    var6: "var6 set in playbook"
    var7: "var7 set in playbook"
    var8: "var8 set in playbook"
    var9: "var9 set in playbook"

  tasks:

  - name: print var1
    debug: var=var1

  - name: print var2
    debug: var=var2

  - name: print var3
    debug: var=var3

  - name: print var4
    debug: var=var4

  - name: print var5
    debug: var=var5

  - name: print var6
    debug: var=var6

  - name: print var7
    debug: var=var7

  - name: print var8
    debug: var=var8

  - name: print var9
    debug: var=var9

I appreciate it isn’t the usual fancy code, spread out nicely into roles and group_vars, but it keeps the discussion simple enough and using a more elaborate coding structure wouldn’t change the end result, so please bear with me…

As you can see I defined 9 variables in the playbook, and each of them is assigned a value. Executing the playbook, the output is what would be expected:

$ ansible-playbook -i 127.0.0.1, -v main.yml
Using /etc/ansible/ansible.cfg as config file

PLAY [localhost] ****************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************
ok: [127.0.0.1]

TASK [print var1] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var1": "var1 set in playbook"
}

TASK [print var2] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var2": "var2 set in playbook"
}

TASK [print var3] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var3": "var3 set in playbook"
}

TASK [print var4] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var4": "var4 set in playbook"
}

TASK [print var5] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var5": "var5 set in playbook"
}

TASK [print var6] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var6": "var6 set in playbook"
}

TASK [print var7] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var7": "var7 set in playbook"
}

TASK [print var8] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var8": "var8 set in playbook"
}

TASK [print var9] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var9": "var9 set in playbook"
}

PLAY RECAP **********************************************************************************************************************************
127.0.0.1                  : ok=10   changed=0    unreachable=0    failed=0  

Not really a surprise.

Overriding variables on the command line

Now let’s assume I want to override var8 and var9 with custom values, and without changing the code. Pretty straightforward, since ansible-playbook allows us to do so.

$ ansible-playbook --help 2>&1 | grep -i extra-vars
  -e EXTRA_VARS, --extra-vars=EXTRA_VARS

With that in mind, let’s change var8 and var9:

$ ansible-playbook -i 127.0.0.1, -v main.yml --extra-vars "var8='var8 set on the command line' var9='var9 set on the command line'"
Using /etc/ansible/ansible.cfg as config file

PLAY [localhost] ****************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************
ok: [127.0.0.1]

TASK [print var1] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var1": "var1 set in playbook"
}

TASK [print var2] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var2": "var2 set in playbook"
}

TASK [print var3] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var3": "var3 set in playbook"
}

TASK [print var4] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var4": "var4 set in playbook"
}

TASK [print var5] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var5": "var5 set in playbook"
}

TASK [print var6] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var6": "var6 set in playbook"
}

TASK [print var7] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var7": "var7 set in playbook"
}

TASK [print var8] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var8": "var8 set on the command line"
}

TASK [print var9] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var9": "var9 set on the command line"
}

PLAY RECAP **********************************************************************************************************************************
127.0.0.1                  : ok=10   changed=0    unreachable=0    failed=0   

Great! But you can probably spot where this is heading … changing lots of parameters on the command line is a real pain and you are almost guaranteed to introduce a typo on the way.

Passing parameters as a JSON file

As per the documentation reference I mentioned earlier, I can use a JSON document to set all parameters. For this example, I’ll use the following file:

$ cat parameters.json 
{
    "var1" : "var1 as defined in the JSON parameter file",
    "var2" : "var2 as defined in the JSON parameter file",
    "var3" : "var3 as defined in the JSON parameter file",
    "var4" : "var4 as defined in the JSON parameter file",
    "var5" : "var5 as defined in the JSON parameter file",
    "var6" : "var6 as defined in the JSON parameter file",
    "var7" : "var7 as defined in the JSON parameter file",
    "var8" : "var8 as defined in the JSON parameter file",
    "var9" : "var9 as defined in the JSON parameter file"
} 

With the file in place, I can easily reference it as shown in the next example:

$ ansible-playbook -i 127.0.0.1, -v main.yml --extra-vars "@parameters.json"

You include the JSON file containing all your parameters using the same command line argument, e.g. --extra-vars. Note, however, that this time you use the “at” sign followed immediately by the filename.

The result is exactly what I wanted:

$ ansible-playbook -i 127.0.0.1, -v main.yml --extra-vars "@parameters.json"
Using /etc/ansible/ansible.cfg as config file

PLAY [localhost] ****************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************
ok: [127.0.0.1]

TASK [print var1] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var1": "var1 as defined in the JSON parameter file"
}

TASK [print var2] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var2": "var2 as defined in the JSON parameter file"
}

TASK [print var3] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var3": "var3 as defined in the JSON parameter file"
}

TASK [print var4] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var4": "var4 as defined in the JSON parameter file"
}

TASK [print var5] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var5": "var5 as defined in the JSON parameter file"
}

TASK [print var6] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var6": "var6 as defined in the JSON parameter file"
}

TASK [print var7] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var7": "var7 as defined in the JSON parameter file"
}

TASK [print var8] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var8": "var8 as defined in the JSON parameter file"
}

TASK [print var9] ***************************************************************************************************************************
ok: [127.0.0.1] => {
    "var9": "var9 as defined in the JSON parameter file"
}

PLAY RECAP **********************************************************************************************************************************
127.0.0.1                  : ok=10   changed=0    unreachable=0    failed=0 

Summary

Using a JSON file to pass parameters to an Ansible playbook is a great way of testing (parameter-driven) changes to your code without having to worry about your version control system wanting to check in changes to your variable definitions. I tend to stick with defaults in my playbooks, and use --extra-vars to deviate from them if needed and to deal with edge cases.

The extra benefit to me is that I can add the JSON file containing all the parameters to .gitignore as well.
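
Assuming the parameter file lives at the top of the repository, that’s a one-liner:

$ echo "parameters.json" >> .gitignore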

Happy automating!

Initialising PL/SQL associative arrays in 18c and later

I can never remember how to initialise PL/SQL associative arrays and thought I’d write a short post about it. This is primarily based on an article on Oracle’s Ask Tom site, plus a little extra detail from Steven Feuerstein. Associative arrays were previously known as index-by tables, by the way.

Associative arrays before 18c

Prior to Oracle 18c, you had to initialise an associative array in a slightly cumbersome way, like so:

DECLARE
    TYPE array_t IS
        TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;
    v_the_array   array_t;
    v_index       PLS_INTEGER;
BEGIN
    v_the_array(1) := 'one';
    v_the_array(2) := 'two';
    v_the_array(3) := 'three';
    v_the_array(9) := 'nine';
    v_index := v_the_array.first;
    WHILE ( v_index IS NOT NULL ) LOOP
        dbms_output.put_line('v_the_array(' || v_index || '): ' || v_the_array(v_index));
        v_index := v_the_array.next(v_index);
    END LOOP;
END;
/

There are slight variations on the theme, as explained in example 5-3 of the Oracle Database PL/SQL Language Reference, which you might find interesting. You end up having to repeat yourself a lot, as you can see immediately below the BEGIN keyword.

Oracle 18c simplifies this task

With 18c the syntax reminds me a little of Perl: you define the type first; in the next step you declare a variable of that type in your anonymous code block and initialise it in the same step:

DECLARE
    TYPE array_t IS
        TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;

    -- this is new for 18c, see Steven Feuerstein's article here:
    -- https://blogs.oracle.com/oraclemagazine/easy-initializing-for-records-and-arrays
    v_the_array   array_t := array_t(
        1 => 'one', 
        2 => 'two', 
        3 => 'three',    -- note gap here ;)
        9 => 'nine');

    v_index       PLS_INTEGER;
BEGIN
    v_index := v_the_array.first;

    WHILE ( v_index IS NOT NULL ) LOOP
        dbms_output.put_line('v_the_array(' || v_index || '): ' || v_the_array(v_index));
        v_index := v_the_array.next(v_index);
    END LOOP;

END;
/ 

This way you can define the array in a much nicer looking way and with less code. I also find it more readable.

You are of course not limited to using PLS_INTEGER for indexing the array; you can index by other types, such as strings:

DECLARE
    TYPE capitals_t IS
        TABLE OF VARCHAR2(100) INDEX BY VARCHAR2(100);
    v_capitals   capitals_t := capitals_t(
        'France' => 'Paris', 
        'Belgium' => 'Brussels', 
        'Austria' => 'Vienna');

    v_index      VARCHAR2(100);
BEGIN
    v_index := v_capitals.first;
    WHILE ( v_index IS NOT NULL ) LOOP
        dbms_output.put_line('The capital of ' || v_index || ' is ' 
                             || v_capitals(v_index));

        v_index := v_capitals.next(v_index);
    END LOOP;

END;
/ 

Once I wrapped my head around this it all made sense, and I think I’ll use this syntax from now on.

Vagrant tips’n’tricks: changing /etc/hosts automatically for Oracle Universal Installer

Oracle Universal Installer, or OUI for short, doesn’t at all like it if the hostname resolves to an IP address in the 127.0.0.0/8 range. At best it complains; at worst it starts installing and configuring software only to abort and bury the real cause deep in the logs.

I am a great fan of HashiCorp’s Vagrant as you might have guessed reading some of the previous articles, and as such wanted a scripted solution to changing the hostname to something more sensible before I begin provisioning software. I should probably add that I’m using my own base boxes; the techniques in this post should equally apply to other boxes as well.

Each of the Vagrant VMs I’m creating is given a private network for communication with its peers. This is mainly done to prevent me from having to deal with port forwarding on the NAT device. If you haven’t used Vagrant before you might not know that by default, each Vagrant VM will come up with a single NIC that has to use NAT. The end goal for this post is to ensure that my VM’s hostname maps to the private network’s IP address, not 127.0.0.1 as it would normally do.

Setting the scene

By default, Vagrant doesn’t seem to mess with the hostname of the VM. This can be changed by using a configuration variable. Let’s start with the Vagrantfile for my Oracle Linux 7 box:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "ol7guest" do |u|
    # this is a base box I created and stored locally
    u.vm.box = "oracleLinux7Base"

    u.ssh.private_key_path = "/path/to/key"

    u.vm.hostname = "ol7guest"
    u.vm.network "private_network", ip: "192.168.56.204"

    u.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.name = "ol7guest"
      v.cpus = 1
    end
  end
end 

Please ignore the fact that my Vagrantfile is slightly more complex than it needs to be. I do like having speaking names for my VMs, rather than “default” showing up in vagrant status. Using this terminology in the Vagrantfile also makes it easier to add more VMs to the configuration should I so need.

Apart from what you just read, the only remarkable thing to mention about this file is this line:

    u.vm.hostname = "ol7guest"

As per the Vagrant documentation, I can use this directive to set the hostname of the VM. And indeed, it does:

$ vagrant ssh ol7guest
Last login: Thu Jan 09 21:14:59 2020 from 10.0.2.2
[vagrant@ol7guest ~]$  

The hostname is set; however, it resolves to 127.0.0.1 as per /etc/hosts:

[vagrant@ol7guest ~]$ cat /etc/hosts
127.0.0.1    ol7guest    ol7guest
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 

Not quite what I had in mind, but apparently expected behaviour. So the next step is to change the first line in /etc/hosts to match the private IP address I assigned to the second NIC. As an Ansible fan I am naturally leaning towards using a playbook, but I also understand that not everyone has Ansible installed on the host and using the ansible_local provisioner might take longer than necessary unless your box has Ansible pre-installed.

The remainder of this post deals with an Ansible solution and the least common denominator, the shell provisioner.

Using an Ansible playbook

Many times I’m using Ansible playbooks to deploy software to Vagrant VMs anyway, so embedding a little piece of code into my playbooks to change /etc/hosts isn’t a lot of work. The first step is to amend the Vagrantfile to reference the Ansible provisioner. One possible way to do this in the context of my example is this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "ol7guest" do |u|
    # this is a base box I created and stored locally
    u.vm.box = "oracleLinux7Base"

    u.ssh.private_key_path = "/path/to/key"

    u.vm.hostname = "ol7guest"
    u.vm.network "private_network", ip: "192.168.56.204"

    u.vm.provision "ansible" do |ansible|
      ansible.playbook = "change_etc_hosts.yml"
      ansible.verbose = "v"
    end

    u.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.name = "ol7guest"
      v.cpus = 1
    end
  end
end  

It is mostly the same file with the addition of the call to Ansible. As you can imagine the playbook is rather simple:

---
- hosts: ol7guest
  become: yes
  tasks:
  - name: change /etc/hosts
    lineinfile:
      path: '/etc/hosts'
      regexp: '.*ol7guest.*' 
      line: '192.168.56.204   ol7guest.example.com   ol7guest' 
      backup: yes

It uses the lineinfile module to find the line containing ol7guest and replace it with the “correct” IP address. The resulting hosts file is exactly what I need:

[vagrant@ol7guest ~]$ cat /etc/hosts
192.168.56.204   ol7guest.example.com   ol7guest
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[vagrant@ol7guest ~]$ 

The first line of the original file has been replaced with the private IP which should enable OUI to progress past this potential stumbling block.
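
A quick check from inside the VM confirms it; the getent output should look something like this:

[vagrant@ol7guest ~]$ getent hosts ol7guest
192.168.56.204  ol7guest.example.com ol7guest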

Using the shell provisioner

The second solution involves the shell provisioner, which – unlike Ansible – isn’t distribution agnostic and needs to be tailored to the target platform. On Oracle Linux, the following worked for me:

# -*- mode: ruby -*-
# vi: set ft=ruby :

$script = <<-SCRIPT
/usr/bin/cp /etc/hosts /root && \
/usr/bin/sed -i -e '/ol7guest/d' /etc/hosts && \
/usr/bin/echo '192.168.56.204 ol7guest.example.com ol7guest' >> /etc/hosts
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.define "ol7guest" do |u|
    # this is a base box I created and stored locally
    u.vm.box = "oracleLinux7Base"

    u.ssh.private_key_path = "/path/to/key"

    u.vm.hostname = "ol7guest"
    u.vm.network "private_network", ip: "192.168.56.204"

    u.vm.provision "shell", inline: $script

    u.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.name = "ol7guest"
      v.cpus = 1
    end
  end
end 

The script copies /etc/hosts to root’s home directory and then changes it to match my needs. At the end, the file is in exactly the shape I need it to be in.

Summary

Whether you go with the shell provisioner or embed the change to the hostname in an (existing) Ansible playbook doesn’t matter much. I would definitely argue in support of having the code embedded in a playbook if that’s what will provision additional software anyway. If installing Ansible on the host isn’t an option, using the shell as a fallback mechanism is perfectly fine, too. Happy hacking!