Author Archives: Martin Bach

About Martin Bach

Oracle DBA and Linux enthusiast, part-time author and presenter.

Ansible tips’n’tricks: provision multiple machines in parallel with Vagrant and Ansible

Vagrant is a great tool that I’m regularly using for building playground environments on my laptop. I recently came across a slight inconvenience with Vagrant’s VirtualBox provider: occasionally I would like to spin up a Data Guard environment and provision both VMs in parallel to save time. Sadly, according to the documentation, you can’t bring up multiple machines in parallel using the VirtualBox provider. This was true as of April 11, 2019 and might change in the future, so keep an eye on the reference.

I very much prefer to save time by doing things in parallel, so I started digging around to find out how I could achieve this goal.

The official documentation mentions something that looks like a for loop that waits for all machines to be up before running the provisioner (sketched below). This isn’t really an option for me: I wanted more control over machine names and IP addresses. So I came up with the approach described in this post; it may not be the best, but it falls into the “good enough for me” category.
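For reference, the documented pattern looks roughly like this – a sketch from memory rather than the exact example from the docs, with box and network settings omitted:

Vagrant.configure("2") do |config|
  (1..2).each do |i|
    config.vm.define "server#{i}" do |node|
      node.vm.hostname = "server#{i}"
      # only attach the provisioner to the last machine and let it target all hosts
      if i == 2
        node.vm.provision "ansible" do |ansible|
          ansible.limit = "all"
          ansible.playbook = "hello.yml"
        end
      end
    end
  end
end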


The Vagrantfile is actually quite simple and might remind you of a previous article:

  1 Vagrant.configure("2") do |config|
  2   config.ssh.private_key_path = "/path/to/key"
  4   config.vm.define "server1" do |server1|
  5 = "ansibletestbase"
  6     server1.vm.hostname = "server1"
  7 "private_network", ip: ""
  8     server1.vm.synced_folder "/path/to/stuff", "/mnt",
  9       mount_options: ["uid=54321", "gid=54321"]
 11     config.vm.provider "virtualbox" do |vb|
 12       vb.memory = 2048
 13       vb.cpus = 2
 14     end
 15   end
 17   config.vm.define "server2" do |server2|
 18 = "ansibletestbase"
 19     server2.vm.hostname = "server2"
 20 "private_network", ip: ""
 21     server2.vm.synced_folder "/path/to/stuff", "/mnt",
 22       mount_options: ["uid=54321", "gid=54321"]
 24     config.vm.provider "virtualbox" do |vb|
 25       vb.memory = 2048
 26       vb.cpus = 2
 27     end
 28   end
 30   config.vm.provision "ansible" do |ansible|
 31     ansible.playbook = "hello.yml"
 32     ansible.groups = {
 33       "oracle_si" => ["server[1:2]"],
 34       "oracle_si:vars" => { 
 35         "install_rdbms" => "true",
 36         "patch_rdbms" => "true",
 37       }
 38     }
 39   end
 41 end

Ansibletestbase is my custom Oracle Linux 7 image that I keep updated for personal use. I define a couple of machines, server1 and server2, and from line 30 onwards let Ansible provision them.

A little bit of an inconvenience

Now here is the inconvenient bit: if I provided an elaborate playbook to provision Oracle in line 31 of the Vagrantfile, it would be run serially: first for server1, and only after it completed (or failed…) would server2 be created and provisioned. This is the reason for a rather plain playbook, hello.yml:

$ cat hello.yml 
- hosts: oracle_si
  tasks:
  - name: say hello
    debug: var=ansible_hostname

This literally takes no time at all to execute, so no harm is done running it serially once per VM. Not only is no harm done, quite the contrary: Vagrant discovered the Ansible provisioner in the Vagrantfile and created a suitable inventory file for me. I’ll gladly use it later.

How does this work out?

Enough talking, time to put this to the test and bring up both machines. As you can see in the captured output, they start one by one; each runs the provisioner before Vagrant proceeds to the next system.

$ vagrant up 
Bringing machine 'server1' up with 'virtualbox' provider...
Bringing machine 'server2' up with 'virtualbox' provider...
==> server1: Importing base box 'ansibletestbase'...
==> server1: Matching MAC address for NAT networking...


==> server1: Running provisioner: ansible...


    server1: Running ansible-playbook...

PLAY [oracle_si] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server1]

TASK [say hello] ***************************************************************
ok: [server1] => {
    "ansible_hostname": "server1"

PLAY RECAP *********************************************************************
server1                    : ok=2    changed=0    unreachable=0    failed=0   

==> server2: Importing base box 'ansibletestbase'...
==> server2: Matching MAC address for NAT networking...


==> server2: Running provisioner: ansible...

    server2: Running ansible-playbook...

PLAY [oracle_si] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server2]

TASK [say hello] ***************************************************************
ok: [server2] => {
    "ansible_hostname": "server2"

PLAY RECAP *********************************************************************
server2                    : ok=2    changed=0    unreachable=0    failed=0

As always, the Ansible provisioner created an inventory file I can use in ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory. The inventory looks exactly as described in the |ansible| block, and it has the all-important global variables as well.

$ cat ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant
server2 ansible_host= ansible_port=2201 ansible_user='vagrant' ansible_ssh_private_key_file='/path/to/key'
server1 ansible_host= ansible_port=2200 ansible_user='vagrant' ansible_ssh_private_key_file='/path/to/key'



After ensuring that both machines are up using the “ping” module, I can run the actual playbook. You might have to confirm the servers’ SSH keys the first time you run this:

$ ansible -i ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -m ping oracle_si
server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

All good to go! Let’s call the actual playbook to provision my machines.

$ ansible-playbook -i ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory provisioning/oracle.yml 

TASK [Gathering Facts] ***************************************************************
ok: [server1]
ok: [server2]

TASK [test for Oracle Linux 7] *******************************************************
skipping: [server1]
skipping: [server2]

And we’re off to the races. Happy automating!


Ansible tips’n’tricks: testing and debugging Ansible scripts using Vagrant

At last year’s UKOUG I presented about Vagrant and how to use this great piece of software to test and debug Ansible scripts easily. Back then in December I promised a write-up, but for various reasons only now got around to finishing it.

Vagrant’s Ansible Provisioner

Vagrant offers two different Ansible provisioners: “ansible” and “ansible_local”. The “ansible” provisioner depends on a local Ansible installation on the host. If this isn’t feasible, you can use “ansible_local” instead; as the name implies, it executes code on the VM instead of on the host. This post is about the “ansible” provisioner.

Most people use Vagrant with the default VirtualBox provider, and so do I in this post.

A closer look at the Vagrantfile

It all starts with a Vagrantfile; a quick “vagrant init” will get you one. The test image I use for deploying the Oracle database comes with all the block devices and packages needed, saving me quite some time. Naturally I’ll start with that one.

$ cat -n Vagrantfile 
     1    # -*- mode: ruby -*-
     2    # vi: set ft=ruby :
     4    Vagrant.configure("2") do |config|
     6      config.ssh.private_key_path = "/path/to/ssh/key"
     8 = "ansibletestbase"
     9      config.vm.define "server1" do |server1|
    10 = "ansibletestbase"
    11        server1.vm.hostname = "server1"
    12 "private_network", ip: ""
    14        config.vm.provider "virtualbox" do |vb|
    15          vb.memory = 2048
    16          vb.cpus = 2 
    17        end 
    18      end 
    20      config.vm.provision "ansible" do |ansible|
    21        ansible.playbook = "blogpost.yml"
    22        ansible.groups = { 
    23          "oracle_si" => ["server1"],
    24          "oracle_si:vars" => { 
    25            "install_rdbms" => "true",
    26            "patch_rdbms" => "true",
    27            "create_db" => "true"
    28          }   
    29        }   
    30      end 
    32    end

Since I have decided to create my own custom image without relying on the “insecure key pair” I need to keep track of my SSH keys. This is done in line 6. Otherwise there wouldn’t be an option to connect to the system and Vagrant couldn’t bring the VM up.

Lines 8 to 18 define the VM – which image to derive it from, and how to configure it. The settings are pretty much self-explanatory so I won’t go into too much detail. Only this much:

  • I usually want a host-only network instead of just a NAT device, and I create one in line 12. The IP address maps to an address on vboxnet0 in my configuration. If you don’t have a host-only network and want one, you can create it in VirtualBox’s preferences.
  • In lines 14 to 17 I set some properties of my VM. I want it to come up with 2 GB of RAM and 2 CPUs.

Integrating Ansible into the Vagrantfile

The Ansible configuration is found on lines 20 to 30. As soon as the VM comes up I want Vagrant to run the Ansible provisioner and execute my playbook named “blogpost.yml”.

Most of my playbooks rely on global variables I define in the inventory file. Vagrant will create an inventory for me when it finds an Ansible provisioner in the Vagrantfile. The inventory it creates out of the box doesn’t quite fit my needs, but that is easy to change: recent Vagrant versions allow me to create the inventory just as I need it. You see this in lines 22 to 28. The resulting inventory file is created in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory and looks like this:

$ cat .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory 
# Generated by Vagrant

server1 ansible_host= ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/path/to/ssh/key'



That’s exactly what I’d use if I manually edited the inventory file, except that I don’t need to use “vagrant ssh-config” to figure out what the current SSH configuration is.

I define a group of hosts, and a few global variables for my playbook. This way, all I need to do to control the execution of my playbook is change the Vagrantfile, rather than maintaining information in two places (Vagrantfile and static inventory).
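The group definitions from the |ansible| block end up in the same generated file. They aren’t visible in the capture above, but based on the Vagrantfile they should look roughly like this (a sketch):

[oracle_si]
server1

[oracle_si:vars]
install_rdbms=true
patch_rdbms=true
create_db=true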

Ansible Playbook

The final piece of information is the actual Ansible playbook. To keep the example simple I’m not going to use the inventory’s variables, except for the host group.

$ cat -n blogpost.yml 
     1    ---
     2    - name: blogpost
     3      hosts: oracle_si
     4      vars:
     5      - oravg_pv: /dev/sdb
     6      become: yes
     7      tasks:
     8      - name: say hello
     9        debug: msg="hello from {{ ansible_hostname }}"
    11      - name: partitioning PVs for the volume group
    12        parted:
    13          device: "{{ oravg_pv }}"
    14          number: 1
    15          state: present
    16          align: optimal
    17          label: gpt

Expressed in plain English, it reads: take the block device indicated by the variable oravg_pv and create a single partition on it spanning the entire device.

As soon as I “vagrant up” the VM, it all comes together:

$ vagrant up
Bringing machine 'server1' up with 'virtualbox' provider…
==> server1: Importing base box 'ansibletestbase'…
==> server1: Matching MAC address for NAT networking…
==> server1: Setting the name of the VM: blogpost_server1_1554188252201_2080
==> server1: Clearing any previously set network interfaces…
==> server1: Preparing network interfaces based on configuration…
server1: Adapter 1: nat
server1: Adapter 2: hostonly
==> server1: Forwarding ports…
server1: 22 (guest) => 2222 (host) (adapter 1)
==> server1: Running 'pre-boot' VM customizations…
==> server1: Booting VM…
==> server1: Waiting for machine to boot. This may take a few minutes…
server1: SSH address:
server1: SSH username: vagrant
server1: SSH auth method: private key
==> server1: Machine booted and ready!
==> server1: Checking for guest additions in VM…
==> server1: Setting hostname…
==> server1: Configuring and enabling network interfaces…
server1: SSH address:
server1: SSH username: vagrant
server1: SSH auth method: private key
==> server1: Mounting shared folders…
server1: /vagrant => /home/martin/vagrant/blogpost
==> server1: Running provisioner: ansible…

Vagrant has automatically selected the compatibility mode '2.0' according to the Ansible version installed (2.7.7).

Alternatively, the compatibility mode can be specified in your Vagrantfile:

server1: Running ansible-playbook...

PLAY [blogpost] *************************************************************

TASK [Gathering Facts] ******************************************************
ok: [server1]

TASK [say hello] ************************************************************
ok: [server1] => {
"msg": "hello from server1"

TASK [partitioning PVs for the volume group] ********************************
changed: [server1]

PLAY RECAP ******************************************************************
server1 : ok=3 changed=1 unreachable=0 failed=0
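If you want to double-check the result on the guest, something along these lines should do – a sketch, with the device name taken from the oravg_pv variable:

$ vagrant ssh server1 -c "sudo parted /dev/sdb print"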

Great! But I forgot to partition /dev/sd[cd] in the same way as I partitioned /dev/sdb! That’s a quick fix:

- name: blogpost
  hosts: oracle_si
  vars:
  - oravg_pv: /dev/sdb
  - asm_disks:
      - /dev/sdc
      - /dev/sdd
  become: yes
  tasks:
  - name: say hello
    debug: msg="hello from {{ ansible_hostname }}"

  - name: partitioning PVs for the Oracle volume group
    parted:
      device: "{{ oravg_pv }}"
      number: 1
      state: present
      align: optimal
      label: gpt

  - name: partition block devices for ASM
    parted:
      device: "{{ item }}"
      number: 1
      state: present
      align: optimal
      label: gpt
    loop: "{{ asm_disks }}"

Re-running the provisioning script couldn’t be easier: Vagrant has a command for this, “vagrant provision”, which re-runs the provisioning code against a VM. A quick “vagrant provision” later, my system is configured exactly the way I want it:

$ vagrant provision
==> server1: Running provisioner: ansible...
Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.7.7).

Alternatively, the compatibility mode can be specified in your Vagrantfile:

    server1: Running ansible-playbook...

PLAY [blogpost] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server1]

TASK [say hello] ***************************************************************
ok: [server1] => {
    "msg": "hello from server1"

TASK [partitioning PVs for the Oracle volume group] ****************************
ok: [server1]

TASK [partition block devices for ASM] *****************************************
changed: [server1] => (item=/dev/sdc)
changed: [server1] => (item=/dev/sdd)

PLAY RECAP *********************************************************************
server1                    : ok=4    changed=1    unreachable=0    failed=0   

This is it! Using just a few commands I can spin up VMs, test my Ansible scripts, and, once I’m happy with them, check the code into source control.

Using dbca to create a physical standby database

While investigating the new dbca options I discovered for the previous article, I noticed that it’s now possible to use Oracle’s Database Configuration Assistant (dbca) to create a physical standby database using the -createDuplicateDB flag.

I wanted to know how easily this can be done on my Oracle 18.5.0 system. I have two VMs, server3 and server4, running Oracle Linux 7.6. I created the primary database on server3. The database name is NCDB with the db_unique_name set to SITEA. I also use SITEA as the database SID to prevent confusion. My physical standby database with db_unique_name SITEB will reside on server4.

The setup is kept as simple as possible to avoid any unintended complications. In other words, this is a lab setup, not something you’d entertain using for any other purpose. As always, this is a tech post and not a piece about licencing. It is up to you to ensure you aren’t performing any tasks that you aren’t licensed for.

Creating the primary database

The primary database can be created using dbca as always. Unlike the last post, this one uses ASM rather than a file system, to show how easy it is to do both.

The primary database is a non-Container Database (NCDB). I intend to create the primary instance as SITEA (Site A) with the standby named SITEB. This way no one gets confused when the database runs in the primary role in the designated disaster recovery location. I haven’t seen gdbName <> SID/db_unique_name in many single-instance deployments and appreciate that this might be unfamiliar, but bear with me. I’ll try to make this as transparent as possible.

Invoking dbca

Enough talking, let’s create the database:

[oracle@server3 ~]$ dbca -silent -createDatabase \
> -gdbName NCDB \
> -templateName lab_template.dbc \
> -sid SITEA \
> -enableArchive true -archiveLogMode auto \
> -memoryMgmtType auto_sga \
> -createAsContainerDatabase false \
> -recoveryAreaDestination +RECO \
> -totalMemory 2048 \
> -datafileDestination +DATA \
> -initParams db_unique_name=SITEA,dg_broker_start=true \
> -customScripts add_srls_dbca.sql


Prepare for db operation
10% complete
Registering database with Oracle Restart
14% complete
Copying database files
43% complete
Creating and starting Oracle instance
45% complete
49% complete
53% complete
56% complete
62% complete
Completing Database Creation
68% complete
70% complete
71% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
Database Information:
Global Database Name:SITEA
System Identifier(SID):SITEA
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/SITEA/SITEA1.log" for further details.

A few words of explanation: as I said, I use SITEA as the db_unique_name and the database’s SID. In Oracle Managed Files (OMF) deployments a file name references the db_unique_name, and I wanted my data files, temp files and log files all to be in +DATA/SITEA/. db_unique_name defaults to db_name, and not changing it would have resulted in the database being created in +DATA/NCDB, which is not what I wanted.

I decided to include starting the Data Guard broker as part of the initialisation parameters. Archiving is enabled right from the start as well.

The custom script you see referenced creates the necessary number of standby redo logs. This will save me a bit of work, because RMAN is smart: when duplicating the database for use as a standby it creates standby redo logs (SRLs) for me if it finds SRLs in the target database. One less thing for me to worry about.

Implications of using gdbname and sid (+ db_unique_name) in dbca

If you haven’t used a SID <> gdbName combination before, you might feel a little unsure about how Oracle creates things. First of all, it created the database with the db_name you passed to dbca as gdbname; in this post that’s NCDB. Normally, if you grep’d the operating system’s process table for NCDB you would find all the instance’s background processes. With my dbca command (where an ORACLE_SID <> gdbname is provided) you won’t find anything, though. Grep for the ORACLE_SID “SITEA” instead. Likewise, if you want to create a bequeath connection to the database, set your ORACLE_SID to SITEA.
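In shell terms that boils down to something like this (a quick sketch):

$ ps -ef | grep '[o]ra_pmon_SITEA'   # instance processes carry the SID, not the db_name
$ export ORACLE_SID=SITEA            # needed for a bequeath connection
$ sqlplus / as sysdba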

Since this is Oracle Restart the database is registered with Grid Infrastructure under its db_unique_name/ORACLE_SID combination in the RDBMS home:

[oracle@server3 ~]$ srvctl config database
[oracle@server3 ~]$ srvctl config database -db SITEA
Database unique name: SITEA
Database name: NCDB
Oracle home: /u01/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/SITEA/PARAMETERFILE/spfile.273.1003226559
Password file:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
OSDBA group:
OSOPER group:
Database instance: SITEA
[oracle@server3 ~]$

Interacting with the database via srvctl requires you to use the db_unique_name SITEA as shown.
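For example – just a sketch, the usual srvctl verbs all work the same way:

$ srvctl status database -db SITEA
$ srvctl stop database -db SITEA
$ srvctl start database -db SITEA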

This is my preferred way to set things up these days. I’d like to invite you to critically reflect on the approach I took. I prefer it over other designations such as “PROD” and “STDBY”.

Creating the standby database

With the primary database created, archivelog mode enabled and a working Fast Recovery Area in place, I can move on to creating the physical standby database.

Before I do this, a few small changes have to be made to my network configuration. I always add TNS entries for both the primary and the standby database to the “primary” host’s $ORACLE_HOME/network/admin/tnsnames.ora file and ship this file across to the “standby” host; they are needed later.
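For illustration, the two entries might look roughly like this – a sketch in which host names, port and service names reflect my lab assumptions, so adjust them to your environment:

SITEA =
      (ADDRESS = (PROTOCOL = TCP)(HOST = server3)(PORT = 1521))
      (SERVICE_NAME = SITEA)

      (ADDRESS = (PROTOCOL = TCP)(HOST = server4)(PORT = 1521))
      (SERVICE_NAME = SITEB)

With the preparation done, I can proceed to calling dbca: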

[oracle@server4 ~]$ dbca -silent -createDuplicateDB \
> -gdbName NCDB \
> -sid SITEB \
> -initParams db_create_file_dest=+DATA,db_recovery_file_dest=+RECO,dg_broker_start=true \
> -primaryDBConnectionString server3:1521/SITEA \
> -createAsStandby \
> -dbUniqueName SITEB

[ ... ]

Prepare for db operation
22% complete
Listener config step
44% complete
Auxiliary instance creation
67% complete
RMAN duplicate
89% complete
Post duplicate database operations
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/SITEB/SITEB1.log" for further details.

So far so good. What did dbca do? And more importantly, how did it create the standby database? I didn’t have a backup of NCDB/SITEA at the time I called dbca on server4 so it must have performed a duplicate … from active database. Looking at the dbca trace in $ORACLE_BASE/cfgtoollogs/dbca/SITEB I can confirm this. After wading through a lot of messages, here’s the relevant bit:

INFO: Mar 11, 2019 12:14:04 PM oracle.assistants.dbca.driver.backend.steps.RmanDuplicateStep executeImpl
INFO: Running rman duplicate command:run {
set newname for datafile 1 to new;
set newname for datafile 3 to new;
set newname for datafile 4 to new;
set newname for datafile 7 to new;
duplicate target database
for standby
from active database

This might be an issue for some users because it requires a direct connection to the source database. I don’t know if RMAN might use the “from service … using backupset …” clause if you provided an offline backup.

The files I care about are all in the right place:

SQL> select name from v$datafile union all
2 select name from v$tempfile union all
3 select member from v$logfile union all
4 select value from v$parameter2 where name = 'control_files';


21 rows selected.

As reported by other bloggers, dbca creates a “temporary” listener that it doesn’t remove after the database creation process is completed. I specifically wanted to see if this is the case with Oracle Restart as well and yes, it is:

[oracle@server4 SITEB]$ ps -ef | grep tnslsnr
oracle 1125 1 0 10:27 ? 00:00:00 /u01/app/oracle/product/18.0.0/dbhome_1/bin/tnslsnr LISTENER20190318102721 -inherit
grid 5837 1 0 09:01 ? 00:00:00 /u01/app/grid/product/18.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle 7588 31809 0 10:44 pts/1 00:00:00 grep --color=auto tnslsnr

The extra listener didn’t survive a reboot, and the processes should disappear eventually. dbca does persist the configuration into $ORACLE_HOME/network/admin/listener.ora though, and those entries should probably be removed. This is especially true with Oracle Restart (and RAC for that matter) because I want all listeners to be controlled from the GRID home, not the RDBMS home.

There are other leftover files in $ORACLE_HOME/dbs as well, which I hope dbca will remove one day.

On the plus side dbca registers the database in Oracle Restart for me as part of the work it did:

[oracle@server4 ~]$ srvctl config database -db SITEB
Database unique name: SITEB
Database name: NCDB
Oracle home: /u01/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/SITEB/PARAMETERFILE/spfile.275.1003228201
Password file: +DATA/SITEB/orapwsiteb
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
OSDBA group:
OSOPER group:
Database instance: SITEB

It’s good to see that I don’t need to worry about storing the password file and spfile in ASM myself, so that job is already taken care of. A few other things need to be changed though. The database is still considered a primary (which I know it isn’t) and the start option is therefore wrong, too. There are licensing implications to opening a standby read-only rather than just mounting it when turning on redo apply. Refer to the documentation for more details.

Out of habit I changed the role to PHYSICAL_STANDBY and the start option to MOUNT. This is what I ended up with in my lab; YMMV, depending on how you are licensed.

[oracle@server4 ~]$ srvctl config database -db SITEB
Database unique name: SITEB
Database name: NCDB
Oracle home: /u01/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/SITEB/PARAMETERFILE/spfile.275.1003228201
Password file: +DATA/SITEB/orapwsiteb
Start options: mount
Stop options: immediate
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
OSDBA group:
OSOPER group:
Database instance: SITEB
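For reference, the role and start option can be changed with srvctl, roughly along these lines (a sketch, not the exact commands I captured at the time):

$ srvctl modify database -db SITEB -role PHYSICAL_STANDBY -startoption MOUNT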

Creating the Broker configuration

The standby redo logs have already been created by RMAN, and with dg_broker_start set to true by dbca for both databases I can now create a Data Guard Broker configuration. With Oracle Restart I don’t have to statically register the databases with the listener anymore: Clusterware takes care of starting/stopping them during role changes.

[oracle@server3 ~]$ dgmgrl
DGMGRL for Linux: Release - Production on Mon Mar 18 11:11:55 2019 Version

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@SITEA
Connected to "SITEA"
Connected as SYSDBA.
DGMGRL> create configuration autoconf
> primary database is 'SITEA'
> connect identifier is 'SITEA';
Configuration "autoconf" created with primary database "SITEA"
DGMGRL> add database 'SITEB' as connect identifier is 'SITEB';
Database "SITEB" added
DGMGRL> edit database 'SITEA' set property standbyfilemanagement=auto;
Property "standbyfilemanagement" updated
DGMGRL> edit database 'SITEB' set property standbyfilemanagement=auto;
Property "standbyfilemanagement" updated
DGMGRL> enable configuration
DGMGRL> show configuration

Configuration - autoconf

Protection Mode: MaxPerformance
SITEA - Primary database
SITEB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 17 seconds ago)

You might want to increase the protection level from MaxPerformance to something that better suits your requirements. There might be additional changes you want to make within the broker as well. The output you just saw is the bare minimum I put into place.
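For example, raising the protection mode to MaxAvailability might look roughly like this in DGMGRL – a sketch only, as it requires synchronous redo transport and has performance implications of its own:

DGMGRL> edit database 'SITEA' set property LogXptMode='SYNC';
DGMGRL> edit database 'SITEB' set property LogXptMode='SYNC';
DGMGRL> edit configuration set protection mode as MaxAvailability;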


All in all this has been pretty straightforward, without too many hiccups along the way. If you want to, you can put a lot of these steps into your favourite automation toolkit and end up with a standby database reasonably quickly, provided your environment supports this kind of setup.

dbca now makes it easy to configure OMF on file systems

Up until – and including – Oracle 12.1 I always found it not-quite-so-straightforward to create a new database with the Database Configuration Assistant (dbca) in silent mode and configure it with Oracle Managed Files (OMF) on a file system. I really like to use OMF in my lab databases as it saves me a lot of typing. I have also seen Oracle databases deployed in the cloud on file systems without ASM. So I was quite happy to see that the syntax for dbca -silent -createDatabase was extended.

This post has been written using Oracle 18.4.0 on Linux.

Why might you want to do that?

When writing code to automatically provision databases you can’t have any interactive parts if you want an end-to-end provisioning workflow. Oracle thankfully provides options to call many GUI tools on the command line in so-called silent mode as well. This includes, but is not limited to, dbca. You can invoke the tool with a help flag, and it’ll tell you about its many options. They are so numerous that I refrain from showing them here, but you can get them from the Administration Guide.
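For example:

$ dbca -help
$ dbca -createDatabase -help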

And how does it work?

Using the minimum number of relevant options I can easily create a database using dbca and tell it to use OMF, pointing to a file system destination instead of ASM. Your choice of flags is most likely quite different from mine. This one worked for me in the lab:

[oracle@server1 ~]$ dbca -silent -createDatabase -gdbName NCDB -templateName lab_database.dbc \
-enableArchive true -archiveLogMode auto -memoryMgmtType auto_sga \
-createAsContainerDatabase false -recoveryAreaDestination /u01/fast_recovery_area \
-useOMF true -datafileDestination /u01/oradata -totalMemory 2048
Prepare for db operation
10% complete
Copying database files
40% complete
Creating and starting Oracle instance
42% complete
46% complete
50% complete
54% complete
60% complete
Completing Database Creation
66% complete
69% complete
70% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
Database Information:
Global Database Name:NCDB
System Identifier(SID):NCDB
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/NCDB/NCDB.log" for further details.
[oracle@server1 ~]$

You might have noticed the options to enable archivelog mode as well; I’ll write about these in the next post :) Oh, and the database really is created using OMF:

SQL> select name from v$tempfile union all
2 select name from v$datafile union all
3 select member from v$logfile union all
4 select value from v$parameter2 where name = 'control_files';


Final thoughts

Until now it’s been quite difficult (at least for me) to enable OMF on a file system when using dbca in CLI mode. This is no longer the case.

Ansible tips’n’tricks: understanding your Ansible configuration

When writing automation scripts I tend to use a local Ansible configuration file. This has certain advantages for me, such as including it in a version control system (VCS). It is also a valid option for developers without access to the global configuration file installed by the package manager, and it’s more convenient than setting environment variables.

WARNING: There are some very important security considerations though, which you must be aware of before using a local configuration file.

Until now I haven’t spent a lot of time thinking about configuration variables and the order of precedence, but that is exactly what I’d like to do in this post.

When putting this short article together I used Ansible 2.7.6 on Oracle Linux 7.6.

What does the documentation have to say?

The official documentation does a really good job explaining the configuration settings. You have the choice of:

  • Environment variables
  • A configuration file (ansible.cfg) in your local directory
  • A playbook-independent configuration file in ~/.ansible.cfg
  • The global settings in /etc/ansible/ansible.cfg

Let’s try this

This is an example of a very minimalist project folder in my lab, deliberately omitting a local configuration file.

[vagrant@server1 ansible]$ ls -la
total 16
drwxrwxr-x. 3 vagrant vagrant   53 Feb 14 09:09 .
drwx------. 5 vagrant vagrant 4096 Feb 14 08:40 ..
drwxrwxr-x. 7 vagrant vagrant 4096 Feb 14 09:09 .git
-rw-rw-r--. 1 vagrant vagrant   17 Feb 14 09:09 .gitignore
-rw-rw-r--. 1 vagrant vagrant   94 Feb 14 08:40 hello.yml

The example is kept super-short on purpose.


Without a local configuration file (~/.ansible.cfg or $(pwd)/ansible.cfg) and/or defined environment variables, the settings made by your friendly system administrator in /etc/ansible/ansible.cfg govern every execution. You can use ansible-config to view the non-default values:

[vagrant@server1 ansible]$ ansible-config dump --only-changed
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
[vagrant@server1 ansible]$ 

As you can see, the global configuration file prevents the use of cowsay.

Now if I as an automation developer wanted to override a certain setting, I could use an environment variable. The documentation tricked me for a moment – when you see “DEFAULT_LOG_PATH” for example, exporting that variable does not have any effect. What you need to use is shown in the same section, but further down, with the Environment: label. To set the default log path on your shell, use ANSIBLE_LOG_PATH as in this example:

[vagrant@server1 ansible]$ export ANSIBLE_LOG_PATH=/home/vagrant/ansible/logs/my_ansible.log
[vagrant@server1 ansible]$ ansible-config dump --only-changed
ANSIBLE_NOCOWS(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(env: ANSIBLE_LOG_PATH) = /home/vagrant/ansible/logs/my_ansible.log

Thankfully ansible-config shows me the origin for each setting.

Using a local configuration file

Now how does a local configuration file play into this? Let’s try! I keep the global configuration file as it is, but I unset the environment variable. Here is the result:

[vagrant@server1 ansible]$ unset ANSIBLE_LOG_PATH
[vagrant@server1 ansible]$ cat ansible.cfg 
[defaults]
stdout_callback = debug
log_path = /home/vagrant/ansible/logs/ansible_blogpost.log
[vagrant@server1 ansible]$ ansible-config dump --only-changed
DEFAULT_LOG_PATH(/home/vagrant/ansible/ansible.cfg) = /home/vagrant/ansible/logs/ansible_blogpost.log
DEFAULT_STDOUT_CALLBACK(/home/vagrant/ansible/ansible.cfg) = debug
[vagrant@server1 ansible]$ 

What’s interesting is that my “nocows” setting from the global configuration file in /etc/ansible/ansible.cfg isn’t merged into the configuration. Without the use of environment variables, the only settings coming into play are those of the local configuration file. The same seems to apply if a local configuration file exists in addition to ~/.ansible.cfg. The file in the current working directory always took precedence in my testing.

This does not affect environment variables; they have always been considered by ansible-config. If, for example, I wanted to temporarily save the logfile to a different place and was too lazy to fire up vim, I could use this approach:
[vagrant@server1 ansible]$ ANSIBLE_LOG_PATH=/home/vagrant/ansible/logs/overriding.log ansible-config dump --only-changed
DEFAULT_LOG_PATH(env: ANSIBLE_LOG_PATH) = /home/vagrant/ansible/logs/overriding.log
DEFAULT_STDOUT_CALLBACK(/home/vagrant/ansible/ansible.cfg) = debug
[vagrant@server1 ansible]$ 
Happy scripting!

Oracle Linux 7 and a new YUM configuration since January 2019

For quite some time I used to have a configuration file /etc/yum.repos.d/public-yum-ol7.repo managing my package sources in lab VMs. Today I noticed that this configuration file is deprecated, and has been replaced by a new model. This is explained in the Oracle Linux 7 administrator guide and a few other sources I totally missed. I thought I’d show you the full story in this post before I go and change all my Ansible scripts :)

State of play before

To repeat the scenario I just went through, I created a new machine, server3, based on the stock Oracle Linux 7.6 image. After its initial boot I logged in to the console to perform a yum upgrade.

This is of course only acceptable in a lab environment; anything deployed in an enterprise would use a local, change-controlled mirror based on Spacewalk or comparable software.

Prior to starting the actual upgrade, let’s have a look at how things were before the new model was rolled out. There is only a single repository configuration file present after the initial reboot and without any calls to yum whatsoever:

[ansible@server3 ~]$ ls -l /etc/yum.repos.d/
total 16
-rw-r--r--. 1 root root 13093 Nov 2 07:02 public-yum-ol7.repo
[ansible@server3 ~]$


Next I ran the upgrade command; I have removed a lot of the output for clarity:

[ansible@server3 ~]$ sudo yum upgrade
Loaded plugins: ulninfo
ol7_UEKR5 | 1.2 kB 00:00
ol7_latest | 1.4 kB 00:00
(1/5): ol7_UEKR5/x86_64/updateinfo | 27 kB 00:00
(2/5): ol7_UEKR5/x86_64/primary | 2.4 MB 00:00
(3/5): ol7_latest/x86_64/group | 659 kB 00:00
(4/5): ol7_latest/x86_64/updateinfo | 767 kB 00:00
(5/5): ol7_latest/x86_64/primary | 11 MB 00:00
ol7_UEKR5 120/120
ol7_latest 11799/11799
Resolving Dependencies
--> Running transaction check
---> Package NetworkManager.x86_64 1:1.12.0-6.el7 will be updated
---> Package NetworkManager.x86_64 1:1.12.0-8.el7_6 will be an update

[ ... more output ... ]

Updating : 1:grub2-pc-2.02- 15/85
Updating : cronie-1.4.11-20.el7_6.x86_64 16/85
Updating : cronie-anacron-1.4.11-20.el7_6.x86_64 17/85
Installing : python-chardet-2.2.1-1.el7_1.noarch 18/85
Installing : python-kitchen-1.1.1-5.el7.noarch 19/85
Installing : yum-utils-1.1.31-50.0.1.el7.noarch 20/85

IMPORTANT: A legacy Oracle Linux yum server repo file was found.
Oracle Linux yum server repository configurations have changed which
means public-yum-ol7.repo will no longer be updated. New repository
configuration files have been installed but are disabled. To complete
the transition, run this script as the root user:


See for more information.

Installing : oraclelinux-release-el7-1.0-3.el7.x86_64 21/85
Updating : rhn-client-tools-2.0.2-24.0.5.el7.x86_64 22/85
Updating : ipset-libs-6.38-3.el7_6.x86_64 23/85
Updating : selinux-policy-3.13.1-229.0.1.el7_6.6.noarch 24/85

[ ... more output ...]

The message that “A legacy Oracle Linux yum server repo file was found” is what prompted this blog post. So what is there to be done? The upgrade created a few more files in /etc/yum.repos.d/:

[ansible@server3 ~]$ ls -l /etc/yum.repos.d/
total 28
-rw-r--r--. 1 root root 3354 Jan 22 16:14 oracle-linux-ol7.repo.disabled
-rw-r--r--. 1 root root 13093 Nov 2 07:02 public-yum-ol7.repo
-rw-r--r--. 1 root root 2116 Jan 22 16:14 uek-ol7.repo.disabled
-rw-r--r--. 1 root root 226 Jan 15 21:31 virt-ol7.repo.disabled
[ansible@server3 ~]$

The files ending in *.disabled are not considered during any execution of “yum”.

Transitioning to the new model

According to Oracle’s documentation, the following steps are only needed if you have public-yum-ol7.repo in use and the new configuration files – provided by oraclelinux-release-el7 in my case – are present as well. This applies to this VM, so I decided to go ahead and call the transition script in /usr/bin to see what happens.

[root@server3 ~]# /usr/bin/
Repository ol7_UEKR5 Fine
Repository ol7_latest Fine
[root@server3 ~]# ls -l /etc/yum.repos.d/
total 28
-rw-r--r--. 1 root root 3354 Jan 22 16:14 oracle-linux-ol7.repo
-rw-r--r--. 1 root root 13093 Nov 2 07:02 public-yum-ol7.repo.sav
-rw-r--r--. 1 root root 2116 Jan 22 16:14 uek-ol7.repo
-rw-r--r--. 1 root root 226 Jan 15 21:31 virt-ol7.repo

It appears as if the script switched the all-in-one configuration for the new, modular one.


In other posts of mine I described how I keep public-yum-ol7.repo updated from Oracle’s yum server in my lab (and only in the lab!). Based on the monolithic file I can call yum-config-manager to enable and disable any repository I need. With the new modular layout I might have to rethink this approach. The Administration Guide I referred to earlier has more details about the change.
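For example, with the monolithic file in place, enabling or disabling a repository was a one-liner (the repository ID is just an example):

# yum-config-manager --enable ol7_developer_EPEL
# yum-config-manager --disable ol7_developer_EPEL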

Building your own local Oracle Linux 7 Vagrant base box

I have been talking about Vagrant for a long time and use it extensively on my Ubuntu-powered laptop. I am using Oracle Linux 7.6 for most of my lab builds, and I like to have specific tools such as collectl, perf, and many others available when the VM boots. I also like to stay in control of things, especially when it comes to downloading otherwise unknown things from the Internet, so I decided to learn how to create a Vagrant box myself.

Using Vagrant with my custom images, all I need to do is run a single command and it will spin up a clean VM using the VirtualBox provider with the exact software configuration I want. I can also supply so-called provisioners to further configure my environment. I found this particularly useful when writing and testing Ansible scripts. Sometimes I just want to go back to my starting point, but that can be tricky at times: imagine you just partitioned your block devices for use with the database and discovered you wanted to change the flow. Getting back to unpartitioned, unformatted block devices is possible, but I don’t think it’s terribly elegant. Plus I’d have to do it manually, and I prefer the Ansible approach.

Building a base box

The Vagrant documentation is pretty good, so this is mostly pulling together information from two sources: the starting point I used was the “Creating a Base Box” guide, together with the specifics for the VirtualBox provider I’m using. I don’t claim to be an expert in this field.

Running Vagrant VMs can be inherently insecure as you will see in a bit. It’s fine for me because I’m creating/trashing short-lived VMs on a regular basis and all I do is play around with them whilst they remain perfectly isolated from the rest of the world. If you are ok with this limitation feel free to read on, otherwise please refrain from following the steps in this blog post.

The overall process isn’t too hard to follow:

  • Create your gold image
    • Install the Operating System in VirtualBox
    • Install/upgrade any software you want to have available
    • Configure the system for Vagrant specifics
  • Create a base box off your gold image
  • Add the box to your environment
  • Start the VM and enjoy

Creating the VM and installing the Operating System

The first step obviously is to create the VM and install the operating system. For quite some time now I have been creating VMs with sufficient RAM and a couple of block devices: the first one is used for the root volume group, the second block device will be used for Oracle. Plenty of articles have been written about installing Oracle Linux on VirtualBox; I won’t write the 42nd variation here ;)

There are only a few things to pay attention to. These can all be found in the documentation I referenced earlier. First of all, please ensure that your network adaptor uses NAT. You can use port forwarding to access a NAT device in VirtualBox (configured later). The documentation furthermore recommends removing any unnecessary components such as USB and audio from the VM. I have used a strong password for “root” as I have no intention at all of sharing my VM. Apply security hardening at this stage.

A common error is not to enable the network device to start up automatically when the system boots. Vagrant uses port forwarding to the NAT device and SSH keys to authenticate; there doesn’t appear to be a mechanism that circumvents the network stack. With the network interface down it’s quite hard to connect via SSH.
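On Oracle Linux 7 this is a quick fix on the guest – a sketch that assumes the NAT interface and its connection are both named eth0, so adjust to your setup:

# nmcli connection modify eth0 connection.autoconnect yes

Alternatively, set ONBOOT=yes in /etc/sysconfig/network-scripts/ifcfg-eth0.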

Install/upgrade software

Once the operating system is installed and the VM rebooted, it’s time to configure it for your needs. I usually end up completing the prerequisites for an Oracle database installation. This, too, has been covered so many times that I don’t think I’d add value by repeating the steps here.

Configure the system for Vagrant

At this stage your VM should be properly configured for whichever purpose you have in mind. All that remains now is the addition of the specific configuration for Vagrant. There are a few steps to this, all to be completed on the guest.

Install VirtualBox Guest Additions

Vagrant offers the option of mounting a file system from your host on the guest VM. I very much like this feature, which is enabled by default. Please refer to the Vagrant documentation for security implications of sharing file systems between guest and host.

As with every VirtualBox VM, shared folders won’t work without the guest additions, so installing them is what I do next. This is pretty straightforward and for Oracle Linux 7 generally requires tar, bzip2, gcc and kernel-uek-devel matching your current kernel-uek. If you have just completed a “yum upgrade” and your kernel was upgraded, you need to reboot first. After the guest additions installer has completed successfully (I am using VirtualBox 5.2.x) it’s time to move on to the next step.

Add a Vagrant user

Vagrant expects a user named vagrant to be present on the VM and uses SSH keys when connecting to it. The documentation mentions a so-called insecure key pair, which I decided not to use. Instead I created a new key pair on the host specifically for use with Vagrant and added its public key to ~/.ssh/authorized_keys in the vagrant user’s home directory. If you are on macOS or Linux it’s convenient to add the key to the SSH agent (ssh-add …); there are similar tools for Windows users.

Creating the user is easy and should be completed now unless you already created the user during the initial installation:

# useradd -c 'vagrant user' -m -s $(which bash) vagrant 

The user should also have passwordless sudo enabled, as per the documentation. The Vagrant documentation furthermore recommends assigning a (weak) password to the vagrant account, which I didn’t do: I never ran the passwd command to set a password for the vagrant user and so far seem to be doing ok.
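Pulling these steps together, the configuration might look roughly like this – a sketch, with the public key path being a placeholder for the key pair mentioned above:

# echo 'vagrant ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/vagrant
# chmod 0440 /etc/sudoers.d/vagrant
# install -d -m 0700 -o vagrant -g vagrant ~vagrant/.ssh
# cat /path/to/ >> ~vagrant/.ssh/authorized_keys
# chown vagrant:vagrant ~vagrant/.ssh/authorized_keys
# chmod 0600 ~vagrant/.ssh/authorized_keys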

Create a base box

This concludes the preparations on the VM side. Next up you need to create the base box, which you can then refer to in your own Vagrantfile. The command to do so is just one line. Be careful though: it will create a compressed file (named by default) in your current working directory. This file can be rather large, so make sure you have enough space to store it.

$ vagrant package --base <your newly created VM name>

Depending on how powerful your laptop is this can take a little while.

Add the box to your environment

The previous command will complete eventually. This is the moment where you add the box to Vagrant’s local inventory as shown here:

$ vagrant box add --name blogpost /home/martin/ 

This command shouldn’t take too long to complete. If you see a line like “box: successfully added box ‘blogpost’ (v0) for ‘virtualbox’” you are good. You can assign any name to the box you add; it will later show up under that designation when you run “vagrant box list”.

Start the VM and enjoy

The remaining tasks are identical to using Vagrant boxes from the official repository. Start off with “vagrant init <your box name>” and make all the changes you normally make to the Vagrantfile. As I’m using my own SSH key I have to make sure that I’m telling Vagrant where to find it using a configuration option:

config.ssh.private_key_path = "/path/to/ssh/keyfile" 

Once you start the VM using “vagrant up” you are good to go!
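Putting it all together, a minimal Vagrantfile for the freshly added box might look like this – a sketch in which the box name and key path are placeholders:

Vagrant.configure("2") do |config|
  config.ssh.private_key_path = "/path/to/ssh/keyfile" = "blogpost"
  config.vm.hostname = "server1"
end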