
Device name persistence in the cloud: OCI + Terraform

This is a really short post (by my standards at least) demonstrating how I ensure device name persistence in Oracle Cloud Infrastructure (OCI). Device name persistence matters for many reasons, not least because my Ansible scripts expect a given block device to be of a certain size and used for a specific purpose. And I’m too lazy to write discovery code in Ansible; I just want to be able to use /dev/oracleoci/oraclevdb for LVM so that I can install the database.

The goal is to provision a VM with a sufficient number of block devices for use with the Oracle database. I wrote about the basics of device name persistence in December last year. In my earlier post I used the OCI Command Line Interface (CLI). Today I rewrote my code, switching from shell to Terraform.

As always I shall warn you that creating cloud resources as shown in this post will incur cost, so please make sure you are aware of that. You should also be authorised to spend money if you use the code for your own purposes.

Terraform Compute and Block Storage

When creating a VM in OCI, you make use of the oci_core_instance Terraform resource. Amongst the arguments you pass to it are the (operating system) image as well as the boot volume size. The boot volume is attached to the VM instance without any further input from you.

Let’s assume you have already defined a VM resource named sitea_instance in your Terraform code.

I generally attach 5 block volumes to my VMs unless performance requirements mandate a different approach.

  • Block device number 1 hosts the database binaries
  • Devices number 2 and 3 are used for +DATA
  • The remaining devices (4 and 5) will be used for +RECO

Creating block volumes

The first step is to create block volumes. I know I want five, and I know they need to end up as /dev/oracleoci/oraclevd[b-f]. Since I’m pretty lazy I thought I’d go with some kind of loop instead of hard-coding 5 block devices. It should also allow for more flexibility in the long run.
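For the record, these are the five persistent names the loop has to produce, expanded here with a throwaway shell loop purely for illustration (this is not part of the Terraform code):

```shell
# expand the target names /dev/oracleoci/oraclevd[b-f]; "a" is skipped because
# it belongs to the boot volume
devices=$(for suffix in b c d e f; do
  printf '/dev/oracleoci/oraclevd%s\n' "$suffix"
done)
printf '%s\n' "$devices"
```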

I tried to use the count meta-argument but failed to get it to work the way I wanted, which might be a PEBKAC issue. The other option in Terraform is to use the for_each meta-argument instead. This sounded a lot better for my purpose. To keep my code flexible I decided to store the future block devices’ names in a variable:

variable "block_volumes" {
    type    = list(string)
    default = [
        "oraclevdb",
        "oraclevdc",
        "oraclevdd",
        "oraclevde",
        "oraclevdf",
    ]
}

Remember that Oracle assigns /dev/oracleoci/oraclevda to the boot volume. You definitely want to leave that one alone.

Next I’ll use the for_each meta-argument to get the block device name. I’m not sure if this is considered good code; all I know is that it does the job. The Terraform entity to create block devices is named oci_core_volume:

resource "oci_core_volume" "sitea_block_volume" {
  for_each = toset(var.block_volumes)

  # assumption: the volumes live in the same availability domain
  # as the VM they will be attached to
  availability_domain  = oci_core_instance.sitea_instance.availability_domain
  compartment_id       = var.compartment_ocid
  display_name         = "sitea-${each.value}"
  size_in_gbs          = 50
}

This takes care of creating 5 block volumes. On their own they aren’t very useful yet, they need to be attached to a VM.

Attaching block devices to the VM

In the next step I have to create a block device attachment. This is where the count meta-argument failed me as I couldn’t find a way to generate the persistent device name. I got around that issue using for_each, as shown here:

resource "oci_core_volume_attachment" "sitea_block_volume_attachment" {
  for_each = toset(var.block_volumes)

  attachment_type = "iscsi"
  instance_id     = oci_core_instance.sitea_instance.id
  volume_id       = oci_core_volume.sitea_block_volume[each.value].id
  device          = "/dev/oracleoci/${each.value}"
}

Using the contents of each.value I can refer to the block volume and also assign a suitable device name. Note that I’m specifying “iscsi” as the attachment type. Instead of the remote-exec provisioner I rely on cloud-init to make my iSCSI devices available to the VM.

The result

Once the Terraform script completes, I have a VM with block storage ready for Ansible provisioning scripts.

[opc@sitea ~]$ ls -l /dev/oracleoci/
total 0
lrwxrwxrwx. 1 root root 6 Mar 25 14:47 oraclevda -> ../sda
lrwxrwxrwx. 1 root root 7 Mar 25 14:47 oraclevda1 -> ../sda1
lrwxrwxrwx. 1 root root 7 Mar 25 14:47 oraclevda2 -> ../sda2
lrwxrwxrwx. 1 root root 7 Mar 25 14:47 oraclevda3 -> ../sda3
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdb -> ../sdc
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdc -> ../sdd
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdd -> ../sde
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevde -> ../sdb
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdf -> ../sdf
[opc@sitea ~]$ lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   50G  0 disk 
├─sda1               8:1    0  100M  0 part /boot/efi
├─sda2               8:2    0    1G  0 part /boot
└─sda3               8:3    0 48,9G  0 part 
  ├─ocivolume-root 252:0    0 38,9G  0 lvm  /
  └─ocivolume-oled 252:1    0   10G  0 lvm  /var/oled
sdb                  8:16   0   50G  0 disk 
sdc                  8:32   0   50G  0 disk 
sdd                  8:48   0   50G  0 disk 
sde                  8:64   0   50G  0 disk 
sdf                  8:80   0   50G  0 disk 
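The symlinks above are what my automation relies on. As a tiny, self-contained illustration of the mechanism, the following sketch uses a scratch directory standing in for /dev with made-up names; it is not something you would run against real devices:

```shell
# simulate the udev-managed layout: oracleoci/oraclevdb points at whatever
# kernel device node (here: sdc) happens to back the volume today
tmp=$(mktemp -d)
mkdir -p "$tmp/oracleoci"
touch "$tmp/sdc"
ln -s ../sdc "$tmp/oracleoci/oraclevdb"

# the persistent name resolves to the current backing device
resolved=$(readlink -f "$tmp/oracleoci/oraclevdb")
echo "$resolved"
rm -rf "$tmp"
```

Scripts keep referencing the stable oracleoci name; only the symlink target changes when the kernel enumerates devices in a different order.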


There are many ways to complete tasks, and cloud providers usually offer plenty of them. I previously wrote about ensuring device name persistence using the OCI CLI whereas this post covers Terraform. Looking back and comparing both I have to say that I like the new approach better.

Installing Ansible on Oracle Linux 8 for test and development use

I have previously written about installing Ansible on Oracle Linux 7 for non-production use. A similar approach can be taken to install Ansible on Oracle Linux 8. This is a quick post to show you how I did that in my Vagrant (lab) VM.

As is the case with Oracle Linux 7, the Extra Packages for Enterprise Linux (EPEL) repository is listed in a section labelled “Packages for Test and Development”. As per that label, these packages come with the following warning:

Note: The contents in the following repositories are for development purposes only. Oracle suggests these not be used in production.

This is really important!

If you are ok with the limitation I just quoted from Oracle’s YUM server, please read on. If not, head back to the official Ansible documentation and use a different installation method instead. I only use Ansible in my own lab and therefore don’t mind.

Enabling the EPEL repository

The first step is to enable the EPEL repository. For quite some time now, Oracle has split the monolithic YUM configuration file into smaller, more manageable pieces. For EPEL, you need to install oracle-epel-release-el8.x86_64:

[vagrant@dev ~]$ sudo dnf info oracle-epel-release-el8.x86_64
Last metadata expiration check: 1:51:09 ago on Wed 10 Feb 2021 09:30:41 UTC.
Installed Packages
Name         : oracle-epel-release-el8
Version      : 1.0
Release      : 2.el8
Architecture : x86_64
Size         : 18 k
Source       : oracle-epel-release-el8-1.0-2.el8.src.rpm
Repository   : @System
From repo    : ol8_baseos_latest
Summary      : Extra Packages for Enterprise Linux (EPEL) yum repository
             : configuration
License      : GPLv2
Description  : This package contains the  Extra Packages for Enterprise Linux
             : (EPEL) yum repository configuration.

[vagrant@dev ~]$  

A quick sudo dnf install oracle-epel-release-el8 will install the package and create the EPEL repository configuration. Until this stage the new repository is known, but still disabled. This is what it looked like on my (custom built) Oracle Linux 8.3 Vagrant box, booted into UEK 6:

[vagrant@dev ~]$ sudo dnf repolist
repo id           repo name
ol8_UEKR6         Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)
ol8_appstream     Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest Oracle Linux 8 BaseOS Latest (x86_64)
[vagrant@dev ~]$  

If you are ok with the caveat mentioned earlier (development purpose, no production use…, see above) you can enable the EPEL repository:

[vagrant@dev ~]$ sudo yum-config-manager --enable ol8_developer_EPEL
[vagrant@dev ~]$ sudo dnf repolist
repo id            repo name
ol8_UEKR6          Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)
ol8_appstream      Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest  Oracle Linux 8 BaseOS Latest (x86_64)
ol8_developer_EPEL Oracle Linux 8 EPEL Packages for Development (x86_64)
[vagrant@dev ~]$   

The output of dnf repolist confirms that EPEL is now enabled.

Installing Ansible

With the repository enabled you can search for Ansible:

[vagrant@dev ~]$ sudo dnf info ansible
Last metadata expiration check: 0:00:10 ago on Wed 10 Feb 2021 11:26:57 UTC.
Available Packages
Name         : ansible
Version      : 2.9.15
Release      : 1.el8
Architecture : noarch
Size         : 17 M
Source       : ansible-2.9.15-1.el8.src.rpm
Repository   : ol8_developer_EPEL
Summary      : SSH-based configuration management, deployment, and task
             : execution system
URL          :
License      : GPLv3+
Description  : Ansible is a radically simple model-driven configuration
             : management, multi-node deployment, and remote task execution
             : system. Ansible works over SSH and does not require any software
             : or daemons to be installed on remote nodes. Extension modules can
             : be written in any language and are transferred to managed
             : machines automatically.


[vagrant@dev ~]$  

Mind you, 2.9.15 was the current release at the time of writing. If you hit the blog by means of a search engine, the version will most likely be different. Let’s install Ansible:

[vagrant@dev ~]$ sudo dnf install ansible ansible-doc
Last metadata expiration check: 0:01:06 ago on Wed 10 Feb 2021 11:26:57 UTC.
Dependencies resolved.
 Package            Arch   Version                     Repository          Size
Installing:
 ansible            noarch 2.9.15-1.el8                ol8_developer_EPEL  17 M
 ansible-doc        noarch 2.9.15-1.el8                ol8_developer_EPEL  12 M
Installing dependencies:
 python3-babel      noarch 2.5.1-5.el8                 ol8_appstream      4.8 M
 python3-jinja2     noarch 2.10.1-2.el8_0              ol8_appstream      538 k
 python3-jmespath   noarch 0.9.0-11.el8                ol8_appstream       45 k
 python3-markupsafe x86_64 0.23-19.el8                 ol8_appstream       39 k
 python3-pip        noarch 9.0.3-18.el8                ol8_appstream       20 k
 python3-pytz       noarch 2017.2-9.el8                ol8_appstream       54 k
 python3-pyyaml     x86_64 3.12-12.el8                 ol8_baseos_latest  193 k
 python3-setuptools noarch 39.2.0-6.el8                ol8_baseos_latest  163 k
 python36           x86_64 3.6.8-2.module+el8.3.0+7694+550a8252
                                                       ol8_appstream       19 k
 sshpass            x86_64 1.06-9.el8                  ol8_developer_EPEL  28 k
Enabling module streams:
 python36                  3.6                                                 

Transaction Summary
Install  12 Packages

Total download size: 35 M
Installed size: 459 M
Is this ok [y/N]: y



[vagrant@dev ~]$   

A quick test reveals the software works as advertised:

[vagrant@dev ~]$ ansible --version
ansible 2.9.15
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Nov  5 2020, 18:03:20) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5.0.1)]
[vagrant@dev ~]$  

It seems that Ansible has been installed successfully.

Terraform tips’n’tricks: debugging data sources and testing interpolations

I have previously blogged about the use of Terraform data sources to fetch information concerning Oracle Cloud Infrastructure (OCI) resources. The documentation is pretty good, but sometimes you may want to know more about the data returned. This post describes a potential way to debug output of a data source and to evaluate interpolations.

Do you know Data::Dumper?

Perl is one of the programming languages I have worked with in the past. When I did (it really was a long time ago) there wasn’t a proper IDE allowing me to have nice breakpoints and inspect variables so I resorted to the good, old Data::Dumper. It worked pretty much everywhere and showed me the contents of complex data structures when I was a bit at a loss. For example:

#!/usr/bin/env perl

use strict;
use warnings;

use Data::Dumper qw(Dumper);

my %dataStructure = (
    key1 => {
        a => "b",
        c => "d",
    },
    key2 => {
        e => "f",
        g => "h",
    },
);

# ---
# main

print Dumper(\%dataStructure);

When executed, the last line would print the contents of the data structure:

$ perl 
$VAR1 = {
          'key2' => {
                      'g' => 'h',
                      'e' => 'f'
                    },
          'key1' => {
                      'a' => 'b',
                      'c' => 'd'
                    }
        };

Who needs JSON if you can have hashes of hashes (and other data structures) in Perl ;) More seriously though, why is this code relevant to the post? Please read on: there is a remarkably similar yet more powerful feature in Terraform.

Terraform console

When I first read about the Terraform console in the most excellent Terraform Up and Running, I didn’t pay too much attention to it. This proved to be wrong in hindsight: the console can do a lot more than I thought it could.

This example demonstrates how I can debug data structures in Terraform using the console. I take the example I blogged about recently: fetching an Oracle Cloud ID (OCID) for an Oracle-provided Linux 7 image. Please refer back to the post for more details. Let’s assume I have already run terraform apply.

Starting the console

That’s as simple as typing terraform console:

$ terraform console
> help
The Terraform console allows you to experiment with Terraform interpolations.
You may access resources in the state (if you have one) just as you would
from a configuration. For example: "aws_instance.foo.id" would evaluate
to the ID of "aws_instance.foo" if it exists in your state.

Type in the interpolation to test and hit <enter> to see the result.

To exit the console, type "exit" and hit <enter>, or use Control-C or
Control-D.
A useful help message for starters :)

Debugging my data source

My Terraform code uses the following data source:

data "oci_core_images" "ol7_latest" {
        compartment_id = var.compartment_ocid

        operating_system = "Oracle Linux"
        operating_system_version = "7.9"
        shape = "VM.Standard.E2.1.Micro"
}
As per the OCI provider’s documentation, the data source returns an object of type images (a list of images). Let’s see if I can dump any of that. I am using the full path to the images object, as in data.data_source_type.data_source_name.images. Note that the output is shortened to the relevant information:

$ terraform console
> data.oci_core_images.ol7_latest.images
tolist([
  {
    "display_name" = "Oracle-Linux-7.9-2021.01.12-0"
    "id" = ""
    "operating_system" = "Oracle Linux"
    "operating_system_version" = "7.9"
    "size_in_mbs" = "47694"
    "state" = "AVAILABLE"
    "time_created" = "2021-01-11 19:05:21.301 +0000 UTC"
  },
  {
    "display_name" = "Oracle-Linux-7.9-2020.11.10-1"
    "id" = ""
    "operating_system" = "Oracle Linux"
    "operating_system_version" = "7.9"
    "size_in_mbs" = "47694"
    "state" = "AVAILABLE"
    "time_created" = "2020-11-11 06:18:05.628 +0000 UTC"
  },
  {
    "display_name" = "Oracle-Linux-7.9-2020.10.26-0"
    "id" = ""
    "operating_system" = "Oracle Linux"
    "operating_system_version" = "7.9"
    "size_in_mbs" = "47694"
    "state" = "AVAILABLE"
    "time_created" = "2020-10-27 06:33:38.068 +0000 UTC"
  },
])

Right, so, the data source returns a list of 3 potential Oracle Linux 7.9 images I could choose from for my always-free VM. That’s nice to know! Although it doesn’t change my approach of taking the latest :)
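As the time_created values above show, the list is ordered newest-first, which is why taking the first element works. The pick-the-newest logic can be sketched in plain shell using the three display names copied from the console output:

```shell
# sort (time_created, display_name) pairs in descending order and keep the
# top entry; the data is taken from the console output above
images="2021-01-11 Oracle-Linux-7.9-2021.01.12-0
2020-11-11 Oracle-Linux-7.9-2020.11.10-1
2020-10-27 Oracle-Linux-7.9-2020.10.26-0"

latest=$(printf '%s\n' "$images" | sort -r | head -n 1 | cut -d' ' -f2)
echo "$latest"
```

Sorting the ISO-style dates lexically in reverse order puts the most recent image first.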

Inspecting cloud resources

The console isn’t limited to interpolating data sources. You can also look at cloud resources, such as this VCN:

resource "oci_core_vcn" "dummy_vcn" {
        compartment_id = var.compartment_ocid

        cidr_block = ""
        display_name = "iAmaDemoVCN"
}

Note that you always get a “default security list” when you create a VCN. Please don’t use it as it opens up SSH from everywhere. Please see my previous post on the topic for an example.

Let’s create the VCN:

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # oci_core_vcn.dummy_vcn will be created
  + resource "oci_core_vcn" "dummy_vcn" {
      + cidr_block               = ""
      + cidr_blocks              = (known after apply)
      + compartment_id           = "ocid1.compartment.oc1..aaaaa..."
      + default_dhcp_options_id  = (known after apply)
      + default_route_table_id   = (known after apply)
      + default_security_list_id = (known after apply)
      + defined_tags             = (known after apply)
      + display_name             = "iAmaDemoVCN"
      + dns_label                = (known after apply)
      + freeform_tags            = (known after apply)
      + id                       = (known after apply)
      + ipv6cidr_block           = (known after apply)
      + ipv6public_cidr_block    = (known after apply)
      + is_ipv6enabled           = (known after apply)
      + state                    = (known after apply)
      + time_created             = (known after apply)
      + vcn_domain_name          = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

oci_core_vcn.dummy_vcn: Creating...
oci_core_vcn.dummy_vcn: Creation complete after 0s []

After Terraform finished creating the VCN I can display the “known after apply” values in the console:

$ terraform console
> oci_core_vcn.dummy_vcn
{
  "cidr_block" = ""
  "cidr_blocks" = tolist([
    "",
  ])
  "compartment_id" = ""
  "default_dhcp_options_id" = ""
  "default_route_table_id" = ""
  "default_security_list_id" = ""
  "defined_tags" = tomap({
    "Oracle-Tags.CreatedBy" = "someuser"
    "Oracle-Tags.CreatedOn" = "2021-02-01T19:08:00.774Z"
  })
  "display_name" = "iAmaDemoVCN"
  "dns_label" = tostring(null)
  "freeform_tags" = tomap({})
  "id" = ""
  "ipv6cidr_block" = tostring(null)
  "ipv6public_cidr_block" = tostring(null)
  "is_ipv6enabled" = tobool(null)
  "state" = "AVAILABLE"
  "time_created" = "2021-02-01T19:08:00.774Z"
  "timeouts" = null /* object */
  "vcn_domain_name" = tostring(null)
}

Note that unlike with data sources I didn’t have to specify “resource” at the beginning. Doing so would result in an error telling you that a managed resource “resource” “resource_type” has not been declared in the root module.


Terraform’s console is a very useful tool for debugging your Terraform resources. Remember that when you want to inspect data sources, you need the “data” prefix. When looking at resources, you simply use the resource_type.resource_name syntax.

Terraform tips’n’tricks: getting the latest Oracle Linux 8 image OCID programmatically

This post is a direct follow-up to the previous one where I shared how I used a Terraform data source to fetch the latest Oracle-provided Oracle Linux 7 cloud image identifier. This time around I’d like to fetch the latest Oracle Cloud ID (OCID) for Oracle Linux 8. It requires a slightly different approach, and instead of a single article covering both Oracle Linux versions I decided on the more search-engine-friendly method of splitting the topics.

Terraform versions

I’m still using Terraform 0.14.5/OCI provider 4.10.0.

What’s the latest Oracle Linux 8 OCID?

Referring back to the documentation, the latest Oracle-provided Oracle Linux 8 image at the time of writing is Oracle-Linux-8.3-2021.01.12-0. As my home region is eu-frankfurt-1 and I want to create an always-free VM, I need to go with this OCID:

Getting the OCID via the Terraform data source

So now let’s try to get the same OCID via Terraform’s data source. Here’s the code snippet I used in Cloud Shell:

# using Instance Principal for authentication
provider "oci" {
    auth = "InstancePrincipal"
    region = var.oci_region
}

data "oci_core_images" "ol8_latest" {
        compartment_id = var.compartment_ocid

        operating_system = "Oracle Linux"
        operating_system_version = "8"
        shape = "VM.Standard.E2.1.Micro"
}

output "latest_ol8_image" {
        value = data.oci_core_images.ol8_latest.images[0].id
}
Note that unlike with Oracle Linux 7 you don’t specify the “dot” release. Doing so will raise an error with Terraform 0.14.5/OCI provider 4.10.0.

Let’s run the code:

$ terraform apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

latest_ol8_image = ""

A quick comparison reveals that my code fetched the intended OCID; QED.


I really like not having to specify the dot release with Oracle-provided OL8 images as it’s more future-proof. As soon as Oracle releases Oracle Linux 8.4 in their cloud, I don’t need to change a single line of code to use it.

On the other hand, if my code requires a different (more specific) image, some trickery is needed. I’ll write about a possible solution in another post.

Terraform tips’n’tricks: getting the latest Oracle Linux 7 image OCID programmatically

As with all cloud providers you need to specify an operating system image when creating virtual machines using Terraform in Oracle Cloud Infrastructure (OCI). This can either be an Oracle supplied image, or a custom image you built. This post describes how to fetch the most recent Oracle-provided image for Oracle Linux 7 in Terraform. I am planning another post for Oracle Linux 8 in the future.

Terraform versions

When writing this post Terraform 0.14.5 was the latest and greatest release. The terraform init command downloaded release 4.10.0 of the OCI provider.

Supported Operating System Images

Part of the details you need to provide when starting a VM in OCI is the operating system image ID. Oracle calls their cloud identifiers “Oracle Cloud ID” or OCID for short. You can find all supported image OCIDs as part of the OCI documentation.

For instance, if you want to create an Oracle Linux 7 VM, the latest “standard” image OCIDs at the time of writing could be found here. Now all you need to do is find the OCID corresponding to your region and compatible with the VM shape you want to run. My target region is eu-frankfurt-1 and I want to start an always-free VM, so I’ll have to pick this OCID for Oracle-Linux-7.9-2021.01.12-0:

I love the documentation page for reference, but it is not very practical. A new release of Oracle’s image requires me to update the VM configuration in Terraform. Wouldn’t it be nice if I could look the most recent version up programmatically?

Looking up the latest OCID for your Oracle Linux 7 image

This little code snippet shows how to fetch the OCID for the latest Oracle Linux 7.9 image for use with an always-free VM instance. I am running this in my cloud shell; if you want to run this locally you’d have to update the provider configuration accordingly.

provider "oci" {
        auth = "InstancePrincipal"
        region = var.oci_region
}

# this is the data source for Oracle Linux 7 images
data "oci_core_images" "ol7_latest" {
        compartment_id = var.compartment_ocid

        operating_system = "Oracle Linux"
        operating_system_version = "7.9"
        shape = "VM.Standard.E2.1.Micro"
}

# now let's print the OCID
output "latest_ol7_image" {
        value = data.oci_core_images.ol7_latest.images[0].id
}

You’ll notice this code snippet doesn’t create anything; it merely fetches information and prints it. Terraform uses data sources for this purpose, returning the desired information. In the case of the oci_core_images data source, Terraform will return the list of images matching the specification.

Once the usual terraform init has been run you can execute the code and print the image OCID:

$ terraform apply

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

latest_ol7_image = ""

Nice! This way I get the same result but unlike with the hard-coded version I don’t need to worry about a later Oracle Linux 7.9 image.

I can easily reference the OCID in the oci_core_instance’s source_details.

resource "oci_core_instance" "some_instance" {
        # oci_ads is another data source looking up names of the ADs in my region
        availability_domain = data.oci_identity_availability_domains.oci_ads.availability_domains[0].name
        compartment_id = var.compartment_ocid
        shape = "VM.Standard.E2.1.Micro"

        source_details {
                source_id = data.oci_core_images.ol7_latest.images[0].id
                source_type = "image"
        }

        # remaining arguments (networking etc.) omitted for brevity
}


You can’t simply specify “7” as the operating_system_version attribute; when I tried that, Terraform complained:

 $ grep operating_system_version 
        operating_system_version = "7"
$ terraform plan

Error: Invalid index

  on line 24, in output "latest_ol7_image":
  24:         value = data.oci_core_images.ol7_latest.images[0].id
    |----------------
    | data.oci_core_images.ol7_latest.images is empty list of object

The given key does not identify an element in this collection value. 

So remember to specify the dot release for Oracle Linux 7 and update it when a new one comes out.

Device name persistence in the cloud: OCI

Device name persistence is an important concept for everyone deploying the Oracle database. In this little series I’ll show how you can achieve device name persistence with Oracle Cloud Infrastructure (OCI) and block storage. I am hoping to share future parts for Azure and AWS.

In the example I’m going to prepare a cloud VM for the installation of Oracle Grid Infrastructure 19.9.0. To do so I have created a number of block devices in addition to the boot volume:

  • One block volume to contain the Oracle binaries
  • Two block volumes to be used as +DATA
  • Two more block volumes for +RECO

This is going to be a playground environment, so the block volume size is unrealistically small. You will certainly need larger block devices for a production environment. Additionally there is most likely a cost associated with creating these resources, so be careful!

Block devices

The following block devices have been created previously, and are waiting to be attached to the VM:

cloudshell:~ (eu-frankfurt-1)$ oci bv volume list -c $C \
 --query "data [?contains(\"display-name\", 'dbinst1')].{AD:\"availability-domain\",name:\"display-name\"}" \
 --output table
+--------------------------+--------------+
| AD                       | name         |
+--------------------------+--------------+
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv01 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv02 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv03 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv04 |
| IHsr:EU-FRANKFURT-1-AD-3 | dbinst1-bv05 |
+--------------------------+--------------+

These now need to be attached to my VM, called dbinst1. You may have guessed ;)

Block device attachment

Once the block devices are created, they need to be attached to the VM. There are many ways to do so, but since I’m using a script in Cloud Shell I went with the OCI Command Line Interface (CLI). For example:

oci compute volume-attachment attach-paravirtualized-volume \
--instance-id \
--volume-id \
--device "/dev/oracleoci/oraclevdf" 

This command attached the 5th block volume to the VM as /dev/oracleoci/oraclevdf. I have other volumes attached as /dev/oracleoci/oraclevd[a-e] already. Note that I opted to add the block volumes using the paravirtualised option. This is fine for my playground VM where I don’t really expect or need the last bit of I/O performance. If you need performance, you need to go with the iSCSI attachment type.
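Rather than typing the attachment command five times, the invocations can be generated with a small loop. This is just an illustrative sketch: $INSTANCE_OCID and the $VOLUME_OCID_n variables are hypothetical placeholders for the real OCIDs, and the commands are printed for review instead of being executed.

```shell
# build one attach command per volume; the OCID variables are placeholders,
# so the commands are echoed rather than run
attach_cmds=""
i=0
for suffix in b c d e f; do
  i=$((i + 1))
  attach_cmds="${attach_cmds}oci compute volume-attachment attach-paravirtualized-volume --instance-id \$INSTANCE_OCID --volume-id \$VOLUME_OCID_${i} --device /dev/oracleoci/oraclevd${suffix}
"
done
printf '%s' "$attach_cmds"
```

Once the output looks right, the commands can be run one by one (or the echo replaced with the actual invocation).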

Block device use

And this is all there is to it: the para-virtualised block devices are immediately visible on dbinst1:

[opc@dbinst1 ~]$ lsscsi
[2:0:0:1]    disk    ORACLE   BlockVolume      1.0   /dev/sde 
[2:0:0:2]    disk    ORACLE   BlockVolume      1.0   /dev/sda 
[2:0:0:3]    disk    ORACLE   BlockVolume      1.0   /dev/sdb 
[2:0:0:4]    disk    ORACLE   BlockVolume      1.0   /dev/sdd 
[2:0:0:5]    disk    ORACLE   BlockVolume      1.0   /dev/sdc 
[2:0:0:6]    disk    ORACLE   BlockVolume      1.0   /dev/sdf 
[opc@dbinst1 ~]$  

The only thing to be aware of is that you shouldn’t use the native block device. Instead, use the device name you assigned when attaching the block device:

[opc@dbinst1 ~]$ ls -l /dev/oracleoci/*
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevda -> ../sde
lrwxrwxrwx. 1 root root 7 Nov 24 06:38 /dev/oracleoci/oraclevda1 -> ../sde1
lrwxrwxrwx. 1 root root 7 Nov 24 06:38 /dev/oracleoci/oraclevda2 -> ../sde2
lrwxrwxrwx. 1 root root 7 Nov 24 06:38 /dev/oracleoci/oraclevda3 -> ../sde3
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevdb -> ../sda
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevdc -> ../sdb
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevdd -> ../sdd
lrwxrwxrwx. 1 root root 6 Nov 24 06:38 /dev/oracleoci/oraclevde -> ../sdc
lrwxrwxrwx. 1 root root 6 Nov 24 07:18 /dev/oracleoci/oraclevdf -> ../sdf
[opc@dbinst1 ~]$  

My Ansible playbooks reference /dev/oracleoci/oraclevd*, and that way ensure device name persistence across reboots. Happy automating!

Handling kernel upgrades with Ansible prior to an Oracle installation

As part of the process of setting up VMs in the cloud for use with the Oracle database it is frequently necessary to update the systems to the latest and greatest, and hopefully more secure, packages before the Oracle installation can begin. In a similar way I regularly upgrade the (cloud-vendor provided) base image when building a custom image using Packer. This calls for an automated process in my opinion, and Ansible is the right tool for me.

I may have mentioned once or twice that a Spacewalk powered (or equivalent) local repository is best for consistency. You may want to consider using it to ensure all systems are upgraded to the same packages. Applying the same package updates in production as you did in test (after successful regression testing of course) makes testing in lower-tier environments so much more meaningful ;)


My target VM is running in Oracle’s Cloud, and I’m spinning it and its required supporting infrastructure up with a small Terraform script. The playbook is executed immediately after the VM becomes available.

The playbook you are about to see later in the article is only intended for use prior to the initial installation of the Oracle binaries, after the VM has been freshly provisioned.

My playbook will determine whether a new kernel-uek has been installed as part of the upgrade process, and optionally reboot the VM if necessary. A reboot is acceptable in my scenario where I’m building a new VM with Oracle software to be installed as part of the process. In other cases this approach is not tenable, so consider yourself warned. A flag controls the reboot behaviour.
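The kernel check itself boils down to comparing the running kernel with the newest installed one. Here is a rough sketch of that idea in shell, using a scratch directory in place of /lib/modules and made-up UEK version strings (the real playbook gathers this via Ansible facts):

```shell
# simulate two installed kernels in a scratch directory standing in
# for /lib/modules
moddir=$(mktemp -d)
mkdir -p "$moddir/5.4.17-2011.0.7.el7uek.x86_64" \
         "$moddir/5.4.17-2036.103.3.el7uek.x86_64"

running="5.4.17-2011.0.7.el7uek.x86_64"        # what uname -r would report
latest_kernel=$(ls "$moddir" | sort -V | tail -n 1)  # newest installed kernel

if [ "$running" = "$latest_kernel" ]; then
  verdict="kernel up to date"
else
  verdict="reboot required"
fi
echo "$verdict"
rm -rf "$moddir"
```

If the running kernel is older than the newest installed one, a reboot is pending; the playbook's reboot flag decides whether to act on that immediately.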

Be advised that not using a local repository can lead to an upgrade of kernel UEK5 to UEK6. The most current oraclelinux-release-el7 package ships with the ol7_UEKR5 repository disabled and ol7_UEKR6 repository enabled. The playbook therefore enables the UEK5 repository explicitly, and disables ol7_UEKR6 to remain on the UEK5 kernel branch. It also checks for UEK5 and mandates Oracle Linux 7.8 or newer because everything is really old by modern standards …

A few days ago Oracle Linux 7.9 became available, and again – depending on your yum configuration – you might end up upgrading 7.8 to 7.9. That is exactly what I want, but not necessarily what you want. Please review your configuration carefully to ensure the correct outcome. It goes without saying that testing is a must.

Introducing Ansible for system upgrades

I have been using Ansible a lot over the past years; it’s a handy tool to know. The question I wanted to answer is: how can I perform a full system upgrade in Ansible prior to installing the Oracle database?

The Ansible Playbook

Before talking more about the playbook, let’s see it first:

# Ansible playbook to update all RPM packages on a lab VM (powered by Oracle Linux 7.8 and later)
# prior to the installation of Oracle binaries (only).
# It is _only_ intended to be used as part of the initial Oracle installation on a fresh VM.
# The system upgrade includes the kernel as well, as it's an integral part of the system 
# and newer kernels provide security fixes and performance enhancements.
# The installation of a new kernel requires a reboot to become effective. You can control
# whether you want to reboot straight away or later (see variable "reboot"). The default is
# not to reboot the VM as part of the playbook's execution.
# The playbook requires the server to be booted into Oracle's Unbreakable Enterprise Kernel
# (UEK) for now, this can easily be changed if needed. 
# The reboot module requires ansible >= 2.7. 
# Both conditions are enforced. 
# As an added safety net, the playbook checks  for the presence of /etc/oraInst.loc and 
# /etc/oratab, which admittedly isn't a perfect way of identifying the presence of Oracle 
# software, but it's better than nothing. 
# It is _your_ responsibility to ensure you don't run this playbook outside the initial Oracle 
# software installation.
# - reboot: if set to true the playbook is going to reboot the VM straight away. If set to
#      false it is your responsibility to reboot into the new kernel
# Please refer to for more details

- hosts: oracledb
  name: full system upgrade of a lab VM prior to the installation of Oracle 19c
  become: yes
  vars:
    reboot: false

  tasks:

  - name: fail if the O/S is not detected as Oracle Linux 7.8 or later
    fail:
      msg: This playbook is written for Oracle Linux 7.8 and later
    when: ansible_distribution != 'OracleLinux' or ansible_distribution_version is version('7.8', '<')

  - name: fail if Oracle's UEK5 is not in use
    fail:
      msg: this playbook only covers Oracle's Unbreakable Enterprise Kernel UEK Release 5
    when: not ansible_kernel is search("4.14")

  - name: fail if the Ansible release is older than 2.7
    fail:
      msg: This playbook requires Ansible 2.7 or later
    when: ansible_version.full is version('2.7', '<')

  # no guarantee this detects _your_ Oracle installation, see notes
  - name: try to detect Oracle software
    block:

      - name: try to detect Oracle Universal Installer's inventory pointer
        stat: path=/etc/oraInst.loc
        register: orainst

      - name: fail if inventory pointer was detected
        fail:
          msg: It appears as if Oracle database software has already been installed, aborting
        when: orainst.stat.exists | bool

      - name: try to detect the database's oratab file
        stat: path=/etc/oratab
        register: oratab

      - name: fail if an oratab file was detected
        fail:
          msg: It appears as if Oracle database software has already been installed, aborting
        when: oratab.stat.exists | bool

  # this is where the actual work is done
  - name: update all packages (remain on the UEK5 branch)
    yum:
      name: '*'
      state: latest
      enablerepo: ol7_UEKR5
      disablerepo: ol7_UEKR6
      update_cache: yes

  - name: get latest kernel UEK installed on disk
    shell: /usr/bin/rpm -q kernel-uek | /usr/bin/sort -V | /usr/bin/tail -1
    args:
      warn: false
    register: latest_uek

  - name: trim the RPM name to make it easier to compare with Ansible's facts
    set_fact:
      latest_uek_rel: "{{ latest_uek.stdout | regex_replace('kernel-uek-(.*)', '\\1') }}"

  - name: print detected kernel releases
    debug:
      msg: |
        This server booted into UEK {{ ansible_kernel }},
        the latest kernel on disk is {{ latest_uek_rel }}.

  - name: reboot the VM
    reboot:
      msg: Ansible is rebooting the VM now
    when: ansible_kernel != latest_uek_rel and reboot | bool

  - name: print reboot reminder
    debug:
      msg: A new kernel has been installed, please remember to reboot the VM at an opportune moment
    when: ansible_kernel != latest_uek_rel and not reboot | bool

Although it looks lengthy, the code is straightforward. It updates all packages after a few safety checks. I experimented with another flag, upgrade_kernel, but had to learn the hard way that creating an exclusion list for yum is quite difficult given the many different packages whose names start with kernel… At the end of the day I decided against its use.

Kernel Update

The hardest part was to come up with a way to compare the boot kernel with the latest installed kernel (on disk). The playbook only concerns itself with the Unbreakable Enterprise Kernel (UEK), ignoring the Red Hat Compatible Kernel (RHCK).

The latest kernel on disk can be found using a combination of the rpm, sort and tail commands. I tried to achieve the same result with Ansible’s yum module and the list option, but would have had to spend quite a bit of time working out how to get the latest installed kernel this way. Back to shell it was! Sort’s -V option is magical as it allows me to sort the kernels by release in ascending order. The last row returned has to be the most current kernel. If this kernel release (after being stripped of the kernel-uek- prefix) doesn’t match the boot kernel, a reboot is necessary.
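To see why the version-aware sort matters, here is a standalone sketch with made-up release strings. A plain lexical sort would rank the 999 build above the 1902 build because ‘9’ sorts after ‘1’:

```shell
#!/bin/sh
# Made-up kernel-uek release strings, for demonstration only
releases='kernel-uek-4.14.35-999.0.1.el7uek.x86_64
kernel-uek-4.14.35-1902.306.2.el7uek.x86_64
kernel-uek-4.14.35-2025.401.4.el7uek.x86_64'

# version-aware sort: the last line is the most current kernel
latest=$(printf '%s\n' "$releases" | sort -V | tail -1)

# strip the kernel-uek- prefix, mirroring the regex_replace filter in the playbook
echo "${latest#kernel-uek-}"   # 4.14.35-2025.401.4.el7uek.x86_64
```

The stripped string can then be compared directly with ansible_kernel.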

If you set the reboot flag to true, the system is rebooted straight away. That is useful if you are building a Packer image, or preparing a VM for an Oracle installation. If you don’t, a friendly message reminds you of the need to reboot at your convenience.


I don’t run this playbook in isolation; it has become an integral part of my Ansible-driven Oracle installation toolkit. I prefer to get the software updates out of the way early in the Oracle software installation process; it’s much easier this way.

Oracle Cloud Infrastructure: using Network Security Groups and the caveat with the subnet’s default security list

This is going to be one of these posts I’m mainly writing to myself, in the hope that a) I don’t forget about that topic too soon and b) someone might have the same question and doesn’t want to spin up an environment to find out.

Broadly speaking Oracle Cloud Infrastructure (and some other cloud providers) gives you 2 different means of securing the Virtual Cloud Network at the VCN level:

  • Security Lists, which apply to all VNICs in the subnet they are associated with
  • Network Security Groups (NSGs), which apply only to the VNICs explicitly assigned to them

After I got to grips with the latter when Oracle released them, I have used them a lot, for the reasons outlined in the documentation. So why this post? It’s to raise awareness that you might feel secure thanks to a Network Security Group (NSG) when in fact you aren’t.

Obligatory Cloud Post Warning: if you create resources described in this post you might incur cost.

It’s easier to show with an example

Assume you created a VCN with 1 public, regional subnet for your admin host, running Oracle Linux 7. You haven’t created a VPN to connect to the VCN yet. However, you created an NSG for the admin host, only allowing your IP to access it. The admin host’s VNIC is assigned to the NSG, as in this Terraform example, shortened to include only the necessary resources:

resource "oci_core_vcn" "simplevm_vcn" {

  cidr_block     = var.simplevm_vcn_cidr_block
  compartment_id = var.compartment_ocid
  display_name   = "simplevm VCN"
  dns_label      = "simplevm"

  defined_tags   = { "" = "simple-vm" }
}

# NSG + SSH rule
resource "oci_core_network_security_group" "simplevm_bastion_nsg" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.simplevm_vcn.id
  display_name   = "Simple VM bastion NSG"
  defined_tags   = { "" = "simple-vm" }
}

resource "oci_core_network_security_group_security_rule" "simplevm_bastion_nsg_ssh_rule" {
  network_security_group_id = oci_core_network_security_group.simplevm_bastion_nsg.id
  description               = "ssh"
  direction                 = "INGRESS"
  protocol                  = 6
  source_type               = "CIDR_BLOCK"
  source                    = var.controlhost

  tcp_options {
    destination_port_range {
      min = 22
      max = 22
    }
  }
}

# Subnet
resource "oci_core_subnet" "simplevm_vcn_bastion_sn" {
  cidr_block     = var.simplevm_vcn_bastion_sn_cidr_block
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.simplevm_vcn.id

  display_name   = "Simple VM bastion SN"
  dns_label      = "simplevm"

  defined_tags   = { "" = "simple-vm" }

  prohibit_public_ip_on_vnic = false
  route_table_id             = oci_core_route_table.simplevm_rt.id # route table resource not shown
}


The big question must be: is the admin host accessible via SSH from everywhere, or merely from the address stored in var.controlhost? You’ll have to take my word for it, it’s a /32 address ;)

# simple VM
resource "oci_core_instance" "simplevm" {
  availability_domain = var.availability_domain
  compartment_id      = var.compartment_ocid
  shape               = "VM.Standard.E2.1.Micro"

  defined_tags        = { "" = "simple-vm" }

  create_vnic_details {

    assign_public_ip = true
    display_name     = "simplevm"
    hostname_label   = "simplevm"
    nsg_ids = [
      oci_core_network_security_group.simplevm_bastion_nsg.id,
    ]
    subnet_id = oci_core_subnet.simplevm_vcn_bastion_sn.id
  }

  # remaining arguments (image, metadata, ...) omitted for brevity
}

Unfortunately “simplevm” is accessible from everywhere, thanks to the Default Security List implicitly created with the public subnet. It opens port 22 to the world for incoming traffic. The way Security Lists and Network Security Groups work is described in the Security Rules section of the documentation:

A packet in question is allowed if any rule in any of the relevant lists and groups allows the traffic…

— aforementioned Oracle Cloud Infrastructure documentation

Clearly the default security list matches with its ingress rule of to port 22, and traffic is allowed to reach the VM.


So with that said, in scenarios like the one shown above, please ensure you either change the default security list, or create a separate, more restrictive one and assign it to the subnet.
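The latter option could look roughly like the following sketch. It reuses the resource and variable names from the example above and mirrors the NSG’s SSH rule; it is an illustration, not code taken from a running configuration:

```hcl
# Sketch: a restrictive security list mirroring the NSG's SSH rule.
# Resource and variable names follow the example above and are illustrative.
resource "oci_core_security_list" "simplevm_bastion_sl" {
  compartment_id = var.compartment_ocid
  vcn_id         = oci_core_vcn.simplevm_vcn.id
  display_name   = "Simple VM bastion SL"

  # ingress: SSH from the control host only
  ingress_security_rules {
    protocol = "6"
    source   = var.controlhost

    tcp_options {
      min = 22
      max = 22
    }
  }

  # egress: allow all outbound traffic
  egress_security_rules {
    protocol    = "all"
    destination = ""
  }
}
```

Assigning it is then a matter of setting security_list_ids = [oci_core_security_list.simplevm_bastion_sl.id] on the oci_core_subnet resource, at which point the default security list is no longer associated with the subnet.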

Better still, ensure you use a more secure approach to accessing your cloud network. There are many ways to do so, in the case of Oracle you find a dedicated module in the free OCI foundation training. Under “Connectivity” you will find both a video as well as slides.

The use of bastion hosts is frowned upon in some circles, and that sounds about right to me.

Upgrading oraclelinux-release-el7 might trigger an upgrade to UEK Release 6

While building a demo environment for an upcoming presentation I noticed an upgrade from UEK 5 to UEK 6 on my Oracle Linux 7 VM. As it turned out, the kernel change was triggered by an upgrade of the oraclelinux-release-el7 RPM. I am a great fan of Oracle’s UEK and the team behind it, so this is a welcome change for me. It might however meet you unprepared, which is why I put this little article together.

Whether or not the contents of my article applies to you depends on your yum configuration, and the source of your packages.


Oracle split the monolithic public-yum.repo file into more manageable parts a little while ago (see for example my older post here). The bulk of the files in /etc/yum.repos.d is managed by the aforementioned oraclelinux-release-el7 package:

[opc@simplevm ~]$ rpm -ql oraclelinux-release-el7
[opc@simplevm ~]$  

To prove the point I spun up a VM in Oracle Cloud Infrastructure based on the latest Oracle Linux image at the time of writing: Oracle-Linux-7.8-2020.09.23-0. The image is based on Oracle Linux 7.8 with fixes up to September 23rd. It ships with UEK5 4.14.35-1902.306.2.el7uek.x86_64.

The following yum repositories are configured when the machine comes up:

[opc@simplevm ~]$ sudo yum repolist 
Loaded plugins: langpacks, ulninfo
repo id                         repo name                                                                         status
ol7_UEKR5/x86_64                Latest Unbreakable Enterprise Kernel Release 5 for Oracle Linux 7Server (x86_64)     217
ol7_addons/x86_64               Oracle Linux 7Server Add ons (x86_64)                                                467
ol7_developer/x86_64            Oracle Linux 7Server Development Packages (x86_64)                                 1,611
ol7_developer_EPEL/x86_64       Oracle Linux 7Server EPEL Packages for Development (x86_64)                       33,126
ol7_ksplice                     Ksplice for Oracle Linux 7Server (x86_64)                                          8,957
ol7_latest/x86_64               Oracle Linux 7Server Latest (x86_64)                                              20,917
ol7_oci_included/x86_64         Oracle Software for OCI users on Oracle Linux 7Server (x86_64)                       627
ol7_optional_latest/x86_64      Oracle Linux 7Server Optional Latest (x86_64)                                     15,257
ol7_software_collections/x86_64 Software Collection Library release 3.0 packages for Oracle Linux 7 (x86_64)      15,366
repolist: 96,545
[opc@simplevm ~]$  

The effect of upgrading oraclelinux-release-el7

To cut a long story short let’s update oraclelinux-release-el7:

[opc@simplevm ~]$ rpm -q oraclelinux-release-el7
oraclelinux-release-el7-1.0-12.1.el7.x86_64
[opc@simplevm ~]$ sudo yum upgrade oraclelinux-release-el7
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oraclelinux-release-el7.x86_64 0:1.0-12.1.el7 will be updated
---> Package oraclelinux-release-el7.x86_64 0:1.0-13.1.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

 Package                               Arch                 Version                      Repository                Size
 oraclelinux-release-el7               x86_64               1.0-13.1.el7                 ol7_latest                19 k

Transaction Summary
Upgrade  1 Package

Total download size: 19 k
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
oraclelinux-release-el7-1.0-13.1.el7.x86_64.rpm                                                  |  19 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : oraclelinux-release-el7-1.0-13.1.el7.x86_64                                                          1/2 
warning: /etc/yum.repos.d/oracle-linux-ol7.repo created as /etc/yum.repos.d/oracle-linux-ol7.repo.rpmnew
  Cleanup    : oraclelinux-release-el7-1.0-12.1.el7.x86_64                                                          2/2 
  Verifying  : oraclelinux-release-el7-1.0-13.1.el7.x86_64                                                          1/2 
  Verifying  : oraclelinux-release-el7-1.0-12.1.el7.x86_64                                                          2/2 

Updated:
  oraclelinux-release-el7.x86_64 0:1.0-13.1.el7

Complete!

[opc@simplevm ~]$  

Once that’s done, let’s compare the list of enabled repositories:

[opc@simplevm ~]$ sudo yum repolist
Loaded plugins: langpacks, ulninfo
ol7_UEKR6                                                                                        | 2.8 kB  00:00:00     
ol7_addons                                                                                       | 2.8 kB  00:00:00     
ol7_latest                                                                                       | 3.4 kB  00:00:00     
ol7_optional_latest                                                                              | 2.8 kB  00:00:00     
(1/2): ol7_UEKR6/x86_64/updateinfo                                                               |  52 kB  00:00:00     
(2/2): ol7_UEKR6/x86_64/primary_db                                                               | 7.6 MB  00:00:00     
repo id                         repo name                                                                         status
ol7_UEKR6/x86_64                Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 7Server (x86_64)     174
ol7_addons/x86_64               Oracle Linux 7Server Add ons (x86_64)                                                467
ol7_developer/x86_64            Oracle Linux 7Server Development Packages (x86_64)                                 1,611
ol7_developer_EPEL/x86_64       Oracle Linux 7Server EPEL Packages for Development (x86_64)                       33,126
ol7_ksplice                     Ksplice for Oracle Linux 7Server (x86_64)                                          8,957
ol7_latest/x86_64               Oracle Linux 7Server Latest (x86_64)                                              20,917
ol7_oci_included/x86_64         Oracle Software for OCI users on Oracle Linux 7Server (x86_64)                       627
ol7_optional_latest/x86_64      Oracle Linux 7Server Optional Latest (x86_64)                                     15,257
ol7_software_collections/x86_64 Software Collection Library release 3.0 packages for Oracle Linux 7 (x86_64)      15,366
repolist: 96,502 

If you look carefully you’ll notice the absence of the previously present ol7_UEKR5 repository, which has been replaced by ol7_UEKR6. If you were to upgrade kernel-uek – after all, there are more recent 4.14.x kernels out there – you might be surprised:

[opc@simplevm ~]$ sudo yum upgrade kernel-uek
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package kernel-uek.x86_64 0:5.4.17-2011.7.4.el7uek will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package                   Arch                  Version                                 Repository                Size
 kernel-uek                x86_64                5.4.17-2011.7.4.el7uek                  ol7_UEKR6                 58 M

Transaction Summary
Install  1 Package

Total download size: 58 M
Installed size: 65 M 

Had I hit return at this stage, my system would have upgraded UEK5 to UEK6.

This is exactly the upgrade I would have done, but you might not. UEK Release 6 is still relatively fresh, and you should double-check that your Oracle software is certified with it. This applies specifically to ASM Filter Driver, ASM Cluster File System (ACFS) and ASMLib. To stick with UEK Release 5 for now you can either change the yum configuration in /etc/yum.repos.d/uek-ol7.repo or alternatively call yum-config-manager.
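Editing the repo file amounts to flipping the enabled flags on the two stanzas. A sketch, abbreviated to the relevant lines (baseurl, gpgkey and friends omitted; check against your own file):

```ini
# /etc/yum.repos.d/uek-ol7.repo (sketch, abbreviated)
[ol7_UEKR5]
name=Latest Unbreakable Enterprise Kernel Release 5 for Oracle Linux $releasever ($basearch)
enabled=1

[ol7_UEKR6]
name=Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux $releasever ($basearch)
enabled=0
```

The same end result can be achieved non-interactively with yum-config-manager, as mentioned above.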

Since I tend to perform all these tasks with Ansible, I simply go with the proven enablerepo/disablerepo attributes:

  - name: update all packages (remain on the UEK5 branch)
    yum:
      name: '*'
      state: latest
      enablerepo: ol7_UEKR5
      disablerepo: ol7_UEKR6
      update_cache: yes

A similar approach is possible on the command line:

[opc@simplevm ~]$ sudo yum upgrade kernel-uek --enablerepo ol7_UEKR5 --disablerepo ol7_UEKR6
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package kernel-uek.x86_64 0:4.14.35-2025.401.4.el7uek will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package                  Arch                 Version                                    Repository               Size
 kernel-uek               x86_64               4.14.35-2025.401.4.el7uek                  ol7_UEKR5                53 M

Transaction Summary
Install  1 Package

Total download size: 53 M
Installed size: 58 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
kernel-uek-4.14.35-2025.401.4.el7uek.x86_64.rpm                                                  |  53 MB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kernel-uek-4.14.35-2025.401.4.el7uek.x86_64                                                          1/1 


  Verifying  : kernel-uek-4.14.35-2025.401.4.el7uek.x86_64                                                          1/1 

Installed:
  kernel-uek.x86_64 0:4.14.35-2025.401.4.el7uek

Complete!

[opc@simplevm ~]$  

This way I remained on UEK5.


By enabling the UEK5 repository while at the same time disabling the UEK6 repository I can remain on the Release 5 branch for now, until my Oracle Linux 7.x systems are generally ready for UEK Release 6.

Happy patching!

Introducing Packer: building Vagrant base boxes hands-free

I have referred to Packer in some of my cloud-related presentations as an example of a tool for creating immutable infrastructure. In addition to the cloud, Packer supports a great many other build targets as well. Since I work with VirtualBox and Vagrant a lot, Packer’s ability to create Vagrant base boxes is super awesome. Combined with local box versioning I can build new Vagrant systems in almost no time. More importantly though, I can simply kick the process off, grab a coffee, and when I’m back, enjoy a new build of my Oracle Linux Vagrant base box.

The task at hand

I would like to build a new Vagrant base box for Oracle Linux 7.8, completely hands-off. All my processes and workflows therefore need to be defined in software (Infrastructure as Code).

Since I’m building private Vagrant boxes I don’t intend to share I can ignore the requirements about passwords as documented in the Vagrant documentation, section “Default User Settings”. Instead of the insecure key pair I’m using my own keys as well.

The build environment

I’m using Ubuntu 20.04.1 LTS as my host operating system. Packer 1.5 does all the hard work provisioning VMs for Virtualbox 6.1.12. Ansible 2.9 helps me configure my systems. Vagrant 2.2.7 will power my VMs after they are created.

Except for VirtualBox and Packer I’m using the stock packages supplied by Ubuntu.

How to get there

Packer works by reading a template file and performs the tasks defined therein. If you are new to Packer, I suggest you visit the website for more information and some really great guides.

As of Packer 1.5 you can also use HCL2. Which is nice, as it allows me to reuse (or rather add to) my Terraform skills. However, at the time of writing the documentation warns that HCL2 support is still in beta, which is why I went with the JSON template language.

High-level steps

From a bird’s eye view, my Packer template…

  • Creates a Virtualbox VM from an ISO image
  • Feeds a kickstart file to it for an unattended installation
  • After the VM is up it applies an Ansible playbook to it to install VirtualBox guest additions

The end result should be a fully working Vagrant base box.

Provisioning the VM

The first thing to do is to create the Packer template for my VM image. Properties of a VM are defined in the so-called builders section. As I said before, there are lots of builders available… I would like to create a Virtualbox VM from an ISO image, so I went with virtualbox-iso, which is really super easy to use and well documented. So after a little bit of trial and error I ended up with this:

{
  "builders": [
    {
      "type": "virtualbox-iso",
      "boot_command": [
        "linux text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ol7.ks<enter>"
      ],
      "disk_size": "12288",
      "guest_additions_path": "/home/vagrant/VBoxGuestAdditions.iso",
      "guest_os_type": "Oracle_64",
      "hard_drive_interface": "sata",
      "hard_drive_nonrotational": "true",
      "http_directory": "http",
      "iso_checksum": "1c1471c49025ffc1105d0aa975f7c8e3",
      "iso_checksum_type": "md5",
      "iso_url": "file:///m/stage/V995537-01-ol78.iso",
      "sata_port_count": "5",
      "shutdown_command": "echo 'packer' | sudo -S shutdown -P now",
      "ssh_timeout": "600s",
      "ssh_username": "vagrant",
      "ssh_agent_auth": true,
      "vboxmanage": [
        [ "modifyvm", "{{.Name}}", "--memory", "2048" ],
        [ "modifyvm", "{{.Name}}", "--cpus", "2" ]
      ],
      "vm_name": "packertest"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "ansible/guest_additions.yml",
      "user": "vagrant"
    }
  ],
  "post-processors": [
    {
      "keep_input_artifact": true,
      "output": "/home/martin/vagrant/boxes/",
      "type": "vagrant"
    }
  ]
}

If you are new to Packer this probably looks quite odd, but it’s actually very intuitive after a little while. I haven’t used Packer much before coming up with this example, which is a great testimony to the quality of the documentation.

Note this template has 3 main sections:

  • builders: the virtualbox-iso builder allows me to create a VirtualBox VM based on an (Oracle Linux 7.8) ISO image
  • provisioners: once the VM has been created to my specification I can run Ansible playbooks against it
  • post-processors: this section is important as it creates the Vagrant base box after the provisioner finished its work

Contrary to most examples I found, I’m using SSH keys for communicating with the VM rather than a less secure username/password combination. All you need to do is add the SSH key to the agent via ssh-add before you kick the build off.

While testing the best approach to building the VM and guest additions I ran into a few issues prompting me to upload the guest additions ISO to the vagrant user’s home directory. This way it wasn’t too hard to refer to it in the Ansible playbook (see below).

Kickstarting Oracle Linux 7

The http_directory directive in the first (builder) block is crucial for automating the build. As soon as Packer starts its work, it starts an HTTP server serving the directory indicated by the variable. This directory must obviously exist.

Red-Hat-based distributions allow admins to install the operating system in a fully automated way using the Kickstart format. You provide the Kickstart file to the system when you boot it for the first time. A common way to do so is via HTTP, which is why I’m so pleased about the HTTP server started by Packer. It couldn’t be easier: thanks to my http_directory a web server is already started, and using the HTTPIP and HTTPPort variables I can refer to files inside the directory from within the template.

As soon as Packer boots the VM the Kickstart file is passed as specified in boot_command. I had to look the syntax up using a search engine. It essentially comes down to simulating a bunch of keystrokes as if you were typing them interactively.

Long story short, I don’t need to worry about the installation, at least as long as my Kickstart file is ok. One way to get the Kickstart file right is to use the one that’s created after a manual operating system installation. I usually end up using /root/anaconda-ks.cfg and customise it.

There are 3 essential tasks to complete in the Kickstart file if you want to create a Vagrant base box:

  1. Create the vagrant user account
  2. Allow password-less authentication to the vagrant account via SSH
  3. Add vagrant to the list of sudoers

First I have to include a directive to create a vagrant user:

user --name=vagrant

The sshkey keyword allows me to inject my SSH key into the user’s authorized_keys file.

sshkey --username=vagrant "very-long-ssh-key"

I also have to add the vagrant account to the list of sudoers. Using the %post directive, I inject the necessary line into /etc/sudoers:

%post --log=/root/ks-post.log

/bin/echo "vagrant        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers


Calling the Ansible provisioner

So far I have defined a VM to be created (within the builders section). The installation is completely hands-off thanks to the Kickstart file I provided. However, I’m not done yet: I have yet to install the VirtualBox guest additions. This is done via the Ansible provisioner. It connects as vagrant to the VM and executes the instructions from ansible/guest_additions.yml.

This is a rather simple file:

- hosts: all
  become: yes
  tasks:

  - name: upgrade all packages
    yum:
      name: '*'
      state: latest

  - name: install kernel-uek-devel
    yum:
      name: kernel-uek-devel
      state: present

  - name: reboot to the latest kernel
    reboot:

  # Guest additions are located as per guest_additions_path in
  # Packer's configuration file
  - name: Mount guest additions ISO read-only
    mount:
      path: /mnt/
      src: /home/vagrant/VBoxGuestAdditions.iso
      fstype: iso9660
      opts: ro
      state: mounted

  - name: execute guest additions
    shell: /mnt/VBoxLinuxAdditions.run

In plain English, I’m becoming root before updating all software packages. One of the prerequisites for compiling VirtualBox’s guest additions is the kernel-uek-devel package.

After that operation completes, the VM reboots into the new kernel before mounting the guest additions ISO I asked to be copied to /home/vagrant/VBoxGuestAdditions.iso in the builder section of the template.

Once the ISO file is mounted, I call VBoxLinuxAdditions.run to build the guest additions.

Building the Vagrant base box

Putting it all together, this is the output created by Packer:

$ ANSIBLE_STDOUT_CALLBACK=debug ./packer build oracle-linux-7.8.json 
virtualbox-iso: output will be in this color.

==> virtualbox-iso: Retrieving Guest additions
==> virtualbox-iso: Trying /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Trying /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: /usr/share/virtualbox/VBoxGuestAdditions.iso => /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Retrieving ISO
==> virtualbox-iso: Trying file:///m/stage/V995537-01-ol78.iso
==> virtualbox-iso: Trying file:///m/stage/V995537-01-ol78.iso?checksum=md5%3A1c1471c49025ffc1105d0aa975f7c8e3
==> virtualbox-iso: file:///m/stage/V995537-01-ol78.iso?checksum=md5%3A1c1471c49025ffc1105d0aa975f7c8e3 => /m/stage/V995537-01-ol78.iso
==> virtualbox-iso: Starting HTTP server on port 8232
==> virtualbox-iso: Using local SSH Agent to authenticate connections for the communicator...
==> virtualbox-iso: Creating virtual machine...
==> virtualbox-iso: Creating hard drive...
==> virtualbox-iso: Creating forwarded port mapping for communicator (SSH, WinRM, etc) (host port 2641)
==> virtualbox-iso: Executing custom VBoxManage commands...
    virtualbox-iso: Executing: modifyvm packertest --memory 2048
    virtualbox-iso: Executing: modifyvm packertest --cpus 2
==> virtualbox-iso: Starting the virtual machine...
==> virtualbox-iso: Waiting 10s for boot...
==> virtualbox-iso: Typing the boot command...
==> virtualbox-iso: Using ssh communicator to connect:
==> virtualbox-iso: Waiting for SSH to become available...
==> virtualbox-iso: Connected to SSH!
==> virtualbox-iso: Uploading VirtualBox version info (6.1.12)
==> virtualbox-iso: Uploading VirtualBox guest additions ISO...
==> virtualbox-iso: Provisioning with Ansible...
    virtualbox-iso: Setting up proxy adapter for Ansible....
==> virtualbox-iso: Executing Ansible: ansible-playbook -e ...
    virtualbox-iso: PLAY [all] *********************************************************************
    virtualbox-iso: TASK [Gathering Facts] *********************************************************
    virtualbox-iso: ok: [default]
    virtualbox-iso: [WARNING]: Platform linux on host default is using the discovered Python
    virtualbox-iso: interpreter at /usr/bin/python, but future installation of another Python
    virtualbox-iso: interpreter could change this. See
    virtualbox-iso: ce_appendices/interpreter_discovery.html for more information.
    virtualbox-iso: TASK [upgrade all packages] ****************************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso: TASK [install kernel-uek-devel] ************************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso: TASK [reboot to enable latest kernel] ******************************************
==> virtualbox-iso: EOF
    virtualbox-iso: changed: [default]
    virtualbox-iso: TASK [Mount guest additions ISO read-only] *************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso: TASK [execute guest additions] *************************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso: PLAY RECAP *********************************************************************
    virtualbox-iso: default                    : ok=6    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
==> virtualbox-iso: Gracefully halting virtual machine...
==> virtualbox-iso: Preparing to export machine...
    virtualbox-iso: Deleting forwarded port mapping for the communicator (SSH, WinRM, etc) (host port 2641)
==> virtualbox-iso: Exporting virtual machine...
    virtualbox-iso: Executing: export packertest --output output-virtualbox-iso/packertest.ovf
==> virtualbox-iso: Deregistering and deleting VM...
==> virtualbox-iso: Running post-processor: vagrant
==> virtualbox-iso (vagrant): Creating a dummy Vagrant box to ensure the host system can create one correctly
==> virtualbox-iso (vagrant): Creating Vagrant box for 'virtualbox' provider
    virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packertest-disk001.vmdk
    virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packertest.ovf
    virtualbox-iso (vagrant): Renaming the OVF to box.ovf...
    virtualbox-iso (vagrant): Compressing: Vagrantfile
    virtualbox-iso (vagrant): Compressing: box.ovf
    virtualbox-iso (vagrant): Compressing: metadata.json
    virtualbox-iso (vagrant): Compressing: packertest-disk001.vmdk
Build 'virtualbox-iso' finished.

==> Builds finished. The artifacts of successful builds are:
--> virtualbox-iso: VM files in directory: output-virtualbox-iso
--> virtualbox-iso: 'virtualbox' provider box: /home/martin/vagrant/boxes/

This concludes the build of the base box.

Using the newly created base box

I’m not quite done yet though: as you may recall, I’m using (local) box versioning. A quick change to the metadata file ~/vagrant/boxes/ol7.json and a call to vagrant init later, I can use the box:
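For reference, the local metadata file follows Vagrant's box catalog format: a box name plus a list of versions, each pointing at a provider-specific .box file. The sketch below is an assumption of what ~/vagrant/boxes/ol7.json might look like after adding the new version; the .box file names are hypothetical, only the catalog path matches the output further down.

```json
{
  "name": "ol7",
  "description": "Oracle Linux 7 base box built with Packer",
  "versions": [
    {
      "version": "7.8.1",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///home/martin/vagrant/boxes/ol7-7.8.1.box"
        }
      ]
    },
    {
      "version": "7.8.2",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///home/martin/vagrant/boxes/ol7-7.8.2.box"
        }
      ]
    }
  ]
}
```

Appending a new entry to the versions array is all it takes for vagrant box outdated to notice that a newer version exists.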

$ vagrant box outdated
Checking if box 'ol7' version '7.8.1' is up to date...
A newer version of the box 'ol7' for provider 'virtualbox' is
available! You currently have version '7.8.1'. The latest is version
'7.8.2'. Run `vagrant box update` to update. 

That looks pretty straightforward, so let’s do it:

$ vagrant box update
==> server: Checking for updates to 'ol7'
    server: Latest installed version: 7.8.1
    server: Version constraints: 
    server: Provider: virtualbox
==> server: Updating 'ol7' with provider 'virtualbox' from version
==> server: '7.8.1' to '7.8.2'...
==> server: Loading metadata for box 'file:///home/martin/vagrant/boxes/ol7.json'
==> server: Adding box 'ol7' (v7.8.2) for provider: virtualbox
    server: Unpacking necessary files from: file:///home/martin/vagrant/boxes/
    server: Calculating and comparing box checksum...
==> server: Successfully added box 'ol7' (v7.8.2) for 'virtualbox'! 

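The environment itself only needs a minimal Vagrantfile referencing the versioned box. This is a sketch based on the machine name ("server") and box name ("ol7") visible in the output; the hostname and everything else are assumptions.

```ruby
# Minimal Vagrantfile sketch: one VM ("server") based on the locally
# versioned "ol7" box. Omitting config.vm.box_version makes Vagrant
# pick the latest version from the box catalog.
Vagrant.configure("2") do |config|
  config.vm.box = "ol7"

  config.vm.define "server" do |server|
    server.vm.hostname = "server"
  end
end
```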
Let’s start the environment:

$ vagrant up
Bringing machine 'server' up with 'virtualbox' provider...
==> server: Importing base box 'ol7'...
==> server: Matching MAC address for NAT networking...
==> server: Checking if box 'ol7' version '7.8.2' is up to date...
==> server: Setting the name of the VM: packertest_server_1598013258821_67878
==> server: Clearing any previously set network interfaces...
==> server: Preparing network interfaces based on configuration...
    server: Adapter 1: nat
==> server: Forwarding ports...
    server: 22 (guest) => 2222 (host) (adapter 1)
==> server: Running 'pre-boot' VM customizations...
==> server: Booting VM...
==> server: Waiting for machine to boot. This may take a few minutes...
    server: SSH address:
    server: SSH username: vagrant
    server: SSH auth method: private key
==> server: Machine booted and ready!
==> server: Checking for guest additions in VM...
==> server: Setting hostname...
==> server: Mounting shared folders...
    server: /vagrant => /home/martin/vagrant/packertest 

Happy Automating!