Tag Archives: Ansible

Vagrant Ansible Provisioner: working with the Ansible Inventory – addendum

Recently I wrote a post about one of my dream combinations, Ansible and Vagrant. After hitting the publish button I noticed that there might be a need for a part II – passing complex data types such as lists and dicts to Ansible via a Vagrantfile.

I wrote a similar post for the situation where you invoke an Ansible playbook directly from the command line. In this article the invocation of the Ansible playbook happens as part of a call to vagrant up or vagrant provision.

Setup

I’m going to reuse the Vagrantfile from the previous article:

Vagrant.configure("2") do |config|
  
  config.vm.box = "debianbase"
  config.vm.hostname = "debian"

  config.ssh.private_key_path = "/home/martin/.ssh/debianbase"

  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
    vb.memory = "1024"
    vb.name = "debian"
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/example01.yml"
    ansible.verbose = "v"
    # ...
  end
end

The directory/file layout is also identical, repeated here for convenience:

$ tree provisioning/
provisioning/
├── example01.yml
├── example02.yml
├── group_vars
│   └── all.yml
└── roles
    └── role1
        └── tasks
            └── main.yml

I used Ubuntu 22.04 (patched to 230306), with both Ansible and Vagrant as provided by the distribution:

  • Ansible 2.10.8
  • Vagrant 2.2.19

Passing lists to the Ansible playbook

This time however I’d like to pass a list to the playbook indicating which block devices to partition. The variable is a list with one or more elements; the Ansible code iterates over the list and performs the action on the current item. Here’s the code from the playbook example01.yml:

- hosts: default

  tasks: 
  - ansible.builtin.debug:
      var: blkdevs

  - name: print block devices to be partitioned
    ansible.builtin.debug:
      msg: If this was a call to community.general.parted I'd partition {{ item }} now
    loop: "{{ blkdevs }}"

The question is: how can I pass a list to the playbook? As with the scalar data types I wrote about yesterday, you use host_vars in the Vagrantfile:

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/example01.yml"
    ansible.verbose = "v"
    ansible.host_vars = {
      "default" => {
        "blkdevs" => '[ "/dev/sdb", "/dev/sdc" ]'
      }
    }
  end

Note the use of single and double quotes! Without quotes around the entire right-hand side expression, Ansible will complain about a syntax error in the dynamically generated inventory. The provisioner does what it’s supposed to do:

PLAY [default] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [ansible.builtin.debug] ***************************************************
ok: [default] => {
    "blkdevs": [
        "/dev/sdb",
        "/dev/sdc"
    ]
}

TASK [print block devices to be partitioned] ***********************************
ok: [default] => (item=/dev/sdb) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdb now"
}
ok: [default] => (item=/dev/sdc) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdc now"
}

PLAY RECAP *********************************************************************
default                    : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 
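
For reference, this is roughly what the relevant line in the auto-generated inventory (.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory) ends up looking like; the connection variables are a sketch and will differ on your machine:

default ansible_host=127.0.0.1 ansible_port=2222 ansible_user=vagrant blkdevs='[ "/dev/sdb", "/dev/sdc" ]'

Without the outer single quotes the embedded double quotes and the comma would break the key=value parsing of the INI-style inventory, which is the syntax error mentioned above.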

Passing Dicts to the Ansible playbook

Passing a dict works exactly the same way, which is why I feel like I can keep this section short. The Vagrantfile uses the same host_var, blkdevs, but this time it’s a dict with keys indicating the intended use of the block devices. Each key is associated with a list of values containing the actual block device(s). Lists are perfectly fine even if they only contain a single item ;)

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/example02.yml"
    ansible.verbose = "v"
    ansible.host_vars = {
      "default" => {
        "blkdevs" => 
          '{ "binaries": ["/dev/sdb"], "database": ["/dev/sdc", "/dev/sdd"], "fast_recovery_area": ["/dev/sde"] }'
      }
    }
  end

The playbook iterates over the list of block devices provided as the dict’s values:

- hosts: default
  become: true

  tasks: 
  - name: format block devices for Oracle binaries
    ansible.builtin.debug:
      msg: If this was a call to community.general.parted I'd partition {{ item }} now
    loop: "{{ blkdevs.binaries }}"
  
  - name: format block devices for Oracle database files
    ansible.builtin.debug:
      msg: If this was a call to community.general.parted I'd partition {{ item }} now
    loop: "{{ blkdevs.database }}"
  
  - name: format block devices for Oracle database Fast Recovery Area
    ansible.builtin.debug:
      msg: If this was a call to community.general.parted I'd partition {{ item }} now
    loop: "{{ blkdevs.fast_recovery_area }}"

Using lists as the dict’s values avoids having to distinguish between a single block device like /dev/sdc and multiple block devices like /dev/sdc and /dev/sdd.

Et voilà! Here’s the result:

PLAY [default] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [format block devices for Oracle binaries] ********************************
ok: [default] => (item=/dev/sdb) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdb now"
}

TASK [format block devices for Oracle database files] **************************
ok: [default] => (item=/dev/sdc) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdc now"
}
ok: [default] => (item=/dev/sdd) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdd now"
}

TASK [format block devices for Oracle database Fast Recovery Area] *************
ok: [default] => (item=/dev/sde) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sde now"
}

PLAY RECAP *********************************************************************
default                    : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 

Happy automating!

Ansible tips’n’tricks: gather facts in an ad-hoc fashion

There are times when I really need to get some ansible_facts from a host to work out details about, say, the network card, storage, or Linux distribution before I can continue coding. And I don’t want to, or have the patience to, add a debug step to my Ansible playbook either :) Thankfully Ansible has just the right tool for this case, called ad-hoc command execution.

Since I can never remember how to gather ansible_facts I decided to write it down; hopefully this saves me (and you!) five minutes next time.

Setup

I am using ansible-5.9.0-1.fc36.noarch as provided by Fedora 36 (which includes ansible-core-2.12.10-1.fc36.noarch) on Linux x86-64. Vagrant 2.3.4 has been provided by the HashiCorp repository.

Gathering facts: using Vagrant’s dynamic Ansible inventory

If you are using the Ansible provisioner with your Vagrant box, Vagrant will create a suitable inventory for you. Assuming there is only a single VM defined in your Vagrantfile you can use the following command to gather facts:

ansible -i .vagrant/provisioners/ansible/inventory/ default -m setup
default | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "10.0.2.15"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::a00:27ff:fec0:f04e"
        ],
        "ansible_apparmor": {
            "status": "enabled"

If you have multiple VMs defined in your Vagrantfile you need to specify either all or the name of the VM as defined in the inventory.
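
For example, assuming a second machine named web in the Vagrantfile (a made-up name for illustration), either of these would work:

ansible -i .vagrant/provisioners/ansible/inventory/ all -m setup
ansible -i .vagrant/provisioners/ansible/inventory/ web -m setup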

Gathering facts without an inventory

If you have a VM you can SSH to, there is an alternative option available to you: simply specify the IP address or DNS name of the VM as the Ansible inventory, followed by a ",", like so:

ansible -i nginx, nginx -u ansible --private-key ~/.ssh/ansible -m ansible.builtin.setup | head
nginx | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "10.0.2.15",
            "192.168.56.43"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::a00:27ff:fe8d:7f5f",
            "fe80::a00:27ff:fe37:33f6"
        ],
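
If you are only after a specific fact, the setup module’s filter parameter (it accepts shell-style wildcards) keeps the output manageable; for example:

ansible -i nginx, nginx -u ansible --private-key ~/.ssh/ansible -m ansible.builtin.setup -a 'filter=ansible_all_ipv4_addresses'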

That’s all there is to gathering ansible_facts in an ad-hoc fashion. Happy automating!

Vagrant Ansible Provisioner: working with the Ansible Inventory

Vagrant and Ansible are a great match: using Vagrant it’s very easy to work with virtual machines. Creating, updating, and removing VMs is just a short command away. Vagrant provides various provisioners to configure the VM, and Ansible is one of these. This article covers the ansible provisioner as opposed to ansible_local.

Earlier articles I wrote might be of interest in this context.

The post was written using Ubuntu 22.04 (patched to 230306); I used Ansible and Vagrant as provided by the distribution:

  • Ansible 2.10.8
  • Vagrant 2.2.19

Configuring the Ansible Inventory

Very often the behaviour of an Ansible playbook is controlled using variables. Providing variables to Ansible from a Vagrantfile is quite neat and is the subject of this article.

Let’s have a look at the most basic Vagrantfile:

Vagrant.configure("2") do |config|
  
  config.vm.box = "debianbase"
  config.vm.hostname = "debian"

  config.ssh.private_key_path = "/home/martin/.ssh/debianbase"

  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
    vb.memory = "2048"
    vb.name = "debian"
  end
  
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/blogpost.yml"
    ansible.verbose = "v"
  end
end

I frequently use a flag indicating whether the Ansible script should reboot the VM after all packages have been updated. Within the provisioning folder I store group_vars, roles, and the main playbook as per the recommendation in the docs:

$ tree provisioning/
provisioning/
├── blogpost.yml
├── group_vars
│   └── all.yml
└── roles
    └── role1
        └── tasks
            └── main.yml

All global variables I don’t necessarily expect to change are stored in group_vars/all.yml. This includes the reboot_flag flag, which defaults to false. The playbook does not need to list the variable in its own vars section; in fact, doing so would grant the variable higher precedence and my way of providing a variable to Ansible via Vagrant would fail. Here is the playbook:

- hosts: default
  become: true

  tasks: 
  - debug:
      var: reboot_flag

  - name: reboot
    ansible.builtin.reboot:
    when: reboot_flag | bool

Since rebooting can be time consuming I don’t want to do it by default; the trade-off is that I have to reboot manually later.
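
For reference, a sketch of the relevant part of provisioning/group_vars/all.yml in this setup:

# provisioning/group_vars/all.yml
reboot_flag: false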

Let’s see what happens when the VM is provisioned:

PLAY [default] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [debug] *******************************************************************
ok: [default] => {
    "reboot_flag": false
}

TASK [reboot] ******************************************************************
skipping: [default] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

PLAY RECAP *********************************************************************
default                    : ok=2    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Overriding variables

In case I want to override the flag I can do so without touching my Ansible playbook only by changing the Vagrantfile. Thanks to host_vars I can pass variables to Ansible via the inventory. Here’s the changed section in the Vagrantfile:

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/blogpost.yml"
    ansible.verbose = "v"
    ansible.host_vars = {
      "default" => {
        "reboot_flag" => true
      }
    }
  end

All host_vars for my default VM are then appended to the inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
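
The appended entry looks roughly like this (a sketch; the connection variables Vagrant writes are abbreviated and will differ on your machine):

default ansible_host=127.0.0.1 ansible_port=2222 ansible_user=vagrant reboot_flag=true

Note that the value arrives as the string "true", which is why the playbook pipes it through the bool filter.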

Next time I run vagrant provision the flag is changed to true, and the VM is rebooted:

PLAY [default] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [debug] *******************************************************************
ok: [default] => {
    "reboot_flag": "true"
}

TASK [reboot] ******************************************************************
changed: [default] => {
    "changed": true,
    "elapsed": 20,
    "rebooted": true
}

PLAY RECAP *********************************************************************
default                    : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 

Summary

Vagrant offers a very neat way of creating an Ansible inventory on the fly. If your Ansible playbooks are written so that different execution paths and options are configurable via variables, a single playbook becomes highly flexible and can be used for many things. In the age of version control it’s very convenient not having to touch the source code of an Ansible playbook, as changing it might interfere with other projects. Variables passed at runtime are much better suited to creating flexible automation scripts.

Vagrant: always provision virtual machines

Since Spectre and Meltdown (two infamous side-channel attacks on CPUs) became public I have been thinking about better, more secure ways to browse the web. When I read that a commercial vendor for operating systems created a solution where a browser is started in a disposable sandbox that gets discarded when you exit the browser session, I thought of ways to implement this feature myself.

Since I’m a great fan of both Virtualbox and Vagrant I decided to use the combination of the two to get this done. My host runs Ubuntu 22.04 LTS, and I’m using Vagrant 2.2.19 (the one shipping with the distribution, it’s not the latest version!) as well as Virtualbox 6.1.40. Whilst the solution presented in this article provides a more secure (notice how I didn’t claim this to be secure ;) ) approach to web browsing it doesn’t keep the host up to date. Security updates for the host O/S and hypervisor (read: Virtualbox) are crucial, too.

Please be super-careful when thinking of implementing a strategy where provisioners always run: it can and potentially will break your system! For most use cases, provisioning a VM each time it starts is not what you want.

Building a “browser” VM

I started off by creating a small “browser” VM with a minimal GUI and a web browser – nothing else – and registered this system as a vagrant box. This is the first step towards my solution: being able to create/tear down the sandbox. Not perfect, and there are more secure ways, but I’m fine with my approach.

The one thing that’s necessary though is updating the VM, ideally performed automatically, at each start. Vagrant provisioners can help with that.

Defining one or more provisioners in the Vagrantfile is a great way to initially configure a VM when it is created for the first time and works really well. Provisioners thankfully do NOT run with each subsequent start of the VM. If they were run each time it would probably be a disaster for all of my other Vagrant VMs. For my sandbox browser VM though I want all packages to be updated automatically.

Switching from on-demand provisioning to automatic provisioning

As I said, VMs are provisioned once by default; subsequent starts won’t run the provisioners, as you can see in the output:

$ vagrant up

[output of vagrant bringing my VM up skipped]

==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.

The section detailing provisioners in my Vagrantfile is super simple because it has to run on Linux and Windows and I’m too lazy to install Ansible on my Windows box. The above output was caused by the following directive:

Vagrant.configure("2") do |config|

  # ... more directives ...

  config.vm.provision "shell",
    inline: "sudo apt-get update --error-on=any && sudo apt-get dist-upgrade -y"

  # ... even more directives ...

Looking at the command you may have guessed that this is a Debian-based VM, and I’m using neither Flatpak nor Snap packages. All packages in this environment are DEBs, which is easier for me to maintain.

To change the provision section to always run, simply tell it to:

Vagrant.configure("2") do |config|

  # ... more directives ...

  config.vm.provision "shell",
    inline: "sudo apt-get update --error-on=any && sudo apt-get dist-upgrade -y",
    run: "always"

Next time the vagrant VM starts, the provisioner marked as “run: always” will be triggered, even though the VM wasn’t created from scratch:

$ vagrant up

[output of vagrant bringing my VM up skipped once more]

==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
==> default: Running provisioner: shell...
    default: Running: inline script

[output of apt-get omitted for brevity]

There you go! I could have achieved the same by telling vagrant to provision the VM using the --provision flag, but I’m sure I would have forgotten that half the time.
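
For completeness, that invocation is simply:

$ vagrant up --provision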

Anyone using Ansible can benefit from running provisioners always, too:

Vagrant.configure("2") do |config|

  # ... more directives ...

  config.vm.provision "ansible", run: "always" do |ansible|
      ansible.playbook = "/path/to/ansible/playbook.yaml"
  end

Next time the VM is started by vagrant the Ansible playbook will be executed.

Summary

Vagrant can be instructed to run provisioners always if the use case merits it. For the most part it’s not advisable to run provisioners each time the VM comes up as it might well mess with the installation already present.

Configuring a VM using Ansible via the OCI Bastion Service

In my previous post I wrote about the creation of a Bastion Service using Terraform. As I’m incredibly lazy I prefer to configure the system pointed at by my Bastion Session with a configuration management tool. If you followed my blog for a bit you might suspect that I’ll use Ansible for that purpose. Of course I do! The question is: how do I configure the VM accessible via a Bastion Session?

Background

Please have a look at my previous post for a description of the resources created. In a nutshell the Terraform code creates a Virtual Cloud Network (VCN). There is only one private subnet in the VCN. A small VM without direct access to the Internet resides in the private subnet. Another set of Terraform code creates a bastion session allowing me to connect to the VM.

I wrote this post on Ubuntu 20.04 LTS using ansible 4.8/ansible-core 2.11.6 by the way. From what I can tell these were current at the time of writing.

Connecting to the VM via a Bastion Session

The answer to “how does one connect to a VM via a Bastion Session?” isn’t terribly difficult once you know how. The clue to my solution is the SSH connection string shown by the Terraform output variable; it prints the contents of oci_bastion_session.demo_bastionsession.ssh_metadata.command:

$ terraform output
connection_details = "ssh -i <privateKey> -o ProxyCommand=\"ssh -i <privateKey> -W %h:%p -p 22 ocid1.bastionsession.oc1.eu-frankfurt-1.a...@host.bastion.eu-frankfurt-1.oci.oraclecloud.com\" -p 22 opc@10.0.2.39"

If I can connect to the VM via SSH I surely can do so via Ansible. As per the screen output above you can see that the connection to the VM relies on a proxy in the form of the bastion session. See man 5 ssh_config for details. Make sure to provide the correct SSH keys in both locations as specified in the Terraform code. I like to think of the proxy session as a Jump Host to my private VM (its internal IP is 10.0.2.39). And yes, I am aware of alternative options; the one shown above, however, is the most compatible (to my knowledge).
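
If you connect frequently, the ProxyCommand can also live in ~/.ssh/config instead of the command line. A sketch, with the host alias and key paths being examples to be substituted with your own:

Host oci-private-vm
    HostName 10.0.2.39
    User opc
    IdentityFile ~/.oci/oci_rsa
    ProxyCommand ssh -i ~/.oci/oci_rsa -W %h:%p -p 22 ocid1.bastionsession.oc1.eu-frankfurt-1.a...@host.bastion.eu-frankfurt-1.oci.oraclecloud.com

With that in place, ssh oci-private-vm is all that’s needed.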

Creating an Ansible Inventory and running a playbook

Even though it’s not the most flexible option I’m a great fan of using Ansible inventories. The use of an inventory saves me from typing a bunch of options on the command line.

Translating the Terraform output into the inventory format, this is what worked for me:

[blogpost]
privateinst ansible_host=10.0.2.39 ansible_user=opc ansible_ssh_common_args='-o ProxyCommand="ssh -i ~/.oci/oci_rsa -W %h:%p -p 22 ocid1.bastionsession.oc1.eu-frankfurt-1.a...@host.bastion.eu-frankfurt-1.oci.oraclecloud.com"'

Let’s run some Ansible code! Consider this playbook:

- hosts: blogpost
  tasks:
  - name: say hello
    ansible.builtin.debug:
      msg: hello from {{ ansible_hostname }}

With the inventory set, it’s now possible to run the playbook:

$ ansible-playbook -vi inventory.ini blogpost.yml 
Using /tmp/ansible/ansible.cfg as config file

PLAY [blogpost] *********************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************
ok: [privateinst]

TASK [say hello] ********************************************************************************************************
ok: [privateinst] => {}

MSG:

hello from privateinst

PLAY RECAP **************************************************************************************************************
privateinst                : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

The playbook is of course very simple, but it can easily be extended. The tricky bit was establishing the connection; once that’s done, the sky is the limit!

Ansible tips’n’tricks: configuring the Ansible Dynamic Inventory for OCI – Oracle Linux 7

I have previously written about the configuration of the Ansible Dynamic Inventory for OCI. The aforementioned article focused on Debian, and I promised an update for Oracle Linux 7. You are reading it now.

The biggest difference between the older post and this one is the ability to use YUM in Oracle Linux 7. Rather than manually installing Ansible, the Python SDK, and the OCI collection from Ansible Galaxy, you can make use of the package management built into Oracle Linux 7 and Oracle-provided packages.

Warning about the software repositories

All the packages referred to later in the article are either provided by Oracle’s Extra Packages for Enterprise Linux (EPEL) repository or the development repo. Both repositories are listed in a section labelled “Packages for Test and Development“ in Oracle’s yum server. As per https://yum.oracle.com/oracle-linux-7.html, these packages come with the following warning:

Note: The contents in the following repositories are for development purposes only. Oracle suggests these not be used in production.

This is really important! Please make sure you understand the implications for your organisation. If this caveat is a show-stopper for you, please refer to the manual installation of the tools in my earlier article for an alternative approach.

I’m ok with the restriction as it’s my lab anyway, with myself as the only user. No one else to blame if things go wrong :)

Installing the software

You need to install a few packages from Oracle’s development repositories if you accept the warning quoted above. The first step is to enable the necessary repositories:

sudo yum-config-manager --enable ol7_developer_EPEL
sudo yum-config-manager --enable ol7_developer

Once that’s done you can install the OCI Ansible Collection. This package pulls all the other RPMs I need as dependencies. The following output was generated on March 16, 2023:

[opc@dynInv ~]$ sudo yum install oci-ansible-collection

...

--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                       Arch          Version                Repository                 Size
====================================================================================================
Installing:
 oci-ansible-collection        x86_64        4.15.0-1.el7           ol7_developer              19 M
Installing for dependencies:
 ansible-python3               noarch        2.9.27-1.el7           ol7_developer_EPEL         16 M
 python3-jmespath              noarch        0.10.0-1.el7           ol7_addons                 42 k
 python36-jinja2               noarch        2.11.1-1.el7           ol7_developer_EPEL        237 k
 python36-markupsafe           x86_64        0.23-3.0.1.el7         ol7_developer_EPEL         32 k
 python36-paramiko             noarch        2.1.1-0.10.el7         ol7_developer_EPEL        272 k
 python36-pyasn1               noarch        0.4.7-1.el7            ol7_developer             173 k
 python36-pyyaml               x86_64        5.4.1-1.0.1.el7        ol7_addons                205 k
 sshpass                       x86_64        1.06-1.el7             ol7_developer_EPEL         21 k
Updating for dependencies:
 python36-oci-sdk              x86_64        2.93.1-1.el7           ol7_addons                 26 M

Transaction Summary
====================================================================================================
Install  1 Package  (+8 Dependent packages)
Upgrade             ( 1 Dependent package)

Total download size: 62 M
Is this ok [y/d/N]: 

Once all packages are installed you should be in a position to test the configuration. The article assumes the OCI Python SDK is already configured. If not, head over to the documentation for instructions on how to do so.
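
For orientation, the SDK typically reads its configuration from ~/.oci/config; a sketch of what such a file looks like (all values are placeholders):

[DEFAULT]
user=ocid1.user.oc1..a...
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..a...
region=eu-frankfurt-1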

Verifying the installation

Out of habit I run ansible --version once the software is installed to make sure everything works as expected. Right after the installation I tried exactly that, but Ansible seemingly wasn’t present:

[opc@dyninv ~]$ which ansible
/usr/bin/which: no ansible in (/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/opc/.local/bin:/home/opc/bin)

It is present though, and it took me a minute to understand the way Oracle packaged Ansible: Ansible/Python 3 is found in ansible-python3 instead of ansible. A quick check of the package’s contents revealed that a suffix was added to the executables, for example:

[opc@dyninv ~]$ ansible-3 --version
ansible-3 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/opc/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible-3
  python version = 3.6.8 (default, Nov 18 2021, 10:07:16) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44.0.3)]
[opc@dyninv ~]$ 

An important detail can be found in the last line: the Python version is reported to be 3.6.8, at least it was at the time of writing.

Testing the Dynamic Inventory

Before going into details about the dynamic inventory I’d like to repeat a warning from my older post:

Remember that the use of the Dynamic Inventory plugin is a great time saver, but comes with a risk. If you aren’t careful, you can end up running playbooks against far too many hosts. Clever Identity and Access Management (IAM) and the use of filters in the inventory are a must to prevent accidents. And don’t ever use hosts: all in your playbooks! Principle of least privilege is key.

Ansible configuration

With the hard work completed and out of the way it’s time to test the dynamic inventory. First of all I need to tell Ansible to enable the inventory plugin from the Oracle collection. I’m doing this in ~/.ansible.cfg:

[opc@dyninv ansible]$ cat ~/.ansible.cfg 
[defaults]
stdout_callback = debug

[inventory]
enable_plugins = oracle.oci.oci

Be careful though as this is the user’s global setting! A safer way is to include the configuration file in the repository where it is supposed to be used. This way no other project is affected. Since this is a new VM it doesn’t matter for me and I want to use the new host as one of the main provisioning systems anyway – every Ansible playbook to be run from here requires the dynamic inventory.
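
The project-local variant is simply the same inventory section in an ansible.cfg next to the playbooks, since Ansible also looks for a configuration file in the current working directory:

# ./ansible.cfg in the project directory
[inventory]
enable_plugins = oracle.oci.oci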

The next file to be created is the dynamic inventory file. It needs to be named following the Ansible convention:

filename.oci.yml
filename.oci.yaml

You are only allowed to change the first part (“filename”) or else you get an error. The example file contains the following lines, limiting the output to a particular compartment and set of tags, following my own advice from above.

plugin: oracle.oci.oci

regions:
- eu-frankfurt-1

compartments:
- compartment_ocid: "ocid1.compartment.oc1..aaaa..."
  fetch_hosts_from_subcompartments: false

hostname_format: fqdn

filters:
- defined_tags:
    project:
      name: "simple-app"

debug: False

The filter as defined by the filters: keyword is very important! It takes a list of possible filter criteria including defined tags used above. I want to be absolutely sure my inventory only returns those hosts tagged with my project designation, simple-app. That’s one of the reasons why tagging is so important in any cloud project.

With the setup complete I can graph the inventory:

[opc@dyninv ansible]$ ansible-inventory-3 --inventory dynInv.oci.yml --graph
...
@all:
  |--@IHsr_EU-FRANKFURT-1-AD-2:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@IHsr_EU-FRANKFURT-1-AD-3:
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |--@all_hosts:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@ougdemo-department:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@project#name=simple-app:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@region_eu-frankfurt-1:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@tag_role=appserver:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |--@tag_role=bastionhost:
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@ungrouped:

Happy Automating!

Summary

It’s quite a time saver not having to install all components of the toolchain yourself. By pulling packages from Oracle’s yum repositories I can also count on updates being made available, providing benefits such as security patches and bug fixes.

Installing Ansible on Oracle Linux 8 for test and development use

I have previously written about installing Ansible on Oracle Linux 7 for non-production use. A similar approach can be taken to install Ansible on Oracle Linux 8. This is a quick post to show you how I did that in my Vagrant (lab) VM.

As is the case with Oracle Linux 7, the Extra Packages for Enterprise Linux (EPEL) repository is listed in a section labelled “Packages for Test and Development“. As per http://yum.oracle.com/oracle-linux-8.html, these packages come with the following warning:

Note: The contents in the following repositories are for development purposes only. Oracle suggests these not be used in production.

This is really important!

If you are ok with the limitation I just quoted from Oracle’s YUM server, please read on. If not, head back to the official Ansible documentation and use a different installation method instead. I only use Ansible in my own lab and therefore don’t mind.

Enabling the EPEL repository

The first step is to enable the EPEL repository. For quite some time now, Oracle has split the monolithic YUM configuration file into smaller, more manageable pieces. For EPEL, you need to install oracle-epel-release-el8.x86_64:

[vagrant@dev ~]$ sudo dnf info oracle-epel-release-el8.x86_64
Last metadata expiration check: 1:51:09 ago on Wed 10 Feb 2021 09:30:41 UTC.
Installed Packages
Name         : oracle-epel-release-el8
Version      : 1.0
Release      : 2.el8
Architecture : x86_64
Size         : 18 k
Source       : oracle-epel-release-el8-1.0-2.el8.src.rpm
Repository   : @System
From repo    : ol8_baseos_latest
Summary      : Extra Packages for Enterprise Linux (EPEL) yum repository
             : configuration
License      : GPLv2
Description  : This package contains the  Extra Packages for Enterprise Linux
             : (EPEL) yum repository configuration.

[vagrant@dev ~]$  

A quick sudo dnf install oracle-epel-release-el8 will install the package and create the EPEL repository configuration. At this stage the new repository is known, but still disabled. This is what it looked like on my (custom built) Oracle Linux 8.3 Vagrant box, booted into UEK 6:

[vagrant@dev ~]$ sudo dnf repolist
repo id           repo name
ol8_UEKR6         Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)
ol8_appstream     Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest Oracle Linux 8 BaseOS Latest (x86_64)
[vagrant@dev ~]$  

If you are ok with the caveat mentioned earlier (development purpose, no production use…, see above) you can enable the EPEL repository:

[vagrant@dev ~]$ sudo yum-config-manager --enable ol8_developer_EPEL
[vagrant@dev ~]$ sudo dnf repolist
repo id            repo name
ol8_UEKR6          Latest Unbreakable Enterprise Kernel Release 6 for Oracle Linux 8 (x86_64)
ol8_appstream      Oracle Linux 8 Application Stream (x86_64)
ol8_baseos_latest  Oracle Linux 8 BaseOS Latest (x86_64)
ol8_developer_EPEL Oracle Linux 8 EPEL Packages for Development (x86_64)
[vagrant@dev ~]$   

The output of dnf repolist confirms that EPEL is now enabled.

Installing Ansible

With the repository enabled you can search for Ansible:

[vagrant@dev ~]$ sudo dnf info ansible
Last metadata expiration check: 0:00:10 ago on Wed 10 Feb 2021 11:26:57 UTC.
Available Packages
Name         : ansible
Version      : 2.9.15
Release      : 1.el8
Architecture : noarch
Size         : 17 M
Source       : ansible-2.9.15-1.el8.src.rpm
Repository   : ol8_developer_EPEL
Summary      : SSH-based configuration management, deployment, and task
             : execution system
URL          : http://ansible.com
License      : GPLv3+
Description  : Ansible is a radically simple model-driven configuration
             : management, multi-node deployment, and remote task execution
             : system. Ansible works over SSH and does not require any software
             : or daemons to be installed on remote nodes. Extension modules can
             : be written in any language and are transferred to managed
             : machines automatically.

[...]

[vagrant@dev ~]$  

Mind you, 2.9.15 was the current release at the time of writing. If you hit the blog by means of a search engine, the version will most likely be different. Let’s install Ansible:

[vagrant@dev ~]$ sudo dnf install ansible ansible-doc
Last metadata expiration check: 0:01:06 ago on Wed 10 Feb 2021 11:26:57 UTC.
Dependencies resolved.
================================================================================
 Package            Arch   Version                     Repository          Size
================================================================================
Installing:
 ansible            noarch 2.9.15-1.el8                ol8_developer_EPEL  17 M
 ansible-doc        noarch 2.9.15-1.el8                ol8_developer_EPEL  12 M
Installing dependencies:
 python3-babel      noarch 2.5.1-5.el8                 ol8_appstream      4.8 M
 python3-jinja2     noarch 2.10.1-2.el8_0              ol8_appstream      538 k
 python3-jmespath   noarch 0.9.0-11.el8                ol8_appstream       45 k
 python3-markupsafe x86_64 0.23-19.el8                 ol8_appstream       39 k
 python3-pip        noarch 9.0.3-18.el8                ol8_appstream       20 k
 python3-pytz       noarch 2017.2-9.el8                ol8_appstream       54 k
 python3-pyyaml     x86_64 3.12-12.el8                 ol8_baseos_latest  193 k
 python3-setuptools noarch 39.2.0-6.el8                ol8_baseos_latest  163 k
 python36           x86_64 3.6.8-2.module+el8.3.0+7694+550a8252
                                                       ol8_appstream       19 k
 sshpass            x86_64 1.06-9.el8                  ol8_developer_EPEL  28 k
Enabling module streams:
 python36                  3.6                                                 

Transaction Summary
================================================================================
Install  12 Packages

Total download size: 35 M
Installed size: 459 M
Is this ok [y/N]: y

[...]  

Installed:
  ansible-2.9.15-1.el8.noarch                                                   
  ansible-doc-2.9.15-1.el8.noarch                                               
  python3-babel-2.5.1-5.el8.noarch                                              
  python3-jinja2-2.10.1-2.el8_0.noarch                                          
  python3-jmespath-0.9.0-11.el8.noarch                                          
  python3-markupsafe-0.23-19.el8.x86_64                                         
  python3-pip-9.0.3-18.el8.noarch                                               
  python3-pytz-2017.2-9.el8.noarch                                              
  python3-pyyaml-3.12-12.el8.x86_64                                             
  python3-setuptools-39.2.0-6.el8.noarch                                        
  python36-3.6.8-2.module+el8.3.0+7694+550a8252.x86_64                          
  sshpass-1.06-9.el8.x86_64                                                     

Complete!
[vagrant@dev ~]$   

A quick test reveals the software works as advertised:

[vagrant@dev ~]$ ansible --version
ansible 2.9.15
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Nov  5 2020, 18:03:20) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5.0.1)]
[vagrant@dev ~]$  

It seems that Ansible has been installed successfully.

Ansible tips’n’tricks: run select parts of a playbook using tags

I have recently re-discovered an Ansible feature I haven’t used in a while: tagging. Ansible allows you to define tags in various places of your playbook. On its own that wouldn’t be terribly useful, except that you can pass tags to ansible-playbook, causing the interpreter to selectively run tasks tagged appropriately.

My example uses Ansible 2.9.6+dfsg-1 as it was provided by Ubuntu 20.04 LTS.

Tagging Ansible tasks

Here is the code of my somewhat over-simplified playbook for this blog post:

- hosts: testvm
  tasks:
  - block:
    - name: tag1 step 1
      debug:
        msg: I am part of tag1
    - name: tag1 step 2
      debug:
        msg: And so am I
    tags: tag1

  - block:
    - name: tag2 complete
      debug:
        msg: I am tag 2
    tags: tag2

  - name: tag3 step 1
    debug:
      msg: this is tag3 step 1
    tags: tag3

  - name: tag3 step 2 
    debug:
      msg: this is tag3 step 2
    tags: tag3

  - name: common task for tag2 and tag3
    debug:
      msg: I am common to tag2 and tag3
    tags:
    - tag2
    - tag3

  - name: this needs to be executed regardless
    debug:
      msg: I am always run, irrespective of the tag
    tags: always

  - name: untagged task
    debug:
      msg: I am an untagged task  

To keep the post readable I’ll focus on tagging tasks. Another (future) post will discuss the use of tags for includes and other Ansible entities.

Looking at the Ansible playbook you can see that all but one task is tagged. Tags 1 and 2 apply to blocks, whereas tag 3 is applied to multiple individual tasks. The special always tag has been applied to the penultimate task. Finally, there is also a task carrying two tags.

Execute the play without specifying tags

If I run this playbook the usual way, it will execute all steps, as shown in the output:

$ ansible-playbook -vi inventory.ini cond_exec.yml
Using /etc/ansible/ansible.cfg as config file

PLAY [testvm] ******************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************
ok: [localhost]

TASK [tag1 step 1] *************************************************************************************************
ok: [localhost] => {
    "msg": "I am part of tag1"
}

TASK [tag1 step 2] *************************************************************************************************
ok: [localhost] => {
    "msg": "And so am I"
}

TASK [tag2 complete] ***********************************************************************************************
ok: [localhost] => {
    "msg": "I am tag 2"
}

TASK [tag3 step 1] *************************************************************************************************
ok: [localhost] => {
    "msg": "this is tag3 step 1"
}

TASK [tag3 step 2] *************************************************************************************************
ok: [localhost] => {
    "msg": "this is tag3 step 2"
}

TASK [common task for tag2 and tag3] *******************************************************************************
ok: [localhost] => {
    "msg": "I am common to tag2 and tag3"
}

TASK [this needs to be executed regardless] ************************************************************************
ok: [localhost] => {
    "msg": "I am always run, irrespective of the tag"
}

TASK [untagged task] ***********************************************************************************************
ok: [localhost] => {
    "msg": "I am an untagged task"
}

PLAY RECAP *********************************************************************************************************
localhost                  : ok=9    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 

If you have executed playbooks before, this should look familiar.

Passing a tag to ansible-playbook

Let’s assume I want to re-run the playbook, limited to those tasks tagged tag2. To do so, I have to provide the tag to ansible-playbook, like so:

$ ansible-playbook -vi inventory.ini cond_exec.yml --tags=tag2
Using /etc/ansible/ansible.cfg as config file

PLAY [testvm] ******************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************
ok: [localhost]

TASK [tag2 complete] ***********************************************************************************************
ok: [localhost] => {
    "msg": "I am tag 2"
}

TASK [common task for tag2 and tag3] *******************************************************************************
ok: [localhost] => {
    "msg": "I am common to tag2 and tag3"
}

TASK [this needs to be executed regardless] ************************************************************************
ok: [localhost] => {
    "msg": "I am always run, irrespective of the tag"
}

PLAY RECAP *********************************************************************************************************
localhost                  : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 

Looking at the execution you can see that any task tagged tag2 is executed, including the common task for tag2 and tag3. There is one more task in the output: it runs thanks to the special always tag. The always tag has a counterpart, never, although I’m still trying to find a use case for it.

Notice how the un-tagged task isn’t executed.
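
A possible use case for never (sketched here, this task is not part of the playbook above) is guarding a task that should only run when one of its tags is requested explicitly:

  - name: destructive cleanup, never runs unless explicitly requested
    debug:
      msg: this only runs with --tags=cleanup (or --tags=never)
    tags:
    - never
    - cleanup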

Providing more than 1 tag to ansible-playbook

I’m not limited to a single tag; I can specify multiple tags as well. Let’s run the play for tags tag2 and tag3:

$ ansible-playbook -vi inventory.ini cond_exec.yml --tags=tag2,tag3
Using /etc/ansible/ansible.cfg as config file

PLAY [testvm] ******************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************
ok: [localhost]

TASK [tag2 complete] ***********************************************************************************************
ok: [localhost] => {
    "msg": "I am tag 2"
}

TASK [tag3 step 1] *************************************************************************************************
ok: [localhost] => {
    "msg": "this is tag3 step 1"
}

TASK [tag3 step 2] *************************************************************************************************
ok: [localhost] => {
    "msg": "this is tag3 step 2"
}

TASK [common task for tag2 and tag3] *******************************************************************************
ok: [localhost] => {
    "msg": "I am common to tag2 and tag3"
}

TASK [this needs to be executed regardless] ************************************************************************
ok: [localhost] => {
    "msg": "I am always run, irrespective of the tag"
}

PLAY RECAP *********************************************************************************************************
localhost                  : ok=6    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  

Tasks tagged tag2 and tag3 are run, as is the task flagged to always run. This wasn’t particularly surprising considering the output of the previous execution.

Skipping a tag

Ansible also offers the option to skip one or more tags. It is not, however, simply the inverse of specifying which tags to run. Here is proof: I’m asking Ansible to skip tag1, which, if --skip-tags were the inverse operation, should produce the same output as the previous run. It doesn’t quite manage to do so:

$ ansible-playbook -vi inventory.ini cond_exec.yml --skip-tags=tag1
Using /etc/ansible/ansible.cfg as config file

PLAY [testvm] ******************************************************************************************************

TASK [Gathering Facts] *********************************************************************************************
ok: [localhost]

TASK [tag2 complete] ***********************************************************************************************
ok: [localhost] => {
    "msg": "I am tag 2"
}

TASK [tag3 step 1] *************************************************************************************************
ok: [localhost] => {
    "msg": "this is tag3 step 1"
}

TASK [tag3 step 2] *************************************************************************************************
ok: [localhost] => {
    "msg": "this is tag3 step 2"
}

TASK [common task for tag2 and tag3] *******************************************************************************
ok: [localhost] => {
    "msg": "I am common to tag2 and tag3"
}

TASK [this needs to be executed regardless] ************************************************************************
ok: [localhost] => {
    "msg": "I am always run, irrespective of the tag"
}

TASK [untagged task] ***********************************************************************************************
ok: [localhost] => {
    "msg": "I am an untagged task"
}

PLAY RECAP *********************************************************************************************************
localhost                  : ok=7    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  

Although the block tagged tag1 didn’t run, the un-tagged task did! So if you want absolute control, you should not use the --skip-tags option as it might produce undesired results with mixed (tagged and un-tagged) playbooks. Unless of course you apply tags to all the elements in your playbook. Which requires more discipline than I can muster.

Summary

Tags are an interesting, optional feature of the language specification allowing you to run only select parts of playbooks. I’ll try to write another post detailing the use of tags with dynamic includes.

Happy automating!

Ansible tips’n’tricks: using the OCI Dynamic Inventory Plugin in playbooks

After having covered how to configure the Ansible Dynamic Inventory Plugin for Oracle Cloud Infrastructure (OCI) in the previous posts, it’s now time to get it to work with my simple-app cloud application. Before I go into more detail, I’d like to add the usual caveat first.

Caveat

As I said in the previous post, the Dynamic Inventory Plugin is a great time saver, especially when used as part of build pipelines. However, as with anything that’s misconfigured and doesn’t adhere to best practices, there is a risk associated. You might run playbooks against hosts you didn’t intend to. Ensure you have proper Identity and Access Management (IAM) policies in place, and use the principle of least privilege throughout. And use tags to filter hosts, as you see in this and the previous example. And finally, use the ansible toolset (ansible-inventory fwiw) to validate the hosts you target.

Using the Dynamic Inventory Plugin

The Dynamic Inventory Plugin nicely prints all the VMs in my configuration, broken down by tag. If the plugin returns fewer hosts than expected, your configuration might need tweaking as I explained in this post.

[opc@supersecureVM ansible]$ ansible-inventory -vi ougdemo-compartment.oci.yml --graph
@all:
  |--@IHsr_EU-FRANKFURT-1-AD-2:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@IHsr_EU-FRANKFURT-1-AD-3:
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |--@all_hosts:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@ougdemo-department:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@project#name=simple-app:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@region_eu-frankfurt-1:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@tag_role=appserver:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |--@tag_role=bastionhost:
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@ungrouped: 

As you can see there are plenty of tags assigned. Tagging is one of the most important tasks in the cloud, as it is the most convenient way to identify resources.

The logical next step is to make use of that detail! I used free form tags to identify my bastion host and the app servers. So how can I target them with my playbook?

Saying hello to both app servers

I wrote a little playbook as a quick example on how I can use the free form tags to address the app servers. I guess that’s the bare minimum example I can get away with.

[opc@supersecureVM ansible]$ cat hello-appservers.yml 
- hosts: tag_role=appserver
  name: say hello to app servers
  tasks:
  - name: say hello
    debug:
      msg: Hello from {{ ansible_hostname }}
[opc@supersecureVM ansible]$  

You call it just as you would call any other Ansible playbook, substituting the static inventory with the Dynamic Inventory. As you can see the app servers are referenced by their role, indicated by the tags assigned.

[opc@supersecureVM ansible]$ ansible-playbook -i ougdemo-compartment.oci.yml hello-appservers.yml 

[... output showing the inventory being built skipped ...]

PLAY [say hello to app servers] *************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************
ok: [appserver1.app.simpleapp.oraclevcn.com]
ok: [appserver2.app.simpleapp.oraclevcn.com]

TASK [say hello] ****************************************************************************************************************
ok: [appserver1.app.simpleapp.oraclevcn.com] => {}

MSG:

Hello from appserver1
ok: [appserver2.app.simpleapp.oraclevcn.com] => {}

MSG:

Hello from appserver2

PLAY RECAP **********************************************************************************************************************
appserver1.app.simpleapp.oraclevcn.com : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
appserver2.app.simpleapp.oraclevcn.com : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

[opc@supersecureVM ansible]$  

Happy automating!

Ansible Dynamic Inventory Plugin for OCI: where are all my hosts?

I wrote about the configuration of the Ansible Dynamic Inventory Plugin for Oracle Cloud Infrastructure (OCI) in a previous post. As it seems to happen all the time, the length of the post escalated quickly. When I finished the previous post there was still a lot to say! For example, I wanted to share the answer to the question where all my hosts were hiding :)

Caveat

As I said in the previous post, the Dynamic Inventory Plugin is a great time saver, especially when used as part of build pipelines. However, as with anything that’s misconfigured and doesn’t adhere to best practices, there is a risk associated. You might run playbooks against hosts you didn’t intend to. Ensure you have proper Identity and Access Management (IAM) policies in place, and use the principle of least privilege throughout. And use tags to filter hosts, as you see in this and the previous example. And finally, use the ansible toolset to validate the hosts you target.

I can’t see all my cloud VMs!

After I had resolved the initial problems with the Dynamic Inventory Plugin I was a bit surprised to see it report fewer hosts than expected. At the time I used the following configuration (YAML) file, pieced together from the various examples:

vagrant@ocidev:~/ansible$ cat ougdemo-compartment.oci.yml 
plugin: oracle.oci.oci

regions:
- eu-frankfurt-1

filters:
- defined_tags: { "project": { "name": "simple-app" } }

compartments:
- compartment_ocid: "ocid1.compartment...."
  fetch_hosts_from_subcompartments: true 

This should show me all the hosts of my simple-app project in the compartment I indicated. However the output did not match the number of VMs I had deployed:

vagrant@ocidev:~/ansible$ ansible-inventory -i ougdemo-compartment.oci.yml --graph

[...]

@all:
  |--@IHsr_EU-FRANKFURT-1-AD-2:
  |  |--130.61.233.113
  |--@Oracle-Tags#CreatedBy=ougdemouser:
  |  |--130.61.233.113
  |--@Oracle-Tags#CreatedOn=2020-11-11T18_01_04.484Z:
  |  |--130.61.233.113
  |--@all_hosts:
  |  |--130.61.233.113
  |--@ougdemo-department:
  |  |--130.61.233.113
  |--@project#name=simple-app:
  |  |--130.61.233.113
  |--@region_eu-frankfurt-1:
  |  |--130.61.233.113
  |--@tag_role=bastionhost:
  |  |--130.61.233.113
  |--@ungrouped:
vagrant@ocidev:~/ansible$ 

Although there is a lot of output, only a single IP address (VM) is actually discovered, despite the fact that a few more hosts exist, as shown by the OCI CLI:

$ oci compute instance list \
-c ${COMPARTMENT} \
--query "data[].{name:\"display-name\",state:\"lifecycle-state\",tags:\"defined-tags\"}" \
--output table
+------------+---------+---------------------------------------------------------------------------------------------------------------------------+
| name       | state   | tags                                                                                                                      |
+------------+---------+---------------------------------------------------------------------------------------------------------------------------+
| appserver1 | RUNNING | {'project': {'name': 'simple-app'}, 'Oracle-Tags': {'CreatedBy': 'ougdemouser', 'CreatedOn': '2020-11-11T18:01:03.639Z'}} |
| bastion1   | RUNNING | {'project': {'name': 'simple-app'}, 'Oracle-Tags': {'CreatedBy': 'ougdemouser', 'CreatedOn': '2020-11-11T18:01:04.484Z'}} |
| appserver2 | RUNNING | {'project': {'name': 'simple-app'}, 'Oracle-Tags': {'CreatedBy': 'ougdemouser', 'CreatedOn': '2020-11-11T18:01:03.916Z'}} |
+------------+---------+---------------------------------------------------------------------------------------------------------------------------+ 

So where are the missing hosts? I knew they were part of my private subnet, so after a little digging in the docs I found this:

> ORACLE.OCI.OCI    (/home/vagrant/.ansible/collections/ansible_collections/oracle/oci/plugins/inventory/oci.py)

        Get inventory hosts from oci. Uses a .oci.yaml (or .oci.yml)
        YAML configuration file.

OPTIONS (= is mandatory):

[ more documentation ]

- hostname_format
        Host naming format to use. Use 'fqdn' to list hosts using the instance's
        Fully Qualified Domain Name (FQDN). These FQDNs are resolvable within the
        VCN using the VCN resolver specified through the subnet's DHCP options.
        Please see https://docs.us-
        phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm for more details.
        Use 'public_ip' to list hosts using public IP address. Use 'private_ip' to
        list hosts using private IP address. By default, hosts are listed using
        public IP address.
        [Default: (null)]
        set_via:
          env:
          - name: OCI_HOSTNAME_FORMAT

[ more documentation ]

That explains it – by default only public IPs are returned. Since my app servers are in a private subnet, they don’t have public IPs, and won’t show up.

Fetching all hosts

With this knowledge at hand the solution merely required changing the configuration file by adding a new line:

plugin: oracle.oci.oci

regions:
- eu-frankfurt-1

hostname_format: fqdn

filters:
- defined_tags: { "project": { "name": "simple-app" } }

compartments:
- compartment_ocid: "ocid1.compartment...."
  fetch_hosts_from_subcompartments: true 

I chose to have it display the fully qualified domain name (FQDN). Now all my hosts are fetched by the Dynamic Inventory Plugin:

vagrant@ocidev:~/ansible$ ansible-inventory -i ougdemo-compartment.oci.yml --graph

[...]

@all:
  |--@IHsr_EU-FRANKFURT-1-AD-2:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@IHsr_EU-FRANKFURT-1-AD-3:
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |--@Oracle-Tags#CreatedBy=ougdemouser:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@Oracle-Tags#CreatedOn=2020-11-11T18_01_03.639Z:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |--@Oracle-Tags#CreatedOn=2020-11-11T18_01_03.916Z:
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |--@Oracle-Tags#CreatedOn=2020-11-11T18_01_04.484Z:
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@all_hosts:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@ougdemo-department:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@project#name=simple-app:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@region_eu-frankfurt-1:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@tag_role=appserver:
  |  |--appserver1.app.simpleapp.oraclevcn.com
  |  |--appserver2.app.simpleapp.oraclevcn.com
  |--@tag_role=bastionhost:
  |  |--bastion1.bastion.simpleapp.oraclevcn.com
  |--@ungrouped: 

Happy scripting!