Category Archives: Linux

Learning Kubernetes: persistent storage with Minikube

As part of my talk at (the absolutely amazing) Riga Dev Days 2019 I deployed Oracle REST Data Services (ORDS) in Minikube as my application’s endpoint. I’ll blog about deploying ORDS 19 in Docker and running it on Kubernetes later, but before I can do so I want to briefly touch on persistent storage in Minikube because I consider it a prerequisite.

In a previous post I wrote about using Minikube as a runtime environment for learning Kubernetes. This post uses the same versions as before – Docker 18.09 and Kubernetes 1.14 are invoked by Minikube 1.1.0.

Quick & dirty intro to storage concepts for Kubernetes

Container storage is – at least to my understanding – ephemeral. So, in other words, if you delete the container, and create a new one, locally stored data is gone. In “standalone” Docker, you can use so-called volumes to store things more permanently. A similar concept exists in Kubernetes in form of a persistent volume (PV) and a corresponding persistent volume claim (PVC).

In a nutshell, and greatly simplified, a persistent volume is a piece of persistent storage pre-allocated on a Kubernetes cluster that can be “claimed”. Hence the names …

There is a lot more to say about persistent data in Kubernetes, and others have already done that, so you won’t find an explanation of storage concepts in Kubernetes here. Please head over to the documentation set for more details.

Be careful though, as the use of persistent volumes and persistent volume claims is quite an in-depth topic, especially outside the developer-scoped Minikube. I specifically won’t touch on the subject of persistent storage outside of Minikube, I’d rather save that for a later post.

Persistent Volumes for my configuration files

ORDS requires you to run an installation routine first, in the course of which it will create a directory containing its runtime configuration. The shell script I’m using to initialise my container checks for the existence of the configuration directory and skips the installation step if it finds one. It proceeds straight to starting the Tomcat container. This is primarily done to speed up the startup process. If I were not to use the PV/PVC combination, the pods in my deployment would have to run the installation each time they start up, something I wanted to avoid.

A less complex example please

I didn’t want to make things more complicated than necessary, so instead I’ll use a much simpler example before writing up my ORDS deployment. My example is based on the official Ubuntu image: I write a “heartbeat” file into the mounted volume and then check whether I can find it on the underlying infrastructure.

The Minikube documentation informs us that Minikube preserves data in /data and a few other locations. Anything you try to put elsewhere will be lost. With that piece of information at hand I proceeded with the storage creation.

Experienced Minikube users might point out at this stage that pre-creating a volume isn’t needed as Minikube supports dynamic volume creation. From what I can tell that’s correct, but not what I chose to do.

Creating a persistent volume

Based on the previously mentioned documentation I created a persistent volume and accompanying volume claim like this:

$ cat persistent-volumes.yml 

kind: PersistentVolume
apiVersion: v1
metadata:
  name: research-vol
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/research-vol"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: research-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: research-vol
  resources:
    requests:
      storage: 1Gi

This approach is very specific to Minikube because I’m using persistent volumes of type HostPath.

Translated to plain English this means that I’m creating a 2 GB volume pointing to /data/research-vol on my Minikube system. And I’m asking for 1 GB in my persistent volume claim. The access mode (ReadWriteOnce) seems to be related to mounting the volume (concurrently) on multi-node clusters. Or it’s a bug because I successfully wrote to a single PVC from multiple pods as you can see in a bit… In my opinion the documentation wasn’t particularly clear on the subject.
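For reference, these are the access modes Kubernetes defines. Note that they are enforced per node, not per pod, which would explain the behaviour just described; this is my reading of the documentation, sketched as a YAML fragment:

```yaml
# Access modes a PV can advertise (enforced at node level, not pod level):
accessModes:
  - ReadWriteOnce   # mounted read-write by a single node
  - ReadOnlyMany    # mounted read-only by many nodes
  - ReadWriteMany   # mounted read-write by many nodes
```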

Build the Docker image

Before I can deploy my example application to Minikube, I need to build a Docker image first. This is really boring, but I’ll show it here for the sake of completeness:

$ cat Dockerfile 
FROM ubuntu
# the script's filename did not survive in this listing; heartbeat.sh is a stand-in
COPY heartbeat.sh /usr/local/bin
RUN chmod +x /usr/local/bin/heartbeat.sh
ENTRYPOINT ["/usr/local/bin/heartbeat.sh"]

The shell script I’m running is shown here:

 $ cat 
 #!/usr/bin/env bash

 set -euxo pipefail

 if [[ ! -d $VOLUME ]]; then
   /bin/echo ERR: cannot find the volume $VOLUME, exiting
   exit 1
 fi

 while true; do
   /usr/bin/touch ${VOLUME}/$(hostname)-$(date +%Y%m%d-%H%M%S)
   /bin/sleep 10
 done
As you can see, it is just an infinite loop writing heartbeat files into the directory indicated by the VOLUME environment variable. The image is available to Minikube after building it. Refer to my previous post on how to build Docker images for Minikube, or have a look at the documentation. I built the image as “research:v2”.
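To illustrate what the loop produces, here is a quick local simulation of the filename pattern, run against a temporary scratch directory instead of the PV; nothing in it is Minikube-specific:

```shell
# Simulate three heartbeat iterations against a scratch directory;
# each file is named <hostname>-<YYYYMMDD-HHMMSS>, as in the container.
VOLUME=$(mktemp -d)
for i in 1 2 3; do
  touch "${VOLUME}/$(hostname)-$(date +%Y%m%d-%H%M%S)"
  sleep 1
done
ls "${VOLUME}" | wc -l
```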


With the foundation laid, I can move on to defining the deployment. I’ll do that again with the help of a YAML file:

 $ cat research-deployment.yml 
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   labels:
     app: research
   name: research
 spec:
   replicas: 2
   selector:
     matchLabels:
       app: research
   template:
     metadata:
       labels:
         app: research
     spec:
       containers:
       - image: research:v2
         env:
         - name: VOLUME
           value: "/var/log/heartbeat"
         name: research
         volumeMounts:
         - mountPath: "/var/log/heartbeat"
           name: research-vol
       volumes:
       - name: research-vol
         persistentVolumeClaim:
           claimName: research-pvc
       restartPolicy: Always

If you haven’t previously worked with Kubernetes this might look daunting, but it isn’t actually. I’m asking for the deployment of 2 copies (“replicas”) of my research:v2 image. I am passing an environment variable – VOLUME – to the container image. It contains the path to the persistent volume’s mount point. I’m also mounting a PV named research-vol as /var/log/heartbeat/ in each container. This volume in the container scope is based on the definition found in the volumes: section of the YAML file. It’s important to match the information in volumes and volumeMounts.
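The pairing that matters, pulled out of the deployment (the names are the ones used above):

```yaml
# Container scope: mount a named volume at a path.
volumeMounts:
- mountPath: "/var/log/heartbeat"
  name: research-vol          # must match a volumes entry below
# Pod scope: define the named volume and back it with the PVC.
volumes:
- name: research-vol
  persistentVolumeClaim:
    claimName: research-pvc   # must match the PVC's metadata.name
```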


After a quick kubectl apply -f research-deployment.yml I have 2 pods happily writing to the persistent volume.

 $ kubectl get pods
 NAME                       READY   STATUS    RESTARTS   AGE
 research-f6668c975-98684   1/1     Running   0          6m22s
 research-f6668c975-z6pll   1/1     Running   0          6m22s

The directory exists as specified on the Minikube system. If you used the VirtualBox driver, use minikube ssh to access the VM and change to the PV’s directory:

# pwd
# ls -l *research-f6668c975-z6pll* | wc -l
# ls -l *research-f6668c975-98684* | wc -l

As you can see, both pods are able to write to the directory. The volume’s contents were preserved when I deleted the deployment:

$ kubectl delete deploy/research
deployment.extensions "research" deleted
$ kubectl get pods
No resources found.

This demonstrates that both pods from my deployment are gone. What does it look like back on the Minikube VM?

# ls -l *research-f6668c975-z6pll* | wc -l
# ls -l *research-f6668c975-98684* | wc -l

You can see that files are still present. The persistent volume lives up to its name.

Next I wanted to see if (new) Pods pick up what’s in the PV. After a short modification to the shell script and a quick docker build followed by a modification of the deployment to use research:v3, the container prints the number of files:

#!/usr/bin/env bash

set -euxo pipefail

if [[ ! -d $VOLUME ]]; then
  /bin/echo ERR: cannot find the volume $VOLUME, exiting
  exit 1
fi

/bin/echo INFO: found $(ls -l $VOLUME/* | wc -l) files in the volume

while true; do
  /usr/bin/touch ${VOLUME}/$(hostname)-$(date +%Y%m%d-%H%M%S)
  /bin/sleep 10
done

As proven by the logs:

 $ kubectl get pods
 NAME                        READY   STATUS        RESTARTS   AGE
 research-556cc9989c-bwzkg   1/1     Running       0          13s
 research-556cc9989c-chg76   1/1     Running       0          13s

 $ kubectl logs research-556cc9989c-bwzkg
 [[ ! -d /var/log/heartbeat ]]
 ++ wc -l
 ++ ls -l /var/log/heartbeat/research-7d4c7c8dc8-5xrtr-20190606-105014 
 ( a lot of output not shown )
 /bin/echo INFO: found 166 files in the volume
 INFO: found 166 files in the volume
 ++ hostname
 ++ date +%Y%m%d-%H%M%S
 /usr/bin/touch /var/log/heartbeat/research-556cc9989c-bwzkg-20190606-105331
 /bin/sleep 10
 ++ hostname
 ++ date +%Y%m%d-%H%M%S
 /usr/bin/touch /var/log/heartbeat/research-556cc9989c-bwzkg-20190606-105341
 /bin/sleep 10 

It appears as if I successfully created and used persistent volumes in Minikube.


Using the Secure External Password store with sqlcl

Sometimes it is necessary to invoke a SQL script in bash or otherwise in an unattended way. SQLcl has become my tool of choice because it’s really lightweight and can do a lot. If you haven’t worked with it yet, you really should give it a go.

So how does one go about invoking SQL scripts from the command line these days? There’s an age-old problem with unattended execution: how do you authenticate against the database? There are many ways to do so, some better than others. This post shows how to use the Secure External Password Store with SQLcl. As always, there is more than one way to do this, @FranckPachot recently wrote about a different approach on Medium which you might want to check out as well.

Please don’t store passwords in scripts

I have seen passwords embedded in shell scripts far too often, and that’s something I really don’t like for many, many reasons. Thankfully Oracle offers an alternative to storing clear-text passwords in the form of the Secure External Password Store (SEPS). This post explains one of many ways to use a wallet with SQLcl to connect to a database. It assumes that a Secure External Password Store is set up with the necessary credentials. Components referenced in this post are:

  • sqlcl 19.1
  • Instant Client Basic 18.5
  • Oracle XE 18.4
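Setting up the wallet itself isn't shown here; a typical sequence looks roughly like this (paths as used in this post; mkstore ships with a full client installation rather than the Instant Client basic package, hence the guard):

```shell
# Hypothetical one-time wallet setup for the SEPS used in this post.
# mkstore prompts for a wallet password; -createCredential stores the
# database account behind the tnsnames alias xepdb1.
if command -v mkstore >/dev/null 2>&1; then
  mkstore -wrl /home/oracle/seps -create
  mkstore -wrl /home/oracle/seps -createCredential xepdb1 martin
else
  echo "mkstore not available on this machine"
fi
```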

The SEPS wallet is found in /home/oracle/seps with its corresponding tnsnames.ora and sqlnet.ora in /home/oracle/seps/tns. I have set TNS_ADMIN to /home/oracle/seps/tns and ensured that sqlnet.ora points to the correct wallet location.
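For this layout, the sqlnet.ora in /home/oracle/seps/tns would contain something along these lines (a minimal sketch, not my exact file):

```
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /home/oracle/seps)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE
```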

First attempt

The first attempt at using sqlcl with the wallet resulted in the following error:

$ /home/oracle/sqlcl/bin/sql -L /@xepdb1

SQLcl: Release 19.1 Production on Fri May 17 05:56:56 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

  USER          = 
  Error Message = ORA-01017: invalid username/password; logon denied

I provided the -L flag to prevent sqlcl from asking me for different credentials after a failed login attempt. Using the -verbose flag in the next attempt I confirmed that sqlcl was indeed using my tnsnames.ora file in the directory specified by $TNS_ADMIN.


So I started investigating … The first place to go to is the documentation, however I didn’t find anything relevant in the command line reference or FAQ shown on the product’s landing page. I then cast my net wider and found a few things on My Oracle Support (they didn’t apply to my version of sqlcl) and the Oracle forums.

I tried various things to get the thin client to cooperate with using the wallet but didn’t pursue that route further after learning about the option to use the OCI JDBC driver. After experimenting a little more I got on the right track.

Second attempt

The consensus in the Oracle forum posts I found seems to be to use the OCI flag when invoking the tool. So I tried that next:

$ /home/oracle/sqlcl/bin/sql -L -oci /@xepdb1

SQLcl: Release 19.1 Production on Fri May 17 06:09:29 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

  USER          = 
  URL           = jdbc:oracle:oci8:@xepdb1
  Error Message = no ocijdbc18 in java.library.path
  USER          = 
  Error Message = ORA-01017: invalid username/password; logon denied

No success yet, but there’s an important clue in the output: the URL indicates that an OCI connection was indeed tried, except that a shared library was missing from Java’s library path. I guessed correctly that ocijdbc18 is part of the Instant Client 18 basic installation. After installing the RPM for the latest 18c Instant Client I confirmed the library was part of the package.

From what I understand, Java doesn’t pick up the configuration created by ldconfig; you either have to set java.library.path manually (as in java -Djava.library.path=…) or set LD_LIBRARY_PATH. The latter is easier, and it gave me the desired result:
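Before pointing LD_LIBRARY_PATH anywhere, it's worth checking that the directory actually contains the OCI JDBC library (the path is the one installed by the 18.5 Instant Client RPM; on a machine without it, the check simply reports that):

```shell
# Look for the OCI JDBC shared library in the instant client lib dir.
LIBDIR=/usr/lib/oracle/18.5/client64/lib
ls "$LIBDIR"/libocijdbc* 2>/dev/null || echo "no ocijdbc library under $LIBDIR"
```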

$ export LD_LIBRARY_PATH=/usr/lib/oracle/18.5/client64/lib:$LD_LIBRARY_PATH
$ echo "select user from dual" | /home/oracle/sqlcl/bin/sql -L -oci /@xepdb1

SQLcl: Release 19.1 Production on Fri May 17 06:15:29 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Fri May 17 2019 06:15:32 -04:00

Connected to:
Oracle Database 18c Express Edition Release - Production


Disconnected from Oracle Database 18c Express Edition Release - Production

Result! I can use sqlcl to connect to a database using a wallet.

Oracle Instant Client RPM installation: where to find things

Last week I blogged about the option to install Oracle’s Instant Client via the public YUM repository. If you go ahead and try this, there is one thing you will undoubtedly notice: file locations are rather unusual if you have worked with Oracle for a while. This is true at least for the 19c Instant Client, it might be similar for older releases although I didn’t check. I’d like to thank @oraclebase for prompting me to write this short article!

Installing the 19.3 “Basic” Instant Client package

So to start this post I am going to install the 19.3 “Basic” package on my Oracle Linux 7.6 lab environment:

$ sudo yum install oracle-instantclient19.3-basic
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-instantclient19.3-basic.x86_64 0: will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package                     Arch   Version      Repository                Size
                             x86_64 ol7_oracle_instantclient  51 M

Transaction Summary
Install  1 Package

Total download size: 51 M
Installed size: 225 M
Is this ok [y/d/N]: y
Downloading packages:
oracle-instantclient19.3-basic-     |  51 MB   00:09     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-instantclient19.3-basic-           1/1 
  Verifying  : oracle-instantclient19.3-basic-           1/1 

  oracle-instantclient19.3-basic.x86_64 0:                          


With the software installed, let’s have a look at where everything is:

$ rpm -ql oracle-instantclient19.3-basic

As you can see in the RPM output, files are found under /usr/lib/oracle. That’s what I meant when I said the file location is unusual. For my part, I have followed the directory structure suggested by previous releases’ Oracle Universal Installer (OUI) defaults and installed it under /u01/app/oracle/product/version/client_1. The actual location, however, doesn’t really matter.

No more manually calling ldconfig with 19.3

Note that before 19.3 you had to manually run ldconfig after installing the Instant Client RPM file. In 19c this is handled via a post-install script. The RPM adds its configuration in /etc/

$ rpm -q --scripts oracle-instantclient19.3-basic
postinstall scriptlet (using /bin/sh):
postuninstall scriptlet (using /bin/sh):

This is nicer – at least in my opinion – than setting LD_LIBRARY_PATH in a shell. This seemed to work just fine: I installed the SQLPlus package (oracle-instantclient19.3-sqlplus) and I could start it without any problems.
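A quick way to confirm the post-install scriptlet did its job is to look for the client library in the dynamic loader cache (libclntsh is the main client shared library; on a machine without the Instant Client the fallback message is printed instead):

```shell
# Check the loader cache for the Oracle client library registered
# by the RPM's ldconfig call.
ldconfig -p 2>/dev/null | grep -i clntsh || echo "libclntsh not in the loader cache"
```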

Network Configuration

If you have a requirement to add a TNS naming file, you should be able to do so either by setting TNS_ADMIN or by placing the file in /usr/lib/oracle/19.3/client64/lib/network/admin. I have a tnsnames.ora file pointing to my XE database here:

$ cat /usr/lib/oracle/19.3/client64/lib/network/admin/tnsnames.ora 
xepdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oraclexe)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = xepdb1)
    )
  )

I can connect to my database without setting any specific environment variables:

$ env | egrep -i 'tns|ld_library' | wc -l
$ sqlplus martin@xepdb1

SQL*Plus: Release - Production on Mon May 13 09:38:12 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Enter password: 
Last Successful login time: Mon May 13 2019 09:22:11 -04:00

Connected to:
Oracle Database 18c Express Edition Release - Production

The SQLPlus package places a symbolic link to /usr/lib/oracle/19.3/client64/bin/sqlplus into /usr/bin so I don’t even have to adjust the PATH variable.


Being able to use the Instant Client out of the box is very useful for automated deployments, where all you have to do is add a repository followed by a call to yum. This should make a lot of people’s lives much easier.

Installing the Oracle Instant Client RPM via YUM on Oracle Linux 7

Many applications require Oracle’s instant client to enable connectivity with the database. In the past, getting hold of the instant client required you to agree to the license agreement before you could download the software. For a little while now, Oracle offers a YUM repository with the instant client RPMs. There are a couple of announcements to that effect, for example on Oracle’s Linux blog. It’s a great step ahead for usability, and I really appreciate the move. After a small exchange on Twitter I had to give this a go and see how this works. The following article is a summary of what I learned.

Instead of pulling the RPMs straight from the repository location using curl or wget, you can add the repository to your YUM configuration just as easily. How you do this depends on your installed Oracle Linux image. Oracle changed the way the contents of /etc/yum.repos.d is handled earlier this year. If your system is already using the new modular approach – many cloud images for example use the new modular approach already – the steps are slightly different from the monolithic configuration file.

Monolithic public-yum-ol7.repo

My virtualised lab environment (for which I actually do have a backup!) is based on Oracle Linux 7.6 and still uses /etc/yum.repos.d/public-yum-ol7.repo. The files belonging to the oraclelinux-release-el7 RPM are present but disabled. I am conscious of the fact that this configuration file is deprecated.

To enable the instant client repository you can either edit public-yum-ol7.repo with your text editor of choice, or use yum-config-manager. Ansible users can use the ini_file module to do the same thing in code. I used yum-config-manager:

$ sudo yum-config-manager --enable ol7_oracle_instantclient
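For the curious, the stanza that yum-config-manager flips to enabled looks roughly like this (my paraphrase of the repo file; the baseurl follows Oracle's public yum server layout, so verify it against your own file):

```ini
[ol7_oracle_instantclient]
name=Oracle Instant Client for Oracle Linux 7 ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/oracle/instantclient/$basearch
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1
```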

With the repository enabled you can search for the oracle-instantclient RPMs:

$ sudo yum search oracle-instant
Loaded plugins: ulninfo
ol7_UEKR5                                                | 1.2 kB     00:00     
ol7_latest                                               | 1.4 kB     00:00     
ol7_oracle_instantclient                                 | 1.2 kB     00:00     
(1/4): ol7_oracle_instantclient/x86_64/primary             | 4.4 kB   00:00     
(2/4): ol7_oracle_instantclient/x86_64/updateinfo          |  145 B   00:00     
(3/4): ol7_latest/x86_64/updateinfo                        | 907 kB   00:00     
(4/4): ol7_latest/x86_64/primary                           |  13 MB   00:02     
ol7_latest                                                          12804/12804
ol7_oracle_instantclient                                                  21/21
========================= N/S matched: oracle-instant ==========================
oracle-instantclient18.3-basic.x86_64 : Oracle Instant Client Basic package
oracle-instantclient18.3-basiclite.x86_64 : Oracle Instant Client Light package
oracle-instantclient18.3-devel.x86_64 : Development headers for Instant Client.
oracle-instantclient18.3-jdbc.x86_64 : Supplemental JDBC features for the Oracle
                                     : Instant Client
oracle-instantclient18.3-odbc.x86_64 : Oracle Instant Client ODBC
oracle-instantclient18.3-sqlplus.x86_64 : SQL*Plus for Instant Client.
oracle-instantclient18.3-tools.x86_64 : Tools for Oracle Instant Client
oracle-instantclient18.5-basic.x86_64 : Oracle Instant Client Basic package
oracle-instantclient18.5-basiclite.x86_64 : Oracle Instant Client Light package
oracle-instantclient18.5-devel.x86_64 : Development headers for Instant Client.
oracle-instantclient18.5-jdbc.x86_64 : Supplemental JDBC features for the Oracle
                                     : Instant Client
oracle-instantclient18.5-odbc.x86_64 : Oracle Instant Client ODBC
oracle-instantclient18.5-sqlplus.x86_64 : SQL*Plus for Instant Client.
oracle-instantclient18.5-tools.x86_64 : Tools for Oracle Instant Client
oracle-instantclient19.3-basic.x86_64 : Oracle Instant Client Basic package
oracle-instantclient19.3-basiclite.x86_64 : Oracle Instant Client Light package
oracle-instantclient19.3-devel.x86_64 : Development header files for Oracle
                                      : Instant Client.
oracle-instantclient19.3-jdbc.x86_64 : Supplemental JDBC features for the Oracle
                                     : Instant Client
oracle-instantclient19.3-odbc.x86_64 : Oracle Instant Client ODBC
oracle-instantclient19.3-sqlplus.x86_64 : Oracle Instant Client SQL*Plus package
oracle-instantclient19.3-tools.x86_64 : Tools for Oracle Instant Client

  Name and summary matches only, use "search all" for everything.

As you can see, there are instant clients for 18.3, 18.5 and 19.3 at the time of writing.

Modular YUM configuration

If your system is already configured to use the modular YUM configuration, you need to know which RPM contains the configuration for the instant client repository. The YUM getting started guide comes to the rescue again. If you consult the table in section Installing Software from Oracle Linux Yum Server, you’ll notice that you have to install oracle-release-el7.

$ sudo yum install oracle-release-el7
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package oracle-release-el7.x86_64 0:1.0-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

 Package                  Arch         Version           Repository        Size
 oracle-release-el7       x86_64       1.0-2.el7         ol7_latest        14 k

Transaction Summary
Install  1 Package

Total download size: 14 k
Installed size: 18 k
Is this ok [y/d/N]: y
Downloading packages:
oracle-release-el7-1.0-2.el7.x86_64.rpm                    |  14 kB   00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : oracle-release-el7-1.0-2.el7.x86_64                          1/1 
  Verifying  : oracle-release-el7-1.0-2.el7.x86_64                          1/1 

  oracle-release-el7.x86_64 0:1.0-2.el7                                         


As part of the YUM installation, the Instant Client repository configuration is laid down and enabled. From this point onward, everything is exactly the same as shown in the section detailing the monolithic repository configuration file.

Learning about Kubernetes: JDBC database connectivity to an Oracle database

In the past few months I have spent some time trying to better understand Kubernetes and how application developers can make use of it in the context of the Oracle database. In this post I’m sharing what I learned along that way. Please be careful: this is very much a moving target, and I wouldn’t call myself an expert in the field. If you find anything in this post that could be done differently/better, please let me know!

By the way, I am going to put something similar together where Oracle Restful Data Services (ORDS) will provide a different, more popular yet potentially more difficult-to-get-right connection method.

My Motivation for this post

The question I wanted to answer to myself was: can I deploy an application into Kubernetes that uses JDBC to connect to Oracle? The key design decision I made was to have a database outside the Kubernetes cluster. From what I read on blogs and social media it appears to be the consensus at the time of writing to keep state out of Kubernetes. I am curious to see if this is going to change in the future…

My environment

As this whole subject of container-based deployment and orchestration is developing at an incredible pace, here’s the specification of my lab environment used for this little experiment so I remember which components I had in use in case something breaks later…

  • Ubuntu 18.04 LTS serves as my host operating environment
  • Local installation of kubectl v1.14.1
  • I am using minikube version: v1.0.1 and it pulled the following default components:
    • Container runtime: docker 18.06.3-ce
    • kubernetes 1.14.1 (kubeadm v1.14.1, kubelet v1.14.1)
  • The database service is provided by Oracle XE (oracle-database-xe-18c-1.0-1.x86_64)
    • Oracle XE 18.4.0 to provide data persistence in a VM (“”)
    • The Pluggable Database (PDB) name is XEPDB1
    • I am using an unprivileged user to connect

I would like to take the opportunity again to point out that the information you see in this post is my transcript of how I built this proof-of-concept, can-I-even-do-it application. It’s by no means meant to be anything other than that. There are a lot more bells and whistles to be attached to this in order to make it remotely ready for more serious use. But it was super fun putting this all together which is why I’m sharing this anyway.

Preparing Minikube

Since I’m only getting started with Kubernetes I didn’t want to set up a multi-node Kubernetes cluster in the cloud, and instead went with Minikube just to see if my ideas pan out ok or not. As I understand it, Minikube is a development/playground Kubernetes environment and it’s explicitly stated not to be suitable for production. It does seem to give me all that I need to work out my ideas. Since I’ve been using Virtualbox for a long time I went with the Minikube Virtualbox provider and all the defaults. It isn’t too hard to get this to work, I followed the documentation (see earlier link) on how to install kubectl (the main Kubernetes command line tool) and Minikube.

Once kubectl and Minikube were installed, I started the playground environment. On a modern terminal this prints some very fancy characters in the output as you can see:

$ minikube start
😄  minikube v1.0.1 on linux (amd64)
💿  Downloading Minikube ISO ...
 142.88 MB / 142.88 MB [============================================] 100.00% 0s
🤹  Downloading Kubernetes v1.14.1 images in the background ...
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶  "minikube" IP address is
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.06.3-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
💾  Downloading kubeadm v1.14.1
💾  Downloading kubelet v1.14.1
🚜  Pulling images required by Kubernetes v1.14.1 ...
🚀  Launching Kubernetes v1.14.1 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑  Configuring cluster permissions ...
🤔  Verifying component health .....
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!

This gives me a fully working Kubernetes lab environment. Next I need an application! My plan is to write the world’s most basic JDBC application connecting to the database and return success or failure. The application will be deployed into Kubernetes, the database will remain outside the cluster as you just read.

Let’s start with the database

I needed a database to connect to, obviously. This was super easy: create a VM, install the Oracle XE RPM for version 18c, and create a database. I started the database and listener to enable external connectivity. The external IP address to which the listener is bound is unusual for VirtualBox, whose internal network uses a different default; Minikube started on another subnet, though, and I had to make an adjustment to allow both VMs to communicate without having to mess with advanced routing rules.

Within XEPDB1 (which appears to be the default PDB in XE) I created a user named martin with a super secret password, granted him the connect role and left it at that. This account will later be used by my application.

Developing a super simple Java app to test connectivity

Modern applications are most often based on some sort of web technology, so I decided to go with that, too. I decided to write a very simple Java Servlet in which I connect to the database and print some status messages along the way. It’s a bit old-fashioned but makes it easy to get the job done.

I ended up using Tomcat 8.5 as the runtime environment, primarily because I have a working demo based on it and I wanted to save a little bit of time. I am implementing a Universal Connection Pool (UCP) as a JNDI resource in Tomcat for my application. The Servlet implements doGet(), and that’s where most of the action is going to take place. For anything but this playground environment I would write factory methods to handle all interactions with UCP and a bunch of other convenience methods. Since this is going to be a simple demo I decided to keep it short without confusing readers with 42 different Java classes. The important part of the Servlet code is shown here:

	protected void doGet(HttpServletRequest request, HttpServletResponse response)
          throws ServletException, IOException {

		PrintWriter out = response.getWriter();
		out.println("<html><head><title>JDBC connection test</title></head>");
		out.println("<h1>Progress report</h1>");
		out.println("<li>Trying to look up the JNDI data source ... ");
		PoolDataSource pds = null;
		Connection con = null;
		DataSource ds = null;
		PreparedStatement pstmt = null;
		// JNDI resource look-up
		Context envContext = null;
		Context ctx;
		try {
			ctx = new InitialContext();
			envContext = (Context) ctx.lookup("java:/comp/env");

			ds = (javax.sql.DataSource) envContext.lookup("jdbc/UCPTest");
		} catch (NamingException e) {
			out.println("error looking up jdbc/UCPTest: " + e.getMessage());
		}
		pds = ((PoolDataSource) ds);
		out.println("Data Source successfully acquired</li>");
		out.println("<li>Trying to get a connection to the database ... ");
		try {
			con = pds.getConnection();
			// double-checking the connection although that shouldn't be necessary
			// due to the "validateConnectionOnBorrow" flag in context.xml
			if (con == null || !((ValidConnection) con).isValid()) {
				out.println("Error: connection is either null or otherwise invalid</li>");
			}
			out.println(" successfully grabbed a connection from the pool</li>");
			out.println("<li>Trying to run a query against the database ... ");
			pstmt = con.prepareStatement("select sys_context('userenv','db_name') from dual");
			ResultSet rs = pstmt.executeQuery();
			while (rs.next()) {
				out.println(" query against " + rs.getString(1) + " completed successfully</li>");
			}
			// clean up: return the connection to the pool
			con.close();
			con = null;
			out.println("<li>No errors encountered, we are done</li>");
		} catch (SQLException e) {
			out.println("<h2>A generic error was encountered.</h2>");
			out.println("<p>Error message:</p> <pre> " + e.getMessage() + "</pre>");
		}
	}


The context.xml file describing my JNDI data source goes into META-INF/context.xml in the WAR file.

<?xml version="1.0" encoding="UTF-8"?>
<Context>
	<!-- further Resource attributes (url, factory, user, …) were lost from this listing -->
	<Resource name="jdbc/UCPTest" 
		description="UCP JNDI Connection Pool"
		fastConnectionFailoverEnabled="false" />
</Context>

Note the URL: it references oracle-dev as part of the JDBC connection string. That’s not the DNS name of my VM, and I’ll get back to that later.

As always, I debugged/compiled/tested the code until I was confident it worked. Once it did, I created a WAR file named servletTest.war for use with Tomcat.


In a microservice-driven world it is very important NOT to hard-code IP addresses or machine names. In the age of on-premises deployments it was very common to hard-code machine names and/or IP addresses into applications. Needless to say that … wasn’t ideal.

In Kubernetes you expose your deployments as services to the outside world to enable connectivity. With the database outside of Kubernetes in its own VM – which appears to be the preferred way of doing this at the time of writing – I need to expose it as a headless service. Using an external name appears to be the right choice. The YAML file defining the service is shown here:

$ cat service.yml 
kind: Service
apiVersion: v1
metadata:
  name: oracle-dev
spec:
  type: ExternalName
  externalName: # the DNS name of the database VM
If you scroll up you will notice that this service’s name matches the host name in the JDBC URL in the context.xml file shown earlier. The name is totally arbitrary; I went with oracle-dev to indicate this is a playground environment. The identifier listed as the external name must of course resolve to the appropriate machine:

$ ping -c 1 
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=64 time=0.290 ms

--- ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms

A quick kubectl apply later, the service is defined in Minikube:

$ kubectl get service/oracle-dev -o wide
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP            PORT(S)   AGE   SELECTOR
oracle-dev   ExternalName   <none>   <none>    14m   <none>

That should be sorted! Whenever applications in the Kubernetes cluster refer to oracle-dev, they are pointed to my database VM.

Docker all the things – dockerising the application

With the WAR file built and a service available for the JDBC connection pool, I could start thinking about deploying the application.

Create the Docker image

The first step is to package the application in Docker. To do so I used the official Tomcat Docker image. The Dockerfile is trivial: specify tomcat:8.5-slim as the base image and copy the WAR file into /usr/local/tomcat/webapps. That’s it! The image’s CMD directive invokes “catalina.sh run” to start the servlet engine. This is the output created by “docker build”:
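
Spelled out, the Dockerfile is just this (a minimal sketch matching the three build steps below):

```dockerfile
# Base image: slim Tomcat 8.5 from Docker Hub
FROM tomcat:8.5-slim

# Deploy the application; Tomcat auto-deploys WAR files placed in webapps/
COPY servletTest.war /usr/local/tomcat/webapps/

# Run Tomcat in the foreground
CMD ["catalina.sh", "run"]
```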

$ docker build -t servlettest:v1 .
Sending build context to Docker daemon  10.77MB
Step 1/3 : FROM tomcat:8.5-slim
8.5-slim: Pulling from library/tomcat
27833a3ba0a5: Pull complete 
16d944e3d00d: Pull complete 
9019de9fce5f: Pull complete 
9b053055f644: Pull complete 
110ab0e6e34d: Pull complete 
54976ba77289: Pull complete 
0afe340b9ec5: Pull complete 
6ddc5164be39: Pull complete 
c622a1870b10: Pull complete 
Digest: sha256:7c1ed9c730a6b537334fbe1081500474ca54e340174661b8e5c0251057bc4895
Status: Downloaded newer image for tomcat:8.5-slim
 ---> 78e1f34f235b
Step 2/3 : COPY servletTest.war /usr/local/tomcat/webapps/
 ---> c5b9afe06265
Step 3/3 : CMD ["catalina.sh", "run"]
 ---> Running in 2cd82f7358d3
Removing intermediate container 2cd82f7358d3
 ---> 342a8a2a31d3
Successfully built 342a8a2a31d3
Successfully tagged servlettest:v1

Once the Docker image is built (remember to run eval $(minikube docker-env) before docker build to make the image available to Minikube) you can reference it in Kubernetes.

Deploying the application

The next step is to run the application. I have previously used “kubectl run”, but since that’s deprecated I switched to creating deployments as YAML files. I dumped the deployment generated by “kubectl run” into a YAML file and used the documentation to adjust it to my needs. The result is shown here:

$ cat deployment.yml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: servlettest
  name: servlettest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servlettest
  template:
    metadata:
      labels:
        app: servlettest
    spec:
      containers:
      - image: servlettest:v1
        imagePullPolicy: IfNotPresent
        name: servlettest
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always

The application is deployed into Kubernetes using “kubectl apply -f deployment.yml”. A few moments later I can see it appearing:

$ kubectl get deploy/servlettest -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS    IMAGES           SELECTOR
servlettest   1/1     1            1           7m23s   servlettest   servlettest:v1   app=servlettest

I’m delighted to see 1/1 pods ready to serve the application! It looks like everything went to plan. I can see the pod as well:

$ kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
servlettest-6c6598b487-wssp6   1/1     Running   0          8m12s   minikube   <none>           <none>

Accessing the application

I now have an application running in Kubernetes! Umm, but what next? I haven’t been able to get access to the application yet. To do so, I need to expose another service. The Minikube quick start suggests exposing the application service via a NodePort, and that seems to be the quickest way. In a “proper Kubernetes environment” in the cloud you’d most likely go for a service of type LoadBalancer to expose applications to the public.

So far I have only shown you YAML files to configure Kubernetes resources, but many things can be done via kubectl directly. For example, it’s possible to expose the deployment I just created as a service on the command line:

$ kubectl expose deployment servlettest --type NodePort
service/servlettest exposed

$ kubectl get service/servlettest -o wide
NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
servlettest   NodePort   <none>        8080:31030/TCP   71s   app=servlettest
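
For reference, the kubectl expose command corresponds roughly to this service manifest (a sketch; in practice the node port is assigned by Kubernetes unless you request a specific one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: servlettest
spec:
  type: NodePort
  selector:
    app: servlettest
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
```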

One of the neat things with Minikube is that you don’t need to worry too much about how to actually access that service: the minikube service command provides the URL for the service, and it even opens it in the default browser if you so want. I’ll see if I can get the ingress add-on to work as well, but that should probably go into its own post…

In my case I’m only interested in the URL because I want a textual representation of the output:

$ minikube service servlettest --url

$ lynx --dump
                                Progress report

     * Trying to look up the JNDI data source ... Data Source successfully
     * Trying to get a connection to the database ... successfully grabbed
       a connection from the pool
     * Trying run a query against the database ... query against XE
       completed successfully
     * No errors encountered, we are done

The URL I am passing to lynx is made up of the Minikube IP and the node port (31030 in this example), as well as the servlet path based on the WAR file’s name.

So that’s it! I have a working application deployed into Minikube, using a service of type “ExternalName” to query my database. Although these steps have been created using Minikube, they shouldn’t be different on a “real” Kubernetes environment.

What about the database?

The JNDI resource definition for my connection pool requested 10 sessions, and they have been dutifully created:

SQL> select username, machine from v$session where username = 'MARTIN';

USERNAME                       MACHINE
------------------------------ ----------------------------------------
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6

10 rows selected.
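
The pool size itself is controlled by standard UCP attributes on the Resource definition in context.xml; the excerpt shown earlier omits them. A sketch with assumed values:

```xml
<!-- additional attributes on the jdbc/UCPTest Resource (values are examples) -->
<Resource name="jdbc/UCPTest"
          initialPoolSize="10"
          minPoolSize="10"
          maxPoolSize="20"
          validateConnectionOnBorrow="true" />
```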

The “machine” matches the pod name as shown in kubectl output:

$ kubectl get pods -l app=servlettest -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
servlettest-6c6598b487-wssp6   1/1     Running   0          30m   minikube   <none>           <none>

Well I guess you can actually deploy the world’s most basic JDBC application into Kubernetes. Whether this works for you is an entirely different matter. As many much cleverer people than me have been pointing out for some time: just because a technology is “hot” doesn’t mean it’s the best tool for the job. It’s always worthwhile weighing up pros and cons for your solution carefully, and just because something can be done (technically speaking) doesn’t mean it should be done.

Ansible tips’n’tricks: provision multiple machines in parallel with Vagrant and Ansible

Vagrant is a great tool that I’m regularly using for building playground environments on my laptop. I recently came across a slight inconvenience with Vagrant’s VirtualBox provider: occasionally I would like to spin up a Data Guard environment and provision both VMs in parallel to save time. Sadly, according to the documentation, you can’t bring up multiple machines in parallel using the VirtualBox provider. This was true as of April 11, 2019 and might change in the future, so keep an eye on the reference.

I very much prefer to save time by doing things in parallel, and so I started digging around how I could achieve this goal.

The official documentation mentions something that looks like a for loop to wait for all machines to be up. This isn’t really an option for me: I wanted more control over machine names and IP addresses. So I came up with this approach; it may not be the best, but it falls into the “good enough for me” category.
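
For reference, the documented pattern looks roughly like this (a sketch from memory, not verbatim): machines are defined in a loop, and the Ansible provisioner is only attached to the last one, using ansible.limit = "all" so that a single playbook run covers every machine:

```ruby
# Sketch of the pattern from the Vagrant documentation: define N identical
# machines in a loop and provision them all from the last machine's block.
N = 2
(1..N).each do |machine_id|
  config.vm.define "machine#{machine_id}" do |machine|
    machine.vm.hostname = "machine#{machine_id}"
    # only attach the provisioner once all machines have been brought up
    if machine_id == N
      machine.vm.provision :ansible do |ansible|
        ansible.limit = "all"
        ansible.playbook = "playbook.yml"
      end
    end
  end
end
```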


The Vagrantfile is actually quite simple and might remind you of a previous article:

  1 Vagrant.configure("2") do |config|
  2   config.ssh.private_key_path = "/path/to/key"
  3 
  4   config.vm.define "server1" do |server1|
  5     server1.vm.box = "ansibletestbase"
  6     server1.vm.hostname = "server1"
  7     server1.vm.network "private_network", ip: ""
  8     server1.vm.synced_folder "/path/to/stuff", "/mnt",
  9       mount_options: ["uid=54321", "gid=54321"]
 10 
 11     config.vm.provider "virtualbox" do |vb|
 12       vb.memory = 2048
 13       vb.cpus = 2
 14     end
 15   end
 16 
 17   config.vm.define "server2" do |server2|
 18     server2.vm.box = "ansibletestbase"
 19     server2.vm.hostname = "server2"
 20     server2.vm.network "private_network", ip: ""
 21     server2.vm.synced_folder "/path/to/stuff", "/mnt",
 22       mount_options: ["uid=54321", "gid=54321"]
 23 
 24     config.vm.provider "virtualbox" do |vb|
 25       vb.memory = 2048
 26       vb.cpus = 2
 27     end
 28   end
 29 
 30   config.vm.provision "ansible" do |ansible|
 31     ansible.playbook = "hello.yml"
 32     ansible.groups = {
 33       "oracle_si" => ["server[1:2]"],
 34       "oracle_si:vars" => { 
 35         "install_rdbms" => "true",
 36         "patch_rdbms" => "true",
 37       }
 38     }
 39   end
 40 
 41 end

Ansibletestbase is my custom Oracle Linux 7 image that I keep updated for personal use. I define a couple of machines, server1 and server2, and from line 30 onwards let Ansible provision them.

A little bit of an inconvenience

Now here is the inconvenient bit: if I provided an elaborate playbook to provision Oracle in line 31 of the Vagrantfile, it would be run serially: first for server1, and only after it completed (or failed…) would server2 be created and provisioned. This is the reason for a rather plain playbook, hello.yml:

$ cat hello.yml 
- hosts: oracle_si
  tasks:
  - name: say hello
    debug: var=ansible_hostname

This literally takes no time to execute, so no harm is done running it serially once per VM. Not only is no harm done, quite the contrary: Vagrant discovered an Ansible provisioner in the Vagrantfile and created a suitable inventory file for me. I’ll gladly use it later.

How does this work out?

Enough talking, time to put this to the test and bring up both machines. As you will see in the captured output, the machines start one by one: each runs its provisioner before the next system is brought up.

$ vagrant up 
Bringing machine 'server1' up with 'virtualbox' provider...
Bringing machine 'server2' up with 'virtualbox' provider...
==> server1: Importing base box 'ansibletestbase'...
==> server1: Matching MAC address for NAT networking...


==> server1: Running provisioner: ansible...


    server1: Running ansible-playbook...

PLAY [oracle_si] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server1]

TASK [say hello] ***************************************************************
ok: [server1] => {
    "ansible_hostname": "server1"
}

PLAY RECAP *********************************************************************
server1                    : ok=2    changed=0    unreachable=0    failed=0   

==> server2: Importing base box 'ansibletestbase'...
==> server2: Matching MAC address for NAT networking...


==> server2: Running provisioner: ansible...

    server2: Running ansible-playbook...

PLAY [oracle_si] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server2]

TASK [say hello] ***************************************************************
ok: [server2] => {
    "ansible_hostname": "server2"
}

PLAY RECAP *********************************************************************
server2                    : ok=2    changed=0    unreachable=0    failed=0

As always, the Ansible provisioner created an inventory file I can use in ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory. The inventory looks exactly as described in the |ansible| block, and it has the all-important global variables as well.

$ cat ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant
server2 ansible_host= ansible_port=2201 ansible_user='vagrant' ansible_ssh_private_key_file='/path/to/key'
server1 ansible_host= ansible_port=2200 ansible_user='vagrant' ansible_ssh_private_key_file='/path/to/key'



After ensuring that both machines are up using the “ping” module, I can run the actual playbook. You might have to confirm the servers’ SSH keys the first time you run this:

$ ansible -i ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -m ping oracle_si
server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

All good to go! Let’s call the actual playbook to provision my machines.

$ ansible-playbook -i ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory provisioning/oracle.yml 

TASK [Gathering Facts] ***************************************************************
ok: [server1]
ok: [server2]

TASK [test for Oracle Linux 7] *******************************************************
skipping: [server1]
skipping: [server2]

And we’re off to the races. Happy automating!

Ansible tips’n’tricks: testing and debugging Ansible scripts using Vagrant

At last year’s UKOUG I presented about Vagrant and how to use this great piece of software to test and debug Ansible scripts easily. Back then in December I promised a write-up, but for various reasons only now got around to finishing it.

Vagrant’s Ansible Provisioner

Vagrant offers two different Ansible provisioners: “ansible” and “ansible_local”. The “ansible” provisioner depends on a local Ansible installation, on the host. If this isn’t feasible, you can use “ansible_local” instead. As the name implies it executes code on the VM instead of on the host. This post is about the “ansible” provisioner.

Most people use Vagrant with the default VirtualBox provider, and so do I in this post.

A closer look at the Vagrantfile

It all starts with a Vagrantfile. A quick “vagrant init” will get you one. The test image I use for deploying the Oracle database comes with all the necessary block devices and packages, saving me quite some time. Naturally I’ll start with that one.

$ cat -n Vagrantfile 
     1    # -*- mode: ruby -*-
     2    # vi: set ft=ruby :
     3    
     4    Vagrant.configure("2") do |config|
     5    
     6      config.ssh.private_key_path = "/path/to/ssh/key"
     7    
     8      config.vm.box = "ansibletestbase"
     9      config.vm.define "server1" do |server1|
    10        server1.vm.box = "ansibletestbase"
    11        server1.vm.hostname = "server1"
    12        server1.vm.network "private_network", ip: ""
    13    
    14        config.vm.provider "virtualbox" do |vb|
    15          vb.memory = 2048
    16          vb.cpus = 2 
    17        end 
    18      end 
    19    
    20      config.vm.provision "ansible" do |ansible|
    21        ansible.playbook = "blogpost.yml"
    22        ansible.groups = { 
    23          "oracle_si" => ["server1"],
    24          "oracle_si:vars" => { 
    25            "install_rdbms" => "true",
    26            "patch_rdbms" => "true",
    27            "create_db" => "true"
    28          }   
    29        }   
    30      end 
    31    
    32    end

Since I have decided to create my own custom image without relying on the “insecure key pair” I need to keep track of my SSH keys. This is done in line 6. Otherwise there wouldn’t be an option to connect to the system and Vagrant couldn’t bring the VM up.

Lines 8 to 18 define the VM – which image to derive it from, and how to configure it. The settings are pretty much self-explanatory so I won’t go into too much detail. Only this much:

  • I usually want a host-only network instead of just a NAT device, and I create one in line 12. The IP address maps to an address on vboxnet0 in my configuration. If you don’t have a host-only network and want one, you can create it in VirtualBox’s preferences.
  • In lines 14 to 17 I set some properties of my VM. I want it to come up with 2 GB of RAM and 2 CPUs.

Integrating Ansible into the Vagrantfile

The Ansible configuration is found on lines 20 to 30. As soon as the VM comes up I want Vagrant to run the Ansible provisioner and execute my playbook named “blogpost.yml”.

Most of my playbooks rely on global variables I define in the inventory file. Vagrant will create an inventory for me when it finds an Ansible provisioner in the Vagrantfile. The inventory it creates by default doesn’t fit my needs, but that is easy to change: recent Vagrant versions allow me to create the inventory just as I need it. You see this in lines 22 to 28. The resulting inventory file is created in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory and looks like this:

$ cat .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory 
# Generated by Vagrant

server1 ansible_host= ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/path/to/ssh/key'



That’s exactly what I’d use if I manually edited the inventory file, except that I don’t need to use “vagrant ssh-config” to figure out what the current SSH configuration is.

I define a group of hosts, and a few global variables for my playbook. This way all I need to do is change the Vagrantfile and control the execution of my playbook rather than maintaining information in 2 places (Vagrantfile and static inventory).

Ansible Playbook

The final piece of information is the actual Ansible playbook. Except for the host group I’m not going to use the inventory’s variables to keep the example simple.

$ cat -n blogpost.yml 
     1    ---
     2    - name: blogpost
     3      hosts: oracle_si
     4      vars:
     5      - oravg_pv: /dev/sdb
     6      become: yes
     7      tasks:
     8      - name: say hello
     9        debug: msg="hello from {{ ansible_hostname }}"
    11      - name: partitioning PVs for the volume group
    12        parted:
    13          device: "{{ oravg_pv }}"
    14          number: 1
    15          state: present
    16          align: optimal
    17          label: gpt

Expressed in plain English, it reads: take the block device indicated by the variable oravg_pv and create a single partition on it spanning the entire device.

As soon as I “vagrant up” the VM, it all comes together:

$ vagrant up
Bringing machine 'server1' up with 'virtualbox' provider…
==> server1: Importing base box 'ansibletestbase'…
==> server1: Matching MAC address for NAT networking…
==> server1: Setting the name of the VM: blogpost_server1_1554188252201_2080
==> server1: Clearing any previously set network interfaces…
==> server1: Preparing network interfaces based on configuration…
server1: Adapter 1: nat
server1: Adapter 2: hostonly
==> server1: Forwarding ports…
server1: 22 (guest) => 2222 (host) (adapter 1)
==> server1: Running 'pre-boot' VM customizations…
==> server1: Booting VM…
==> server1: Waiting for machine to boot. This may take a few minutes…
server1: SSH address:
server1: SSH username: vagrant
server1: SSH auth method: private key
==> server1: Machine booted and ready!
==> server1: Checking for guest additions in VM…
==> server1: Setting hostname…
==> server1: Configuring and enabling network interfaces…
server1: SSH address:
server1: SSH username: vagrant
server1: SSH auth method: private key
==> server1: Mounting shared folders…
server1: /vagrant => /home/martin/vagrant/blogpost
==> server1: Running provisioner: ansible…

Vagrant has automatically selected the compatibility mode '2.0' according to the Ansible version installed (2.7.7).

Alternatively, the compatibility mode can be specified in your Vagrantfile:

server1: Running ansible-playbook...

PLAY [blogpost] *************************************************************

TASK [Gathering Facts] ******************************************************
ok: [server1]

TASK [say hello] ************************************************************
ok: [server1] => {
    "msg": "hello from server1"
}

TASK [partitioning PVs for the volume group] ********************************
changed: [server1]

PLAY RECAP ******************************************************************
server1 : ok=3 changed=1 unreachable=0 failed=0

Great! But I forgot to partition /dev/sd[cd] in the same way as I partitioned /dev/sdb! That’s a quick fix:

- name: blogpost
  hosts: oracle_si
  vars:
  - oravg_pv: /dev/sdb
  - asm_disks:
      - /dev/sdc
      - /dev/sdd
  become: yes
  tasks:
  - name: say hello
    debug: msg="hello from {{ ansible_hostname }}"

  - name: partitioning PVs for the Oracle volume group
    parted:
      device: "{{ oravg_pv }}"
      number: 1
      state: present
      align: optimal
      label: gpt

  - name: partition block devices for ASM
    parted:
      device: "{{ item }}"
      number: 1
      state: present
      align: optimal
      label: gpt
    loop: "{{ asm_disks }}"

Re-running the provisioning script couldn’t be easier. Vagrant has a command for this: “vagrant provision”. This command re-runs the provisioning code against a VM. A quick “vagrant provision” later my system is configured exactly the way I want:

$ vagrant provision
==> server1: Running provisioner: ansible...
Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.7.7).

Alternatively, the compatibility mode can be specified in your Vagrantfile:

    server1: Running ansible-playbook...

PLAY [blogpost] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server1]

TASK [say hello] ***************************************************************
ok: [server1] => {
    "msg": "hello from server1"
}

TASK [partitioning PVs for the Oracle volume group] ****************************
ok: [server1]

TASK [partition block devices for ASM] *****************************************
changed: [server1] => (item=/dev/sdc)
changed: [server1] => (item=/dev/sdd)

PLAY RECAP *********************************************************************
server1                    : ok=4    changed=1    unreachable=0    failed=0   

This is it! Using just a few commands I can spin up VMs, test my Ansible scripts and later on when I’m happy with them, check the code into source control.