Category Archives: Cloud

Bootstrapping a VM image in Oracle Cloud Infrastructure using cloud-init

At the time of writing, Oracle’s Cloud Infrastructure as a Service (IaaS) offers two ways to attach block storage to virtual machines: paravirtualised and via iSCSI. There are important differences between the two, so please read the documentation to understand all the implications. I need all the performance I can get with my systems, so I’m going with iSCSI.

It’s the little differences

Using the paravirtualised driver couldn’t be easier: you boot the VM, and all block devices are automatically attached and available. When using iSCSI you need to run a few iscsiadm commands (once) to discover and mount the remote storage. These commands are available at the click of a button in the GUI. However, it’s been ages since I last used the GUI, and I prefer a scripted approach to cloud infrastructure. My tool of choice when it comes to “infrastructure as code” is terraform.
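For reference, the one-off attachment follows this pattern. This is only a sketch: the IQN and portal address below are made-up placeholders (the console shows the real values for your volume), and the commands are printed rather than executed since they need a live iSCSI target and root privileges.

```shell
# Placeholder iSCSI target details -- the OCI console's "iSCSI Commands
# & Information" dialog shows the real values for your block volume.
IQN="iqn.2015-12.com.oracleiaas:example"
PORTAL="169.254.2.2:3260"

# The three one-off commands: register the node, make the login
# persist across reboots, then log in to attach the device.
REGISTER="sudo iscsiadm -m node -o new -T $IQN -p $PORTAL"
PERSIST="sudo iscsiadm -m node -o update -T $IQN -n node.startup -v automatic"
LOGIN="sudo iscsiadm -m node -T $IQN -p $PORTAL -l"

# Print them for inspection instead of running them here.
printf '%s\n' "$REGISTER" "$PERSIST" "$LOGIN"
```

Running these by hand (or via remote-exec) is exactly the manual step the rest of this post gets rid of.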

Until fairly recently I have made use of the null provider combined with a remote-exec provisioner in my terraform scripts. The combination allows me to execute the iscsiadm commands necessary to attach the iSCSI devices to the VM. A number of enhancements in this space allowed me to ditch the rather cumbersome remote-exec step and use cloud-init combined with the OCI utilities instead. As I hope to show you, these two combined make the management of iSCSI devices just as simple as that of paravirtualised ones.

Cloud Init

When creating VMs I often need to perform a few extra steps that don’t quite justify the creation of a custom image. The cloud-init toolkit in OCI allows me to pass a shell script as “user_data” to the instance’s metadata, provided it’s encoded in base64. Have a look at the documentation I just referenced for more details about restrictions etc. In my terraform script, I use something like this:

resource "oci_core_instance" "docker_tf_instance" {
    metadata {
        ssh_authorized_keys = "${var.ssh_public_key}"
        user_data           = "${base64encode(file(""))}"
    }
    # remaining required attributes omitted for brevity
}


Most examples I found specify the input to the file() function as a variable; I didn’t do that here for the sake of simplicity (the file name itself is omitted above). The script I’m passing as user_data makes use of the OCI utilities.
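To make the encoding step less magical, here is a hand-rolled shell equivalent of what base64encode(file(…)) produces; the file name and its contents below are hypothetical stand-ins for the real bootstrap script:

```shell
# Hypothetical bootstrap script; the real one is shown further below.
printf '#!/bin/bash\necho bootstrapped\n' > /tmp/cloudinit.sh

# Encode without line wrapping, matching terraform's base64encode()
# (-w0 is GNU coreutils; plain 'base64' on macOS already doesn't wrap).
ENCODED=$(base64 -w0 < /tmp/cloudinit.sh)

# The instance decodes user_data back to the original script verbatim.
DECODED=$(echo "$ENCODED" | base64 -d)
echo "$DECODED"
```

The round trip is lossless, which is the whole point: cloud-init receives exactly the bytes you fed to file().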

OCI Utilities

I wasn’t aware of these until my colleague Syed asked me why I didn’t use them. It couldn’t be easier: just install an RPM package and start a service, which takes care of the iSCSI part for you. The only caveat is that, currently, they can only be used with Oracle-provided images based on Oracle Linux. Here is a really basic example of a shell script calling the OCI utilities:

$ cat 

#!/bin/bash

cp /etc/motd /etc/motd.bkp
cat << EOF > /etc/motd

I have been modified by cloud-init at $(date)

EOF

yum install -y python-oci-cli
systemctl enable ocid.service
systemctl start ocid.service
systemctl status ocid.service

The first line has to start with #!/bin/bash to indicate to cloud-init that you want to run a shell script. Following the instructions for using the OCI utilities, I install the python-oci-cli package and enable and start ocid.service. This in turn performs the iSCSI volume attachment for me – super nice! After my terraform script completed, I could log in to see if this worked:

[root@docker-tf-instance ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb      8:16   0   50G  0 disk 
sda      8:0    0 46.6G  0 disk 
├─sda2   8:2    0    8G  0 part [SWAP]
├─sda3   8:3    0 38.4G  0 part /
└─sda1   8:1    0  200M  0 part /boot/efi
[root@docker-tf-instance ~]# 

[root@docker-tf-instance ~]# systemctl status ocid.service
● ocid.service - Oracle Cloud Infrastructure utilities daemon
   Loaded: loaded (/etc/systemd/system/ocid.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-11-27 19:52:45 GMT; 19min ago
 Main PID: 15138 (python2.7)
   CGroup: /system.slice/ocid.service
           └─15138 python2.7 /usr/libexec/ocid

Nov 27 19:52:04 docker-tf-instance python2.7[15138]: ocid - INFO - Starting ocid thread 'iscsi'
Nov 27 19:52:04 docker-tf-instance python2.7[15138]: ocid - INFO - Starting ocid thread 'vnic'
Nov 27 19:52:09 docker-tf-instance python2.7[15138]: ocid - INFO - secondary VNIC script reports: Info: no changes, IP configuration is up-to-date
Nov 27 19:52:44 docker-tf-instance python2.7[15138]: ocid - INFO - Attaching iscsi device: 169.254.a.b:3260 (
Nov 27 19:52:45 docker-tf-instance systemd[1]: Started Oracle Cloud Infrastructure utilities daemon.

You can see cloud-init in action by checking /var/log/messages for occurrences of “cloud-init”. The file /var/log/cloud-init.log doesn’t contain information relevant to the user_data processing, by the way. If you want to see how your script arrived on the VM, check /var/lib/cloud/instance/user-data.txt.
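A small sketch that bundles those checks, skipping gracefully when the files are absent (for instance when you run it somewhere other than the VM):

```shell
# cloud-init's activity lands in the system log; the raw user_data
# script is kept under /var/lib/cloud. Both checks degrade gracefully
# when the files don't exist or need privileges we don't have.
if [ -r /var/log/messages ]; then
    grep -i 'cloud-init' /var/log/messages | tail -5
else
    echo "/var/log/messages not readable here"
fi

if [ -r /var/lib/cloud/instance/user-data.txt ]; then
    cat /var/lib/cloud/instance/user-data.txt
else
    echo "no user-data.txt on this host"
fi
CHECKED=done   # marker showing both checks ran
```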


It would seem you can have your cake and eat it. Using cloud-init for bootstrapping my VM and the OCI utilities to attach my block devices, I don’t need to write any remote-exec hacks with the null provider, and I can use the iSCSI volumes with the same ease as the paravirtualised ones – without having to make compromises. I like it!

Log in to Ubuntu VMs in Oracle Cloud Infrastructure

When I learned that Oracle was providing Ubuntu images in Oracle Cloud Infrastructure (OCI) I was a bit surprised at first. After all, Oracle provides a great Enterprise Linux distribution in the form of Oracle Linux. As an Ubuntu fan I do of course appreciate the addition of Ubuntu to the list of supported distributions. In fact it doesn’t end there: have a look at the complete list of Oracle-provided images to see what’s available.

Trying Ubuntu LTS

I wanted to give Ubuntu a spin on OCI and decided to start a small VM using the 16.04 LTS image. I have been using this release quite heavily in the past and have yet to make the transition to 18.04. Spinning up the 16.04 VM was easily done using my terraform script. Immediately after the terraform prompt returned, I faced a slight issue: I couldn’t log in:

$ ssh opc@w.x.y.z
The authenticity of host ... can't be established.
opc@w.x.y.z: Permission denied (publickey)

This is entirely my fault; for some reason I didn’t scroll down within the page to read more about users. Assuming the account created during VM provisioning would be the same as for the Oracle Linux image, I tried logging in as user “opc”. The result is what I showed you in the listing above.

The clue about users is found in Linux Image Details, section “Users”, on the aforementioned documentation page. I am quoting verbatim because I couldn’t possibly say it any better:

For instances created using the Ubuntu image, the user name ubuntu is created automatically. The ubuntu user has sudo privileges and is configured for remote access over the SSH v2 protocol using RSA keys. The SSH public keys that you specify while creating instances are added to the /home/ubuntu/.ssh/authorized_keys file.

There it is.

It seems I wasn’t the only one, and beginning with the Canonical-Ubuntu-16.04-2018.11.15-0 image, a message is displayed when you try to log in as opc:

$ ssh opc@w.x.y.z
Warning: Permanently added ... to the list of known hosts
Please login as the user "ubuntu" rather than the user "opc".

Connection to w.x.y.z closed

So no more missing this important piece of information :)

Terraforming the Oracle Cloud: choosing and using an image family

A few times now I have presented on “cloud deployments done the cloud way”, sharing lessons learned in the changing world I find myself in. It’s a lot of fun, and so far I have been far too busy to blog about things I learned by trial and error. Working with Terraform turned out to be a very good source for blog posts, so I’ll put a few of these up in the hope of saving you a few minutes.

This blog post is all about creating Ubuntu images in Oracle Cloud Infrastructure (OCI) using terraform. The technique is equally applicable for other Linux image types though. In case you find this post later using a search engine, here is some version information that might put everything into context:

$ ./terraform version
Terraform v0.11.10
+ provider.null v1.0.0
+ provider.oci v3.7.0

I used the “null” provider to run a few post-installation commands, as shown in various terraform examples for OCI. Right now I’m trying to work out whether I can do the same in a different way. If I am successful, you can expect a blog post to follow…

Creating a Ubuntu 18.04 LTS image in OCI

To create the Ubuntu image (or any other image for that matter), I need information about the image family. Documentation about image families can be found in the OCI documentation on Oracle-provided images.

Scrolling down and selecting the entry from the left-hand side, I found the link to the Ubuntu 18.04 LTS image family. Each supported image has its own documentation page containing crucial data: one OCID per region. At the time of writing, the latest Ubuntu image was named Canonical-Ubuntu-18.04-2018.11.17-0. An OCID is short for Oracle Cloud Identifier, and it’s used in many places in OCI. There are different OCIDs for the image depending on the region; the one I used was for Frankfurt.
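As an aside, OCIDs follow a documented dot-separated layout, which you can pick apart in the shell; the image OCID below is fabricated purely for illustration (real unique-id parts are far longer):

```shell
# A fabricated image OCID following the documented layout:
# ocid1.<resource type>.<realm>.<region>.<unique id>
OCID="ocid1.image.oc1.eu-frankfurt-1.aaaaexampleuniqueid"

# Split on the dots (POSIX-portable) to inspect the fields.
IFS=. ; set -- $OCID ; unset IFS
VERSION=$1 TYPE=$2 REALM=$3 REGION=$4 UNIQUE=$5
echo "type=$TYPE realm=$REALM region=$REGION"
```

The region field is why each image page lists a separate OCID per location: the identifier itself is region-specific.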

With the OCID at hand, I can open my favourite code editor and start putting the terraform script together. I create instances in OCI using the oci_core_instance type, documented at the terraform website.

Be careful: many of the references and code examples I found about oci_core_instance are written for older versions of the terraform provider, and I noticed some attributes used in the examples are deprecated. It might be useful to compare the source code examples against the current documentation.

Part of the definition of an oci_core_instance requires the specification of the operating system in the source_details {} section. To create the Ubuntu VM in the Frankfurt region, I have to specify – amongst other things of course – this:

resource "oci_core_instance" "docker_tf_instance" {
    source_details {
        source_type = "image"
        source_id   = ""
    }
    # remaining required attributes omitted for brevity
}

The actual OCID is far longer; the example above is shortened for the sake of readability, as I didn’t like it wrapping around the text box and destroying my layout. Make sure you use the correct OCID ;)

With the information at hand I can create the Ubuntu VM and connect to it using the specified SSH key. Have fun!