At the time of writing, Oracle’s Cloud Infrastructure as a Service (IaaS) offers two ways to connect block storage to virtual machines: paravirtualised and via iSCSI. There are important differences between the two, so please read the documentation to understand all the implications. I need all the performance I can get with my systems, so I’m going with iSCSI.
It’s the little differences
Using the paravirtualised driver couldn’t be easier: you boot the VM, and all block devices are automatically attached and available. When using iSCSI, on the other hand, you need to run a few iscsiadm commands (once) to discover and log in to the remote storage; these commands are available at the click of a button in the GUI. It’s been ages since I last used the GUI though, and I prefer a scripted approach to cloud infrastructure. My tool of choice when it comes to “infrastructure as code” is Terraform.
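Before we get to that, here is roughly the pattern those commands follow; the target IQN and portal IP below are placeholders for the exact values the GUI displays for your particular volume:

# placeholders: substitute the IQN and portal IP shown in the console for your volume
sudo iscsiadm -m node -o new -T <IQN> -p <portal-ip>:3260
sudo iscsiadm -m node -o update -T <IQN> -n node.startup -v automatic
sudo iscsiadm -m node -T <IQN> -p <portal-ip>:3260 -l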
Until fairly recently I made use of the null provider combined with a remote-exec provisioner in my Terraform scripts. The combination allowed me to execute the iscsiadm commands necessary to attach the iSCSI devices to the VM. A number of enhancements in this space allowed me to ditch the rather cumbersome remote-exec step and use cloud-init combined with the OCI utilities instead. As I hope to show you, combining the two makes the management of iSCSI devices just as simple as that of paravirtualised ones.
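For context, here is a minimal sketch of how I used to wire the commands from above into Terraform. The resource names and SSH details are illustrative rather than lifted from my actual scripts, though the iqn, ipv4 and port attributes are genuinely exported by the volume attachment resource in the OCI provider:

resource "null_resource" "attach_iscsi" {
  # illustrative: assumes a volume attachment resource with this name exists
  depends_on = ["oci_core_volume_attachment.docker_tf_volume_attachment"]

  connection {
    type        = "ssh"
    host        = "${oci_core_instance.docker_tf_instance.public_ip}"
    user        = "opc"
    # var.ssh_private_key is assumed to hold the path to the matching private key
    private_key = "${file(var.ssh_private_key)}"
  }

  # the same commands the console would have shown, fed with the
  # attachment's exported iqn/ipv4/port attributes
  provisioner "remote-exec" {
    inline = [
      "sudo iscsiadm -m node -o new -T ${oci_core_volume_attachment.docker_tf_volume_attachment.iqn} -p ${oci_core_volume_attachment.docker_tf_volume_attachment.ipv4}:${oci_core_volume_attachment.docker_tf_volume_attachment.port}",
      "sudo iscsiadm -m node -o update -T ${oci_core_volume_attachment.docker_tf_volume_attachment.iqn} -n node.startup -v automatic",
      "sudo iscsiadm -m node -T ${oci_core_volume_attachment.docker_tf_volume_attachment.iqn} -p ${oci_core_volume_attachment.docker_tf_volume_attachment.ipv4}:${oci_core_volume_attachment.docker_tf_volume_attachment.port} -l",
    ]
  }
}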
Cloud Init
When creating VMs I often need to perform a few extra steps that don’t quite justify the creation of a custom image. The cloud-init toolkit in OCI allows me to pass a shell script as “user_data” to the instance’s metadata, provided it’s encoded in base64. Have a look at the documentation I just referenced for more details about restrictions and the like. In my Terraform script, I use something like this:
resource "oci_core_instance" "docker_tf_instance" {
[...]
metadata {
ssh_authorized_keys = "${var.ssh_public_key}"
user_data = "${base64encode(file("bootstrap.sh"))}"
}
[...]
}
The script I’m passing as user_data makes use of the OCI utilities, which I’ll cover next. Most examples I found specify the input to the file() function as a variable; I didn’t do this above for the sake of simplicity, but the variant is easy enough to sketch:
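Purely for illustration, the variable-based version might look like this; the variable name bootstrap_script is mine and not part of any official example:

variable "bootstrap_script" {
  # hypothetical variable; points at the user data script on disk
  description = "path to the script passed to cloud-init as user_data"
  default     = "bootstrap.sh"
}

resource "oci_core_instance" "docker_tf_instance" {
  [...]
  metadata {
    ssh_authorized_keys = "${var.ssh_public_key}"
    user_data           = "${base64encode(file(var.bootstrap_script))}"
  }
  [...]
}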
OCI Utilities
I wasn’t aware of these until my colleague Syed asked me why I didn’t use them. It couldn’t be easier: just install an RPM package and start a service, and this will take care of the iSCSI part for you. The only caveat is that currently they can only be used with Oracle-provided images based on Oracle Linux. Here is a really basic example of a shell script calling the OCI utilities:
$ cat bootstrap.sh
#!/bin/bash
# back up the existing message of the day, then leave a marker
# to prove at the next login that cloud-init really ran
cp /etc/motd /etc/motd.bkp
cat << EOF > /etc/motd
I have been modified by cloud-init at $(date)
EOF
# install the OCI utilities and bring up the ocid service,
# which takes care of attaching the iSCSI volumes
yum install -y python-oci-cli
systemctl enable ocid.service
systemctl start ocid.service
systemctl status ocid.service
The first line has to start with #!/bin/bash to indicate to cloud-init that you want to run a shell script. Following the instructions for using the OCI utilities, I install the python-oci-cli package and start ocid.service. This in turn performs the iSCSI volume attachment for me – super nice! After my Terraform script completed, I logged in to see if it worked:
[root@docker-tf-instance ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb      8:16   0   50G  0 disk
sda      8:0    0 46.6G  0 disk
├─sda2   8:2    0    8G  0 part [SWAP]
├─sda3   8:3    0 38.4G  0 part /
└─sda1   8:1    0  200M  0 part /boot/efi
[root@docker-tf-instance ~]#
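The 50G volume shows up as sdb without a single manual iscsiadm call. If you want to double-check the underlying iSCSI session, this standard open-iscsi command should do (I’m omitting its output here):

# list the active iSCSI sessions the ocid service has logged in to
iscsiadm -m session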
[root@docker-tf-instance ~]# systemctl status ocid.service
● ocid.service - Oracle Cloud Infrastructure utilities daemon
   Loaded: loaded (/etc/systemd/system/ocid.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-11-27 19:52:45 GMT; 19min ago
 Main PID: 15138 (python2.7)
   CGroup: /system.slice/ocid.service
           └─15138 python2.7 /usr/libexec/ocid

Nov 27 19:52:04 docker-tf-instance python2.7[15138]: ocid - INFO - Starting ocid thread 'iscsi'
Nov 27 19:52:04 docker-tf-instance python2.7[15138]: ocid - INFO - Starting ocid thread 'vnic'
...
Nov 27 19:52:09 docker-tf-instance python2.7[15138]: ocid - INFO - secondary VNIC script reports: Info: no changes, IP configuration is up-to-date
Nov 27 19:52:44 docker-tf-instance python2.7[15138]: ocid - INFO - Attaching iscsi device: 169.254.a.b:3260 (iqn.2015-12.com.oracleiaas:e1af1...)
Nov 27 19:52:45 docker-tf-instance systemd[1]: Started Oracle Cloud Infrastructure utilities daemon.
You can see cloud-init in action by checking /var/log/messages for occurrences of “cloud-init”. By the way, the file /var/log/cloud-init.log doesn’t contain information relevant to the processing of the user data. If you want to see how your script arrived on the VM, check /var/lib/cloud/instance/user-data.txt.
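Putting that together, a quick check on the VM looks like this (the paths are the ones just mentioned):

# trace cloud-init's activity during boot
grep cloud-init /var/log/messages
# inspect the script exactly as it was delivered to the instance
cat /var/lib/cloud/instance/user-data.txt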
Summary
It would seem you can have your cake and eat it, too. Using cloud-init to bootstrap my VM and the OCI utilities to attach my block devices, I don’t need to write any remote-exec hacks with the null provider, and I get to use the iSCSI volumes with the same ease as the paravirtualised ones, without having to make compromises. I like it!