This is a really short post (by my standards at least) demonstrating how I ensure device name persistence in Oracle Cloud Infrastructure (OCI). Device name persistence matters for many reasons, not least because my Ansible scripts expect a given block device to be of a certain size and used for a specific purpose. And since I’m too lazy to write discovery code in Ansible, I just want to be able to use /dev/oracleoci/oraclevdb for LVM so that I can install the database.
The goal is to provision a VM with a sufficient number of block devices for use with the Oracle database. I wrote about the basics of device name persistence in December last year. In my earlier post I used the OCI Command Line Interface (CLI). Today I rewrote my code, switching from shell to Terraform.
As always I shall warn you that creating cloud resources as shown in this post will incur cost, so please make sure you are aware of that. You should also be authorised to spend money if you use this code for your own purposes.
Terraform Compute and Block Storage
When creating a VM in OCI, you make use of the oci_core_instance Terraform resource. Amongst the arguments you pass to it are the (operating system) image and the boot volume size. The boot volume is attached to the VM instance without any further input on your behalf.
Let’s assume you have already defined a VM resource named sitea_instance in your Terraform code.
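If you need a starting point, a minimal sketch of such a resource might look like the following. The shape, shape_config, subnet and image values are placeholders rather than the exact configuration used for this post:

resource "oci_core_instance" "sitea_instance" {
  availability_domain = data.oci_identity_availability_domains.local_ads.availability_domains.0.name
  compartment_id      = var.compartment_ocid
  display_name        = "sitea"

  # placeholder shape; pick whatever meets your database's requirements
  shape = "VM.Standard.E4.Flex"

  shape_config {
    ocpus         = 2
    memory_in_gbs = 32
  }

  create_vnic_details {
    # var.subnet_ocid is an assumed variable
    subnet_id = var.subnet_ocid
  }

  source_details {
    source_type = "image"
    # var.image_ocid is an assumed variable pointing at an Oracle Linux image
    source_id = var.image_ocid
    boot_volume_size_in_gbs = 50
  }
}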
I generally attach 5 block volumes to my VMs unless performance requirements mandate a different approach:

- Block device number 1 hosts the database binaries
- Devices number 2 and 3 are used for +DATA
- The remaining devices (4 and 5) will be used for +RECO
Creating block volumes
The first step is to create block volumes. I know I want five, and I know they need to end up as /dev/oracleoci/oraclevd[b-f]. Since I’m pretty lazy I thought I’d go with some kind of loop instead of hard-coding five block devices. It should also allow for more flexibility in the long run.
I tried to use the count meta-argument but failed to get it to work the way I wanted, which might well be a PEBKAC issue. The other option in Terraform is the for_each meta-argument, which sounded a lot better for my purpose: for_each iterates over a map or a set of strings, exposing the current element as each.value. To keep my code flexible I decided to store the future block devices’ names in a variable:
variable "block_volumes" { type = list(string) default = [ "oraclevdb", "oraclevdc", "oraclevdd", "oraclevde", "oraclevdf" ] }
Remember that Oracle assigns /dev/oracleoci/oraclevda to the boot volume. You definitely want to leave that one alone.
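Since the device names live in a variable, attaching more volumes later is simply a matter of extending the list, for example in a (purely hypothetical) terraform.tfvars:

# hypothetical terraform.tfvars: a sixth volume without touching the code
block_volumes = [
  "oraclevdb",
  "oraclevdc",
  "oraclevdd",
  "oraclevde",
  "oraclevdf",
  "oraclevdg"
]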
Next I’ll use the for_each meta-argument to get at the block device names. I’m not sure if this is considered good code; all I know is that it does the job. The Terraform resource to create block devices is named oci_core_volume:
resource "oci_core_volume" "sitea_block_volume" { for_each = toset(var.block_volumes) availability_domain = data.oci_identity_availability_domains.local_ads.availability_domains.0.name compartment_id = var.compartment_ocid display_name = "sitea-${each.value}" size_in_gbs = 50 }
This takes care of creating five block volumes. On their own they aren’t very useful yet; they need to be attached to a VM.
Attaching block devices to the VM
In the next step I have to create a block volume attachment for each volume. This is where the count meta-argument failed me, as I couldn’t find a way to generate the persistent device name. I got around that issue using for_each, as shown here:
resource "oci_core_volume_attachment" "sitea_block_volume_attachement" { for_each = toset(var.block_volumes) attachment_type = "iscsi" instance_id = oci_core_instance.sitea_instance.id volume_id = oci_core_volume.sitea_block_volume[each.value].id device = "/dev/oracleoci/${each.value}" }
Using the contents of each.value I can refer to the block volume and also assign a suitable device name. Note that I’m specifying “iscsi” as the attachment type. Instead of the remote-exec provisioner I rely on cloud-init to make my iSCSI devices available to the VM.
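The cloud-init part boils down to discovering and logging in to the iSCSI targets. A minimal sketch, assuming user_data is passed via the instance’s metadata argument (the loop bounds are an assumption; 169.254.2.0/24 is the well-known portal range OCI uses for iSCSI attachments):

# inside resource "oci_core_instance" "sitea_instance" { ... }
metadata = {
  user_data = base64encode(<<-EOT
    #cloud-config
    runcmd:
      # discover and log in to all iSCSI targets presented by OCI;
      # $${i} keeps Terraform from interpolating the shell variable
      - for i in $(seq 2 33); do iscsiadm -m discoverydb -D -t sendtargets -p 169.254.2.$${i}:3260 || true; done
      - iscsiadm -m node -l
  EOT
  )
}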
The result
Once the Terraform script completes, I have a VM with block storage ready for Ansible provisioning scripts.
[opc@sitea ~]$ ls -l /dev/oracleoci/
total 0
lrwxrwxrwx. 1 root root 6 Mar 25 14:47 oraclevda -> ../sda
lrwxrwxrwx. 1 root root 7 Mar 25 14:47 oraclevda1 -> ../sda1
lrwxrwxrwx. 1 root root 7 Mar 25 14:47 oraclevda2 -> ../sda2
lrwxrwxrwx. 1 root root 7 Mar 25 14:47 oraclevda3 -> ../sda3
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdb -> ../sdc
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdc -> ../sdd
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdd -> ../sde
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevde -> ../sdb
lrwxrwxrwx. 1 root root 6 Mar 25 14:51 oraclevdf -> ../sdf
[opc@sitea ~]$ lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   50G  0 disk
├─sda1               8:1    0  100M  0 part /boot/efi
├─sda2               8:2    0    1G  0 part /boot
└─sda3               8:3    0 48,9G  0 part
  ├─ocivolume-root 252:0    0 38,9G  0 lvm  /
  └─ocivolume-oled 252:1    0   10G  0 lvm  /var/oled
sdb                  8:16   0   50G  0 disk
sdc                  8:32   0   50G  0 disk
sdd                  8:48   0   50G  0 disk
sde                  8:64   0   50G  0 disk
sdf                  8:80   0   50G  0 disk
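With the device names guaranteed, the LVM setup mentioned in the introduction no longer needs any discovery logic. A hypothetical example (the volume group and logical volume names are made up, not taken from my Ansible code):

# hypothetical follow-up: put the binaries volume under LVM control
sudo pvcreate /dev/oracleoci/oraclevdb
sudo vgcreate oracle_vg /dev/oracleoci/oraclevdb
sudo lvcreate -n oracle_lv -l 100%FREE oracle_vg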
Summary
There are many ways to complete a task, and cloud providers usually offer plenty of them. I previously wrote about ensuring device name persistence using the OCI CLI, whereas this post covers Terraform. Looking back and comparing both, I have to say that I like the new approach better.