Tag Archives: Vagrant

Vagrant: mapping a Virtualbox VM to a Vagrant environment

This is a small post that will hopefully save you a few minutes when mapping Vagrant environments to VirtualBox VMs.

I typically have lots of Vagrant environments defined. I love Vagrant as a technology: it makes it super easy to spin up Virtual Machines (VMs) and learn about new technologies.

Said Vagrant environments obviously show up as VMs in VirtualBox. To make things more interesting I have a few more VirtualBox VMs that don’t map to a Vagrant environment. Add a naming convention that has grown organically over time, and I occasionally find myself at a loss as to which VirtualBox VM maps to which Vagrant environment. Can a mapping be created? Yep, and it’s quite simple actually. Here is what I found useful.

Directory structure

My Vagrant directory structure is quite simple: I defined ${HOME}/vagrant as the top-level directory, with a sub-directory containing all my (custom) boxes. Apart from ~/vagrant/boxes I create a further sub-directory per project. For example:

[martin@ryzen: vagrant]$ ls -ld *oracle* boxes
drwxrwxr-x 2 martin martin 4096 Nov 23 16:52 boxes
drwxrwxr-x 3 martin martin   41 Feb 16  2021 oracle_19c_dg
drwxrwxr-x 3 martin martin   41 Nov 19  2020 oracle_19c_ol7
drwxrwxr-x 3 martin martin   41 Jan  6  2021 oracle_19c_ol8
drwxrwxr-x 3 martin martin   41 Nov 25 12:54 oracle_xe

But … which of my VirtualBox VMs belongs to the oracle_xe environment?

Mapping a Vagrant environment to a VirtualBox VM

Vagrant keeps a lot of metadata in the project’s .vagrant directory. Continuing with the oracle_xe example, here is what it stores:

[martin@buildhost: oracle_xe]$ tree .vagrant/
.vagrant/
├── machines
│   └── oraclexe
│       └── virtualbox
│           ├── action_provision
│           ├── action_set_name
│           ├── box_meta
│           ├── creator_uid
│           ├── id
│           ├── index_uuid
│           ├── synced_folders
│           └── vagrant_cwd
├── provisioners
│   └── ansible
│       └── inventory
│           └── vagrant_ansible_inventory
└── rgloader
    └── loader.rb

7 directories, 10 files

Looking at the above output I guess I should start with .vagrant/machines/.

The machine name (oraclexe) is derived from the Vagrantfile. Out of habit I create a config.vm.define section per VM (even when I create just one), as you can see in my shortened Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  
  config.vm.define "oraclexe" do |xe|
    xe.vm.box = "ol7"
    xe.vm.box_url = "file:///home/martin/vagrant/boxes/ol7.json"

    ...

    xe.vm.provision "ansible" do |ansible|
      ansible.playbook = "setup.yml"
    end
  end
end

In case you don’t give your VMs a name you should find a directory named default instead.

As I’m using Vagrant together with VirtualBox I’m not surprised to find a sub-directory named virtualbox.

Finally! You see the VM’s metadata in that directory. The VM’s ID can be found in .vagrant/machines/oraclexe/virtualbox/id. The file contains the internal ID VirtualBox uses to identify VMs. Using that knowledge to my advantage I can create the lookup as shown here:

[martin@buildhost: oracle_xe]$ vboxmanage list vms | grep $(cat .vagrant/machines/oraclexe/virtualbox/id)
"oraclexe" {67031773-bad9-4325-937b-e471d02a56a3}

Voila! This wasn’t particularly hard since the VirtualBox VM is named oraclexe as well. Nevertheless I found this technique works well regardless of how your Vagrantfile is structured.
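
If you would like to map all environments in one go, a small loop over the id files does the trick. Here is a minimal sketch, assuming all environments live directly under ~/vagrant and use the VirtualBox provider:

# print each Vagrant machine's id file next to the matching VirtualBox VM
for id_file in ~/vagrant/*/.vagrant/machines/*/virtualbox/id; do
    printf '%s => ' "${id_file}"
    vboxmanage list vms | grep -F "$(cat "${id_file}")"
done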

Happy Automating!

Building a Debian 11 Vagrant Box using Packer and Ansible

Sometimes it might be necessary to create one’s own Vagrant base box for reasons too numerous to mention here. Let’s assume you want to build a new base box for Debian 11 (bullseye) to run on Virtualbox. Previously I would have run through the installation process followed by customising the VM’s installed packages and installing Guest Additions before creating the base box. As it turns out, this repetitive (and boring) process isn’t required as pretty much the whole thing can be automated using Packer.

Debian 11 is still quite new and a few things related to the Guest Additions don’t work yet but it can’t hurt to be prepared.

As I’m notoriously poor at keeping my code in sync between my various computers I created a new repository on Github for sharing my Packer builds. If you are interested head over to https://github.com/martincarstenbach/packer-blogposts. As with every piece of code you find online, it’s always a good idea to vet it first before even considering using it. Kindly take the time to read the license as well as the README associated with the repository in addition to this post.

Please note this is code I wrote for myself, a little more generic than it might have to be, but ultimately you’ll have to read the code and adjust it for your own purposes. The preseed and kickstart files are strictly single-purpose and shouldn’t be used for anything other than what is covered in this post. My Debian 11 base box lives up to its name: it’s really basic. Apart from SSH and the standard utilities (plus VirtualBox Guest Additions) I decided not to include anything else.

Software Releases

I used Packer’s VirtualBox ISO builder, which is documented in great detail on the Packer website. Further software used:

  • Ubuntu 20.04 LTS
  • Ansible 2.9
  • Packer 1.7.4
  • Virtualbox 6.1.26

All of these were current at the time of writing.

Preparing the Packer build JSON and Debian Preseed file

I have missed the opportunity to create all my computer systems with the same directory structure, hence there are small, subtle differences between them. To accommodate them all I created a small shell script, prepare-debian11.sh. It prompts me for the most important pieces of information and creates both the preseed file and the JSON build file required by Packer.

martin@ubuntu:~/packer-blogposts$ bash prepare-debian11.sh 

INFO: preparing your packer environment for the creation of a Debian 11 Vagrant base box

Enter your local Debian mirror (http://ftp2.de.debian.org): 
Enter the mirror directory (/debian): 

/home/martin/.ssh/id_rsa.pub

Enter the full path to your public SSH key (/home/martin/.ssh/id_rsa.pub): 
Identity added: /home/martin/.ssh/id_rsa (/home/martin/.ssh/id_rsa)
Enter the location of the Debian 11 network installation media (/m/stage/debian-11.0.0-amd64-netinst.iso):
Enter the full path to store the new vagrant box (/home/martin/vagrant/boxes/debian-11-01.box):/home/martin/vagrant/boxes/blogpost.box    

INFO: preparation complete, next run packer validate vagrant-debian-11.json && packer build vagrant-debian-11.json

One of the particularities of my Packer builds is the use of agent authentication. My number one rule when coding is to never store authentication details in files if it can be avoided at all. Relying on the SSH agent to connect to the VirtualBox VM while it’s created allows me to do just that, at least for Packer. Since I tend to forget to add my Vagrant SSH key to the agent, the prepare script does that for me.

Sadly I have to store the vagrant user’s password in the preseed file. I can live with that this time as the password should be “vagrant” by convention, and I didn’t break with it. Out of habit I encrypted the password anyway; it’s one of those industry best-known methods worth applying every time.
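
For reference, here is one way to generate such a hash. This is a sketch assuming the mkpasswd utility from Debian’s whois package is installed; the resulting string would go into the preseed file’s passwd/user-password-crypted setting:

# create a SHA-512 crypt hash of the conventional "vagrant" password
mkpasswd -m sha-512 vagrant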

Building the Vagrant Base Box

Once the build file and its corresponding preseed file have been created by the prepare script, I suggest you review them before taking any further action. Make any changes you like, then proceed by running packer validate followed by packer build once you understand and agree with what happens next. The latter of the two commands kicks off the build, and you’ll see the magic of automation for yourself ;)

Here is a sample of one of my sessions:

martin@ubuntu:~/packer-blogposts$ packer build vagrant-debian-11.json
virtualbox-iso: output will be in this color.

==> virtualbox-iso: Retrieving Guest additions
==> virtualbox-iso: Trying /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Trying /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: /usr/share/virtualbox/VBoxGuestAdditions.iso => /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Retrieving ISO
==> virtualbox-iso: Trying file:///m/stage/debian-11.0.0-amd64-netinst.iso
==> virtualbox-iso: Trying file:///m/stage/debian-11.0.0-amd64-netinst.iso?checksum=sha256%3Aae6d563d2444665316901fe7091059ac34b8f67ba30f9159f7cef7d2fdc5bf8a
==> virtualbox-iso: file:///m/stage/debian-11.0.0-amd64-netinst.iso?checksum=sha256%3Aae6d563d2444665316901fe7091059ac34b8f67ba30f9159f7cef7d2fdc5bf8a => /m/stage/debian-11.0.0-amd64-netinst.iso
==> virtualbox-iso: Starting HTTP server on port 8765
==> virtualbox-iso: Using local SSH Agent to authenticate connections for the communicator...
==> virtualbox-iso: Creating virtual machine...
==> virtualbox-iso: Creating hard drive output-virtualbox-iso-debian11base/debian11base.vdi with size 20480 MiB...
==> virtualbox-iso: Mounting ISOs...
    virtualbox-iso: Mounting boot ISO...
==> virtualbox-iso: Creating forwarded port mapping for communicator (SSH, WinRM, etc) (host port 2302)
==> virtualbox-iso: Executing custom VBoxManage commands...
    virtualbox-iso: Executing: modifyvm debian11base --memory 2048
    virtualbox-iso: Executing: modifyvm debian11base --cpus 2
==> virtualbox-iso: Starting the virtual machine...
==> virtualbox-iso: Waiting 10s for boot...
==> virtualbox-iso: Typing the boot command...
==> virtualbox-iso: Using SSH communicator to connect: 127.0.0.1
==> virtualbox-iso: Waiting for SSH to become available...
==> virtualbox-iso: Connected to SSH!
==> virtualbox-iso: Uploading VirtualBox version info (6.1.26)
==> virtualbox-iso: Uploading VirtualBox guest additions ISO...
==> virtualbox-iso: Provisioning with Ansible...
    virtualbox-iso: Setting up proxy adapter for Ansible....
==> virtualbox-iso: Executing Ansible: ansible-playbook -e packer_build_name="virtualbox-iso" -e packer_builder_type=virtualbox-iso -e packer_http_addr=10.0.2.2:8765 --ssh-extra-args '-o IdentitiesOnly=yes' -e ansible_ssh_private_key_file=/tmp/ansible-key610730318 -i /tmp/packer-provisioner-ansible461216853 /home/martin/devel/packer-blogposts/ansible/vagrant-debian-11-guest-additions.yml
    virtualbox-iso:
    virtualbox-iso: PLAY [all] *********************************************************************
    virtualbox-iso:
    virtualbox-iso: TASK [Gathering Facts] *********************************************************
    virtualbox-iso: ok: [default]
    virtualbox-iso: [WARNING]: Platform linux on host default is using the discovered Python
    virtualbox-iso: interpreter at /usr/bin/python3, but future installation of another Python
    virtualbox-iso: interpreter could change this. See https://docs.ansible.com/ansible/2.9/referen
    virtualbox-iso: ce_appendices/interpreter_discovery.html for more information.
    virtualbox-iso:
    virtualbox-iso: TASK [install additional useful packages] **************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [create a temporary mount point for vbox guest additions] *****************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [mount guest additions ISO read-only] *************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [execute guest additions script] ******************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [unmount guest additions ISO] *********************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [remove the temporary mount point] ****************************************
    virtualbox-iso: ok: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [upgrade all packages] ****************************************************
    virtualbox-iso: ok: [default]
    virtualbox-iso:
    virtualbox-iso: PLAY RECAP *********************************************************************
    virtualbox-iso: default                    : ok=8    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    virtualbox-iso:
==> virtualbox-iso: Gracefully halting virtual machine...
==> virtualbox-iso: Preparing to export machine...
    virtualbox-iso: Deleting forwarded port mapping for the communicator (SSH, WinRM, etc) (host port 2302)
==> virtualbox-iso: Exporting virtual machine...
    virtualbox-iso: Executing: export debian11base --output output-virtualbox-iso-debian11base/debian11base.ovf
==> virtualbox-iso: Cleaning up floppy disk...
==> virtualbox-iso: Deregistering and deleting VM...
==> virtualbox-iso: Running post-processor: vagrant
==> virtualbox-iso (vagrant): Creating a dummy Vagrant box to ensure the host system can create one correctly
==> virtualbox-iso (vagrant): Creating Vagrant box for 'virtualbox' provider
    virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso-debian11base/debian11base-disk001.vmdk
    virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso-debian11base/debian11base.ovf
    virtualbox-iso (vagrant): Renaming the OVF to box.ovf...
    virtualbox-iso (vagrant): Compressing: Vagrantfile
    virtualbox-iso (vagrant): Compressing: box.ovf
    virtualbox-iso (vagrant): Compressing: debian11base-disk001.vmdk
    virtualbox-iso (vagrant): Compressing: metadata.json
Build 'virtualbox-iso' finished after 13 minutes 43 seconds.

==> Wait completed after 13 minutes 43 seconds

==> Builds finished. The artifacts of successful builds are:
--> virtualbox-iso: 'virtualbox' provider box: /home/martin/vagrant/boxes/blogpost.box

The operation should complete with the message shown in the output: build complete, box created and placed in the directory specified. From that point onward you can add it to your Vagrant box inventory.
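
A hedged example of that last step; the box name debian11 is my assumption, pick whatever fits your naming convention:

vagrant box add --name debian11 /home/martin/vagrant/boxes/blogpost.box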

Happy Automation!

Automating Vagrant Box versioning

The longer I work in IT the more I dislike repetitive processes. For example, when updating my Oracle Linux 8 Vagrant Base Box I repeat the same process over and over:

  • Boot the VirtualBox (source) VM
  • Enable port forwarding for SSH
  • SSH to the VM to initiate the update via dnf update -y && reboot
  • Run vagrant package, calculate the SHA256 sum, modify the metadata file
  • Use vagrant box update to make it known to vagrant

There has to be a better way, and in fact there is. A little bit of shell scripting later, all I need to do is run my “update base box” script and grab a coffee while everything happens behind the scenes. Most of the exercise laid out above is quite boring, but I thought I’d share how I’m modifying the metadata file in the hope of saving you a little bit of time and effort. If you would like a more thorough explanation of the process please head over to my previous post.

Updating the Metadata File

If you would like to version-control your vagrant boxes locally, you need a metadata file, maybe something similar to ol8.json shown below. It defines my Oracle Linux 8 boxes (at the moment there is only one):

$ cat ol8.json 
{
  "name": "ol8",
  "description": "Martins Oracle Linux 8",
  "versions": [
    {
      "version": "8.4.0",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///vagrant/boxes/ol8_8.4.0.box",
          "checksum": "b28a3413d33d4917bc3b8321464c54f22a12dadd612161b36ab20754488f4867",
          "checksum_type": "sha256"
        }
      ]
    }
  ]
}

For the sake of argument, let’s assume I want to upgrade my Oracle Linux 8.4.0 box to the latest and greatest packages that were available at the time of writing. As it’s a minor update I’ll call the new version 8.4.1. To keep the post short and (hopefully) entertaining I’m skipping the upgrade of the VM.

Option (1): jq

Fast forward to the metadata update: I need to add a new element to the versions array. I could have used jq for that purpose and it would have been quite easy:

$ jq '.versions += [{
>       "version": "8.4.1",
>       "providers": [
>         {
>           "name": "virtualbox",
>           "url": "file:///vagrant/boxes/ol8_8.4.1.box",
>           "checksum": "ecb3134d7337a9ae32c303e2dee4fa6e5b9fbbea5a38084097a6b5bde2a56671",
>           "checksum_type": "sha256"
>         }
>       ]
>     }]' ol8.json
{
  "name": "ol8",
  "description": "Martins Oracle Linux 8",
  "versions": [
    {
      "version": "8.4.0",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///vagrant/boxes/ol8_8.4.0.box",
          "checksum": "b28a3413d33d4917bc3b8321464c54f22a12dadd612161b36ab20754488f4867",
          "checksum_type": "sha256"
        }
      ]
    },
    {
      "version": "8.4.1",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///vagrant/boxes/ol8_8.4.1.box",
          "checksum": "ecb3134d7337a9ae32c303e2dee4fa6e5b9fbbea5a38084097a6b5bde2a56671",
          "checksum_type": "sha256"
        }
      ]
    }
  ]
}

That would be too easy ;) Sadly I don’t have jq available on all the systems I’d like to run this script on. But wait, I have Python available.

Option (2): Python

Although I’m certainly late to the party, I truly enjoy working with Python. Below you’ll find a (shortened) version of a Python script taking care of the metadata addition.

Admittedly it does a few additional things compared to the very basic jq example. For instance, it takes a backup of the metadata file, parses command line arguments, etc. It’s a bit longer than a one-liner though ;)

#!/usr/bin/env python3

# PURPOSE
# add metadata about a new box version to the metadata file
# should also work with python2

import json
import argparse
import os
import sys
from time import strftime
import shutil

# Parsing the command line. Use -h to print help
parser = argparse.ArgumentParser()
parser.add_argument("version",       help="the new version of the vagrant box to be added. Must be unique")
parser.add_argument("sha256sum",     help="the sha256 sum of the newly created package.box")
parser.add_argument("box_file",      help="full path to the package.box, eg /vagrant/boxes/ol8_8.4.1.box")
parser.add_argument("metadata_file", help="full path to the metadata file, eg /vagrant/boxes/ol8.json")
args = parser.parse_args()

# this is the JSON element to add
new_box_version = {
    "version": args.version,
    "providers": [
        {
            "name": "virtualbox",
            "url": "file://" + args.box_file,
            "checksum": args.sha256sum,
            "checksum_type": "sha256"
        }
    ]
}

...

# check if the box_file exists
if (not os.path.isfile(args.box_file)):
    sys.exit("FATAL: Vagrant box file {} does not exist".format(args.box_file))

# read the existing metadata file
try:
    with open(args.metadata_file, 'r+') as f:
        metadata = json.load(f)
except OSError as err:
    sys.exit ("FATAL: Cannot open the metadata file {} for reading: {}".format(args.metadata_file, err))

# check if the version to be added exists already (exact match)
if any(v["version"] == args.version for v in metadata["versions"]):
    sys.exit("FATAL: new version {} to be added is a duplicate".format(args.version))

# if the new box doesn't exist already, it's ok to add it
metadata['versions'].append(new_box_version)

# create a backup of the existing file before writing
try:
    bkpfile = args.metadata_file + "_" + strftime("%y%m%d_%H%M%S")
    shutil.copy(args.metadata_file, bkpfile)
except OSError as err:
    sys.exit ("FATAL: cannot create a backup of the metadata file {}".format(err))

# ... and write changes to disk
try:
    with open(args.metadata_file, 'w') as f:
        json.dump(metadata, f, indent=2)
except OSError as err:
    sys.exit ("FATAL: cannot save metadata to {}: {}".format(args.metadata_file, err))

print("INFO: process completed successfully")
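
Invoking the script is straightforward. A sketch, assuming the script was saved as add_box_version.py (a made-up name) and the new box lives in /vagrant/boxes:

python3 add_box_version.py 8.4.1 \
    "$(sha256sum /vagrant/boxes/ol8_8.4.1.box | awk '{print $1}')" \
    /vagrant/boxes/ol8_8.4.1.box /vagrant/boxes/ol8.json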

That’s it! Next time I need to upgrade my Vagrant boxes I can rely on a fully automated process, saving me quite a bit of time when I’m instantiating a new Vagrant-based environment.

Introducing Packer: building Vagrant base boxes hands-free

I have referred to Packer in some of my cloud-related presentations as an example of a tool for creating immutable infrastructure. In addition to the cloud, Packer supports a great many other build targets as well. Since I work with VirtualBox and Vagrant a lot, Packer’s ability to create Vagrant base boxes is super awesome. Combined with local box versioning I can build new Vagrant systems in almost no time. More importantly though, I can simply kick the process off, grab a coffee, and when I’m back, enjoy a new build of my Oracle Linux Vagrant base box.

The task at hand

I would like to build a new Vagrant base box for Oracle Linux 7.8, completely hands-off. All my processes and workflows therefore need to be defined in software (Infrastructure as Code).

Since I’m building private Vagrant boxes I don’t intend to share, I can ignore the requirements about passwords as documented in the Vagrant documentation, section “Default User Settings”. Instead of the insecure key pair I’m using my own keys as well.

The build environment

I’m using Ubuntu 20.04.1 LTS as my host operating system. Packer 1.5 does all the hard work provisioning VMs for Virtualbox 6.1.12. Ansible 2.9 helps me configure my systems. Vagrant 2.2.7 will power my VMs after they are created.

Except for VirtualBox and Packer I’m using the stock packages supplied by Ubuntu.

How to get there

Packer works by reading a template file and performing the tasks defined therein. If you are new to Packer, I suggest you visit the website for more information and some really great guides.

As of Packer 1.5 you can also use HCL2, which is nice as it allows me to reuse (or rather add to) my Terraform skills. However, at the time of writing the documentation warned that HCL2 support was still in beta, which is why I went with the JSON template language.

To save you a lot of typing I collated all my Packer-related posts in a dedicated Github repository: https://github.com/martincarstenbach/packer-blogposts

High-level steps

From a bird’s eye view, my Packer template…

  • Creates a Virtualbox VM from an ISO image
  • Feeds a kickstart file to it for an unattended installation
  • After the VM is up it applies an Ansible playbook to it to install VirtualBox guest additions

The end result should be a fully working Vagrant base box.

Provisioning the VM

The first thing to do is to create the Packer template for my VM image. Properties of a VM are defined in the so-called builders section. As I said before, there are lots of builders available… I would like to create a VirtualBox VM from an ISO image, so I went with virtualbox-iso, which is super easy to use and well documented. After a little bit of trial and error I ended up with this:

{
  "builders": [
    {
      "type": "virtualbox-iso",
      "boot_command": [
        "<esc>",
        "<wait>",
        "linux text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ol7.ks",
        "<enter>"
      ],
      "disk_size": "12288",
      "guest_additions_path": "/home/vagrant/VBoxGuestAdditions.iso",
      "guest_os_type": "Oracle_64",
      "hard_drive_interface": "sata",
      "hard_drive_nonrotational": "true",
      "http_directory": "http",
      "iso_checksum": "1c1471c49025ffc1105d0aa975f7c8e3",
      "iso_checksum_type": "md5",
      "iso_url": "file:///m/stage/V995537-01-ol78.iso",
      "sata_port_count": "5",
      "shutdown_command": "echo 'packer' | sudo -S shutdown -P now",
      "ssh_timeout": "600s",
      "ssh_username": "vagrant",
      "ssh_agent_auth": true,
      "vboxmanage": [
        [
          "modifyvm",
          "{{.Name}}",
          "--memory",
          "2048"
        ],
        [
          "modifyvm",
          "{{.Name}}",
          "--cpus",
          "2"
        ]
      ],
      "vm_name": "packertest"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "ansible/guest_additions.yml",
      "user": "vagrant"
    }
  ],
  "post-processors": [
    {
      "keep_input_artifact": true,
      "output": "/home/martin/vagrant/boxes/ol7_7.8.2.box",
      "type": "vagrant"
    }
  ]
} 

If you are new to Packer this probably looks quite odd, but it’s actually very intuitive after a little while. I haven’t used Packer much before coming up with this example, which is a great testimony to the quality of the documentation.

Note this template has 3 main sections:

  • builders: the virtualbox-iso builder allows me to create a VirtualBox VM based on an (Oracle Linux 7.8) ISO image
  • provisioners: once the VM has been created to my specification I can run Ansible playbooks against it
  • post-processors: this section is important as it creates the Vagrant base box after the provisioner has finished its work

Contrary to most examples I found, I’m using SSH keys for communicating with the VM rather than a less secure username/password combination. All you need to do is add the SSH key to the agent via ssh-add before you kick the build off.
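
In practice that means something along these lines before invoking packer build; the key path is an assumption, substitute your own:

# load the key Packer will use to authenticate, then verify it is loaded
ssh-add ~/.ssh/vagrantkey
ssh-add -l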

While testing the best approach to building the VM and guest additions I ran into a few issues prompting me to upload the guest additions ISO to the vagrant user’s home directory. This way it wasn’t too hard to refer to it in the Ansible playbook (see below).

Kickstarting Oracle Linux 7

The http_directory directive in the first (builder) block is crucial for automating the build. As soon as Packer starts its work, it creates an HTTP server serving the directory indicated by the variable. This directory must obviously exist.

Red-Hat-based distributions allow admins to install the operating system in a fully automated way using the Kickstart format. You provide the Kickstart file to the system when you boot it for the first time. A common way to do so is via HTTP, which is why I’m so pleased about the HTTP server started by Packer. It couldn’t be easier: thanks to my http_directory a web server is already started, and using the HTTPIP and HTTPPort variables I can refer to files inside the directory from within the template.
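
If you are curious whether the Kickstart file is actually being served, you can fetch it from the host while a build is running. A sketch, assuming Packer reported the port shown in the log further down:

curl -s http://127.0.0.1:8232/ol7.ks | head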

As soon as Packer boots the VM the Kickstart file is passed as specified in boot_command. I had to look the syntax up using a search engine. It essentially comes down to simulating a bunch of keystrokes as if you were typing them interactively.

Long story short, I don’t need to worry about the installation, at least as long as my Kickstart file is OK. One way to get the Kickstart file right is to start with the one created after a manual operating system installation. I usually end up customising /root/anaconda-ks.cfg.

There are 3 essential tasks to complete in the Kickstart file if you want to create a Vagrant base box:

  1. Create the vagrant user account
  2. Allow password-less authentication to the vagrant account via SSH
  3. Add vagrant to the list of sudoers

First I have to include a directive to create a vagrant user:

user --name=vagrant

The sshkey keyword allows me to inject my SSH key into the user’s authorized_keys file.

sshkey --username=vagrant "very-long-ssh-key"

I also have to add the vagrant account to the list of sudoers. Using the %post directive, I inject the necessary line into /etc/sudoers:

%post --log=/root/ks-post.log

/bin/echo "vagrant        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers

%end 

Calling the Ansible provisioner

So far I have defined a VM to be created (within the builders section). The installation is completely hands-off thanks to the Kickstart file I provided. However, I’m not done yet: I still have to install the VirtualBox guest additions. This is done via the Ansible provisioner. It connects as vagrant to the VM and executes the instructions from ansible/guest_additions.yml.

This is a rather simple file:

- hosts: all
  become: yes
  tasks:
  - name: upgrade all packages
    yum:
      name: '*'
      state: latest

  - name: install kernel-uek-devel
    yum:
      name: kernel-uek-devel
      state: present

  - name: reboot to the latest kernel
    reboot:

  # Guest additions are located as per guest_additions_path in 
  # Packer's configuration file
  - name: Mount guest additions ISO read-only
    mount:
      path: /mnt/
      src: /home/vagrant/VBoxGuestAdditions.iso
      fstype: iso9660
      opts: ro
      state: mounted

  - name: execute guest additions
    shell: /mnt/VBoxLinuxAdditions.run 

In plain English: I’m becoming root before updating all software packages. One of the prerequisites for compiling VirtualBox’s guest additions is to install the kernel-uek-devel package.

After that operation completes, the VM reboots into the new kernel before mounting the guest additions ISO, which I asked to be copied to /home/vagrant/VBoxGuestAdditions.iso in the builder section of the template.

Once the ISO file is mounted, I call VBoxLinuxAdditions.run to build the guest additions.

Building the Vagrant base box

Putting it all together, this is the output created by Packer:

$ ANSIBLE_STDOUT_CALLBACK=debug ./packer build oracle-linux-7.8.json 
virtualbox-iso: output will be in this color.

==> virtualbox-iso: Retrieving Guest additions
==> virtualbox-iso: Trying /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Trying /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: /usr/share/virtualbox/VBoxGuestAdditions.iso => /usr/share/virtualbox/VBoxGuestAdditions.iso
==> virtualbox-iso: Retrieving ISO
==> virtualbox-iso: Trying file:///m/stage/V995537-01-ol78.iso
==> virtualbox-iso: Trying file:///m/stage/V995537-01-ol78.iso?checksum=md5%3A1c1471c49025ffc1105d0aa975f7c8e3
==> virtualbox-iso: file:///m/stage/V995537-01-ol78.iso?checksum=md5%3A1c1471c49025ffc1105d0aa975f7c8e3 => /m/stage/V995537-01-ol78.iso
==> virtualbox-iso: Starting HTTP server on port 8232
==> virtualbox-iso: Using local SSH Agent to authenticate connections for the communicator...
==> virtualbox-iso: Creating virtual machine...
==> virtualbox-iso: Creating hard drive...
==> virtualbox-iso: Creating forwarded port mapping for communicator (SSH, WinRM, etc) (host port 2641)
==> virtualbox-iso: Executing custom VBoxManage commands...
    virtualbox-iso: Executing: modifyvm packertest --memory 2048
    virtualbox-iso: Executing: modifyvm packertest --cpus 2
==> virtualbox-iso: Starting the virtual machine...
==> virtualbox-iso: Waiting 10s for boot...
==> virtualbox-iso: Typing the boot command...
==> virtualbox-iso: Using ssh communicator to connect: 127.0.0.1
==> virtualbox-iso: Waiting for SSH to become available...
==> virtualbox-iso: Connected to SSH!
==> virtualbox-iso: Uploading VirtualBox version info (6.1.12)
==> virtualbox-iso: Uploading VirtualBox guest additions ISO...
==> virtualbox-iso: Provisioning with Ansible...
    virtualbox-iso: Setting up proxy adapter for Ansible....
==> virtualbox-iso: Executing Ansible: ansible-playbook -e ...
    virtualbox-iso:
    virtualbox-iso: PLAY [all] *********************************************************************
    virtualbox-iso:
    virtualbox-iso: TASK [Gathering Facts] *********************************************************
    virtualbox-iso: ok: [default]
    virtualbox-iso: [WARNING]: Platform linux on host default is using the discovered Python
    virtualbox-iso: interpreter at /usr/bin/python, but future installation of another Python
    virtualbox-iso: interpreter could change this. See https://docs.ansible.com/ansible/2.9/referen
    virtualbox-iso: ce_appendices/interpreter_discovery.html for more information.
    virtualbox-iso:
    virtualbox-iso: TASK [upgrade all packages] ****************************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [install kernel-uek-devel] ************************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [reboot to enable latest kernel] ******************************************
==> virtualbox-iso: EOF
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [Mount guest additions ISO read-only] *************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: TASK [execute guest additions] *************************************************
    virtualbox-iso: changed: [default]
    virtualbox-iso:
    virtualbox-iso: PLAY RECAP *********************************************************************
    virtualbox-iso: default                    : ok=6    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    virtualbox-iso:
==> virtualbox-iso: Gracefully halting virtual machine...
==> virtualbox-iso: Preparing to export machine...
    virtualbox-iso: Deleting forwarded port mapping for the communicator (SSH, WinRM, etc) (host port 2641)
==> virtualbox-iso: Exporting virtual machine...
    virtualbox-iso: Executing: export packertest --output output-virtualbox-iso/packertest.ovf
==> virtualbox-iso: Deregistering and deleting VM...
==> virtualbox-iso: Running post-processor: vagrant
==> virtualbox-iso (vagrant): Creating a dummy Vagrant box to ensure the host system can create one correctly
==> virtualbox-iso (vagrant): Creating Vagrant box for 'virtualbox' provider
    virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packertest-disk001.vmdk
    virtualbox-iso (vagrant): Copying from artifact: output-virtualbox-iso/packertest.ovf
    virtualbox-iso (vagrant): Renaming the OVF to box.ovf...
    virtualbox-iso (vagrant): Compressing: Vagrantfile
    virtualbox-iso (vagrant): Compressing: box.ovf
    virtualbox-iso (vagrant): Compressing: metadata.json
    virtualbox-iso (vagrant): Compressing: packertest-disk001.vmdk
Build 'virtualbox-iso' finished.

==> Builds finished. The artifacts of successful builds are:
--> virtualbox-iso: VM files in directory: output-virtualbox-iso
--> virtualbox-iso: 'virtualbox' provider box: /home/martin/vagrant/boxes/ol7_7.8.2.box

This concludes the build of the base box.

Using the newly created base box

I’m not quite done yet though: as you may recall I’m using (local) box versioning. A quick change of the metadata file ~/vagrant/boxes/ol7.json and a call to vagrant init later, I can use the box:

$ vagrant box outdated
Checking if box 'ol7' version '7.8.1' is up to date...
A newer version of the box 'ol7' for provider 'virtualbox' is
available! You currently have version '7.8.1'. The latest is version
'7.8.2'. Run `vagrant box update` to update. 

That looks pretty straight-forward, so let’s do it:

$ vagrant box update
==> server: Checking for updates to 'ol7'
    server: Latest installed version: 7.8.1
    server: Version constraints: 
    server: Provider: virtualbox
==> server: Updating 'ol7' with provider 'virtualbox' from version
==> server: '7.8.1' to '7.8.2'...
==> server: Loading metadata for box 'file:///home/martin/vagrant/boxes/ol7.json'
==> server: Adding box 'ol7' (v7.8.2) for provider: virtualbox
    server: Unpacking necessary files from: file:///home/martin/vagrant/boxes/ol7_7.8.2.box
    server: Calculating and comparing box checksum...
==> server: Successfully added box 'ol7' (v7.8.2) for 'virtualbox'! 

Let’s start the environment:

$ vagrant up
Bringing machine 'server' up with 'virtualbox' provider...
==> server: Importing base box 'ol7'...
==> server: Matching MAC address for NAT networking...
==> server: Checking if box 'ol7' version '7.8.2' is up to date...
==> server: Setting the name of the VM: packertest_server_1598013258821_67878
==> server: Clearing any previously set network interfaces...
==> server: Preparing network interfaces based on configuration...
    server: Adapter 1: nat
==> server: Forwarding ports...
    server: 22 (guest) => 2222 (host) (adapter 1)
==> server: Running 'pre-boot' VM customizations...
==> server: Booting VM...
==> server: Waiting for machine to boot. This may take a few minutes...
    server: SSH address: 127.0.0.1:2222
    server: SSH username: vagrant
    server: SSH auth method: private key
==> server: Machine booted and ready!
==> server: Checking for guest additions in VM...
==> server: Setting hostname...
==> server: Mounting shared folders...
    server: /vagrant => /home/martin/vagrant/packertest 

Happy Automating!

Versioning for your local Vagrant boxes: handling updates

In my last post I summarised how to enable versioning for Vagrant boxes outside Vagrant Cloud. In this part I’d like to share how to update a box.

My environment

The environment hasn’t changed compared to the first post. In summary I’m using

  • Ubuntu 20.04 LTS
  • Virtualbox 6.1.6
  • Vagrant 2.2.7

Updating a box

Let’s assume it’s time to update the base box for whatever reason. Most commonly I update my boxes every so often after running a “yum upgrade -y” to bring them up to the most current software. A new drop of the Guest Additions also triggers a rebuild, and so on.

Packaging

Once the changes are made, you need to package the box again. Continuing the previous example I save all my boxes and their JSON metadata in ~/vagrant/boxes. The box comes first:

[martin@host ~]$ vagrant package --base oraclelinux7base --output ~/vagrant/boxes/ol7_7.8.1.box

This creates a second box right next to the existing one. Note I bumped the version number to 7.8.1 to avoid file naming problems:

[martin@host boxes]$ ls -1
ol7_7.8.0.box
ol7_7.8.1.box
ol7.json 

Updating metadata

The next step is to update the JSON document. At this point in time, it references version 7.8.0 of my box:

[martin@host boxes]$ cat ol7.json 
{
    "name": "ol7",
    "description": "Martins Oracle Linux 7",
    "versions": [
      {
        "version": "7.8.0",
        "providers": [
          {
            "name": "virtualbox",
            "url": "file:///home/martin/vagrant/boxes/ol7_7.8.0.box",
            "checksum": "db048c3d61c0b5a8ddf6b59ab189248a42bf9a5b51ded12b2153e0f9729dfaa4",
            "checksum_type": "sha256"
          }
        ]
      }
    ]
  } 

You probably suspected what’s next :) A new version is created by adding a new element into the versions array, like so:

{
  "name": "ol7",
  "description": "Martins Oracle Linux 7",
  "versions": [
    {
      "version": "7.8.0",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///home/martin/vagrant/boxes/ol7_7.8.0.box",
          "checksum": "db048c3d61c0b5a8ddf6b59ab189248a42bf9a5b51ded12b2153e0f9729dfaa4",
          "checksum_type": "sha256"
        }
      ]
    },
    {
      "version": "7.8.1",
      "providers": [
        {
          "name": "virtualbox",
          "url": "file:///home/martin/vagrant/boxes/ol7_7.8.1.box",
          "checksum": "f9d74dbbe88eab2f6a76e96b2268086439d49cb776b407c91e4bd3b3dc4f3f49",
          "checksum_type": "sha256"
        }
      ]
    }
  ]
} 

Don’t forget to update the SHA256 checksum!
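
As before, sha256sum provides the value to paste into the metadata file:

sha256sum ~/vagrant/boxes/ol7_7.8.1.box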

Check for box updates

Back in my VM directory I can now check if there is a new version of my box:

[martin@host versioning]$ vagrant box outdated
Checking if box 'ol7' version '7.8.0' is up to date...
A newer version of the box 'ol7' for provider 'virtualbox' is
available! You currently have version '7.8.0'. The latest is version
'7.8.1'. Run `vagrant box update` to update.
[martin@host versioning]$ 

And there is! Not entirely surprising though, so let’s update the box:

[martin@host versioning]$ vagrant box update
==> default: Checking for updates to 'ol7'
    default: Latest installed version: 7.8.0
    default: Version constraints: 
    default: Provider: virtualbox
==> default: Updating 'ol7' with provider 'virtualbox' from version
==> default: '7.8.0' to '7.8.1'...
==> default: Loading metadata for box 'file:///home/martin/vagrant/boxes/ol7.json'
==> default: Adding box 'ol7' (v7.8.1) for provider: virtualbox
    default: Unpacking necessary files from: file:///home/martin/vagrant/boxes/ol7_7.8.1.box
    default: Calculating and comparing box checksum...
==> default: Successfully added box 'ol7' (v7.8.1) for 'virtualbox'! 

At the end of this exercise both versions are available:

[martin@host versioning]$ vagrant box list | grep ^ol7
ol7               (virtualbox, 7.8.0)
ol7               (virtualbox, 7.8.1)
[martin@host versioning]$  

This is so much better than my previous approach!

What are the effects of box versioning?

Earlier you could read how I created a Vagrant VM based on version 7.8.0 of my box. This VM hasn’t been removed. What happens if I start it up now that there’s a newer version of the ol7 box available?

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'ol7' version '7.8.0' is up to date...
==> default: A newer version of the box 'ol7' is available and already
==> default: installed, but your Vagrant machine is running against
==> default: version '7.8.0'. To update to version '7.8.1',
==> default: destroy and recreate your machine.
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: vagrant
    default: SSH auth method: private key
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: Mounting shared folders...
    default: /vagrant => /home/martin/vagrant/versioning
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run. 

Vagrant tells me that I’m using an old version of the box, and how to switch to the new one. I think I’ll do this eventually, but I can still work with the old version.

And what if I create a new VM? By default, Vagrant creates the new VM based on the latest version of my box, 7.8.1. You can see this here:

Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ol7'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ol7' version '7.8.1' is up to date...
==> default: Setting the name of the VM: versioning2_default_1588259041745_89693
==> default: Fixed port collision for 22 => 2222. Now on port 2201.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2201 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2201
    default: SSH username: vagrant
    default: SSH auth method: private key
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: Mounting shared folders...
    default: /vagrant => /home/martin/vagrant/versioning2 

Cleaning up

As with every technology, housekeeping is essential to keep disk usage in check. Refer back to the official documentation for more details on housekeeping and local copies of Vagrant boxes.
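
For example, once no environment references the old version any more, something along these lines should remove it; the --box-version flag is part of the standard vagrant box remove command:

vagrant box remove ol7 --box-version 7.8.0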

Summary

In the past I really struggled with maintaining my local Vagrant boxes. Updating a box proved quite tricky and came with undesired side effects. Using versioning as demonstrated in this post is a great way out of this dilemma. And contrary to what I thought for a long time, uploading my boxes to Vagrant Cloud is not needed.

There is of course a lot more to say about versioning as this feature can do so much more. Maybe I’ll write another post about the subject some other time; until then I kindly refer you to the documentation.

Versioning for your local Vagrant boxes: adding a new box

I have been using Vagrant for quite some time now and can’t tell you how much of a productivity boost it has been. All the VMs I have on my laptop are either powered by Vagrant, or feed into the Vagrant workflow.

One thing I hadn’t worked out though is how to use versioning outside of Vagrant Cloud. I don’t think I have what it takes to publish a good OS image publicly, and would rather keep my boxes to myself to prevent others from injury.

My environment

While putting this post together I used the following software:

  • Ubuntu 20.04 LTS acts as my host operating system
  • Virtualbox 6.1.6
  • Vagrant 2.2.7

This is probably as current as it gets at the time of writing.

The need for box versioning

Vagrant saves you time by providing “gold images” you can spin up quickly. I prefer to always have the latest and greatest software available without having to spend ages updating kernels and/or other components. As a result, I update my “gold image” VM from time to time before packaging it up for Vagrant. Until quite recently I hadn’t figured out how to update a VM other than to delete and recreate it. This isn’t the best idea though, as indicated by this error message:

$ vagrant box remove debianbase-slim
Box 'debianbase-slim' (v0) with provider 'virtualbox' appears
to still be in use by at least one Vagrant environment. Removing
the box could corrupt the environment. We recommend destroying
these environments first:

default (ID: ....)

Are you sure you want to remove this box? [y/N] n 

This week I finally sat down trying to work out a better way of refreshing my Vagrant boxes.

As I understand it, box versioning allows me to update my base box without having to trash any environments. So instead of removing the box and replacing it with another, I can add a new version to the box. Environments using the old version can continue to do so until they are torn down; new environments can use the new version. This works remarkably well once you know how to set it up! I found a few good sources on the Internet and combined them into this article.

Box versioning for Oracle Linux 7

As an Oracle person I obviously run Oracle Linux a lot. Earlier I came up with a procedure to create my own base boxes. This article features “oraclelinux7base” as the source for my Vagrant boxes. It adheres to all the requirements for Vagrant base boxes to be used with the Virtualbox provider.

Packaging the base box

Once you are happy to release your VirtualBox VM, you have to package it for use with Vagrant. All my Vagrant boxes go to ~/vagrant/boxes, so this command creates the package:

$ vagrant package --base oraclelinux7base --output ~/vagrant/boxes/ol7_7.8.0.box
==> oraclelinux7base: Attempting graceful shutdown of VM...
==> oraclelinux7base: Clearing any previously set forwarded ports...
==> oraclelinux7base: Exporting VM...
==> oraclelinux7base: Compressing package to: /home/martin/vagrant/boxes/ol7_7.8.0.box

In plain English this command instructs Vagrant to take VirtualBox’s oraclelinux7base VM and package it into ~/vagrant/boxes/ol7_7.8.0.box. I am creating this VM as the first OL 7.8 system; the naming convention is optional, yet I think it’s best to indicate the purpose and version in the package name.

At this stage, DO NOT “vagrant box add” the box!

Creating box metadata

The next step is to create a little metadata describing the box. This time it’s not written in YAML, but JSON for a change. I found a few conflicting sources and couldn’t get them to work until I had a look at how Oracle solved the problem. If you navigate to yum.oracle.com/boxes, you can find the links to their metadata files. I really appreciate that Oracle switched to versioned boxes, too!

After a little trial-and-error I came up with this file. It’s probably just the bare minimum, but it works for me in my lab so I’m happy to keep it the way it is. The file lives in ~/vagrant/boxes alongside the box file itself.

$ cat ol7.json
{
    "name": "ol7",
    "description": "Martins Oracle Linux 7",
    "versions": [
      {
        "version": "7.8.0",
        "providers": [
          {
            "name": "virtualbox",
            "url": "file:///home/martin/vagrant/boxes/ol7_7.8.0.box",
            "checksum": "db048c3d61c0b5a8ddf6b59ab189248a42bf9a5b51ded12b2153e0f9729dfaa4",
            "checksum_type": "sha256"
          }
        ]
      }
    ]
  } 

The file should be self-explanatory. The only noteworthy pitfall is an insufficient number of forward slashes in the URL: the URI is composed of “file://” followed by the fully qualified path to the box file, 3 forward slashes in total.

I used “sha256sum /home/martin/vagrant/boxes/ol7_7.8.0.box” to calculate the checksum.

Creating a VM

Finally it’s time to create the VM. I tend to create a directory per Vagrant environment; in this example I called it “versioning”. Within ~/vagrant/versioning I can create a Vagrantfile with the VM’s definition. At this stage, the base box is unknown to Vagrant.

$ nl Vagrantfile 
     1    # -*- mode: ruby -*-
     2    # vi: set ft=ruby :

     3    Vagrant.configure("2") do |config|
     4      config.vm.box = "ol7"
     5      config.vm.box_url = "file:///home/martin/vagrant/boxes/ol7.json"
     6      
     7      config.ssh.private_key_path = '/home/martin/.ssh/vagrantkey'

     8      config.vm.hostname = "server1"

     9      config.vm.provider "virtualbox" do |vb|
    10        vb.cpus = 2
    11        vb.memory = "4096"
    12      end

    13    end
 

The difference to my earlier post is the reference to the JSON file in line 5. The JSON file tells vagrant where to find the Vagrant box. The remaining configuration isn’t different from using non-versioned Vagrant boxes.

Based on this configuration file I can finally spin up my VM:

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'ol7' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
==> default: Loading metadata for box 'file:///home/martin/vagrant/boxes/ol7.json'
    default: URL: file:///home/martin/vagrant/boxes/ol7.json
==> default: Adding box 'ol7' (v7.8.0) for provider: virtualbox
    default: Unpacking necessary files from: file:///home/martin/vagrant/boxes/ol7_7.8.0.box
    default: Calculating and comparing box checksum...
==> default: Successfully added box 'ol7' (v7.8.0) for 'virtualbox'!
==> default: Importing base box 'ol7'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ol7' version '7.8.0' is up to date...
==> default: Setting the name of the VM: versioning_default_1588251635800_49095
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: vagrant
    default: SSH auth method: private key
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: Mounting shared folders...
    default: /vagrant => /home/martin/vagrant/versioning 

Right at the beginning you can see that Vagrant loads “metadata for box ‘file:///home/martin/vagrant/boxes/ol7.json'” and then loads the box from the location specified in the JSON file.

Once the machine is started, I can also see it available for future use:

$ vagrant box list | grep ^ol7
ol7               (virtualbox, 7.8.0) 

The box is registered as ol7, using the Virtualbox provider in version 7.8.0.

Summary

In this post I summarised (mainly for my own later use ;) how to use box versioning on my development laptop. It really isn’t that much of a difference compared to the way I worked previously, and the benefit will become apparent once you update the box. I’m going to cover upgrading my “ol7” box in another post.

Vagrant tips’n’tricks: changing /etc/hosts automatically for Oracle Universal Installer

Oracle Universal Installer, or OUI for short, doesn’t at all like it if the hostname resolves to an IP address in the 127.0.0.0/8 range. At best it complains, at worst it starts installing and configuring software only to abort and bury the real cause deep in the logs.

I am a great fan of HashiCorp’s Vagrant as you might have guessed reading some of the previous articles, and as such wanted a scripted solution to changing the hostname to something more sensible before I begin provisioning software. I should probably add that I’m using my own base boxes; the techniques in this post should equally apply to other boxes as well.

Each of the Vagrant VMs I’m creating is given a private network for communication with its peers, mainly to save me from having to deal with port forwarding on the NAT device. If you haven’t used Vagrant before you might not know that, by default, each Vagrant VM comes up with a single NIC that has to use NAT. The end goal for this post is to ensure that my VM’s hostname maps to the private network’s IP address, not 127.0.0.1 as it normally would.

Setting the scene

By default, Vagrant doesn’t seem to mess with the hostname of the VM. This can be changed by using a configuration variable. Let’s start with the Vagrantfile for my Oracle Linux 7 box:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "ol7guest" do |u|
    # this is a base box I created and stored locally
    u.vm.box = "oracleLinux7Base"

    u.ssh.private_key_path = "/path/to/key"

    u.vm.hostname = "ol7guest"
    u.vm.network "private_network", ip: "192.168.56.204"

    u.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.name = "ol7guest"
      v.cpus = 1
    end
  end
end 

Please ignore the fact that my Vagrantfile is slightly more complex than it needs to be. I do like having speaking names for my VMs, rather than “default” showing up in vagrant status. Using this terminology in the Vagrantfile also makes it easier to add more VMs to the configuration should the need arise.

Apart from what you just read, the only remarkable thing to mention about this file is this line:

    u.vm.hostname = "ol7guest"

As per the Vagrant documentation, I can use this directive to set the hostname of the VM. And indeed, it does:

$ vagrant ssh ol7guest
Last login: Thu Jan 09 21:14:59 2020 from 10.0.2.2
[vagrant@ol7guest ~]$  

The hostname is set; however, it resolves to 127.0.0.1 as per /etc/hosts:

[vagrant@ol7guest ~]$ cat /etc/hosts
127.0.0.1    ol7guest    ol7guest
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 

Not quite what I had in mind, but apparently expected behaviour. The next step therefore is to change the first line in /etc/hosts to match the private IP address I assigned to the second NIC. As an Ansible fan I naturally lean towards using a playbook, but I also understand that not everyone has Ansible installed on the host, and using the ansible_local provisioner might take longer than necessary unless your box has Ansible pre-installed.

The remainder of this post deals with an Ansible solution and the least common denominator, the shell provisioner.

Using an Ansible playbook

Many times I’m using Ansible playbooks to deploy software to Vagrant VMs anyway, so embedding a little piece of code into my playbooks to change /etc/hosts isn’t a lot of work. The first step is to amend the Vagrantfile to reference the Ansible provisioner. One possible way to do this in the context of my example is this:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "ol7guest" do |u|
    # this is a base box I created and stored locally
    u.vm.box = "oracleLinux7Base"

    u.ssh.private_key_path = "/path/to/key"

    u.vm.hostname = "ol7guest"
    u.vm.network "private_network", ip: "192.168.56.204"

    u.vm.provision "ansible" do |ansible|
      ansible.playbook = "change_etc_hosts.yml"
      ansible.verbose = "v"
    end

    u.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.name = "ol7guest"
      v.cpus = 1
    end
  end
end  

It is mostly the same file with the addition of the call to Ansible. As you can imagine the playbook is rather simple:

---
- hosts: ol7guest
  become: yes
  tasks:
  - name: change /etc/hosts
    lineinfile:
      path: '/etc/hosts'
      regexp: '.*ol7guest.*' 
      line: '192.168.56.204   ol7guest.example.com   ol7guest' 
      backup: yes

It uses the lineinfile module to find the line containing ol7guest and replace it with the “correct” IP address. The resulting hosts file is exactly what I need:

[vagrant@ol7guest ~]$ cat /etc/hosts
192.168.56.204   ol7guest.example.com   ol7guest
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[vagrant@ol7guest ~]$ 

The first line of the original file has been replaced with the private IP, which should enable Oracle Universal Installer (OUI) to progress past this potential stumbling block.
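If you want to double-check the mapping from inside the guest, getent resolves names the same way other software does; this is standard glibc tooling, nothing specific to my setup, and it should print something along these lines:

[vagrant@ol7guest ~]$ getent hosts ol7guest
192.168.56.204  ol7guest.example.com ol7guest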

Using the shell provisioner

The second solution involves the shell provisioner, which – unlike Ansible – isn’t distribution agnostic and needs to be tailored to the target platform. On Oracle Linux, the following worked for me:

# -*- mode: ruby -*-
# vi: set ft=ruby :

$script = <<-SCRIPT
/usr/bin/cp /etc/hosts /root && \
/usr/bin/sed -i '/ol7guest/d' /etc/hosts && \
/usr/bin/echo '192.168.56.204 ol7guest.example.com ol7guest' >> /etc/hosts
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.define "ol7guest" do |u|
    # this is a base box I created and stored locally
    u.vm.box = "oracleLinux7Base"

    u.ssh.private_key_path = "/path/to/key"

    u.vm.hostname = "ol7guest"
    u.vm.network "private_network", ip: "192.168.56.204"

    u.vm.provision "shell", inline: $script

    u.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.name = "ol7guest"
      v.cpus = 1
    end
  end
end 

The script copies /etc/hosts to root’s home directory as a backup, deletes any line mentioning ol7guest, and appends the correct entry. At the end, the file is in exactly the shape I need it to be in.
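Because the sed removes existing ol7guest entries before the echo appends a fresh one, the script can safely be re-applied to a box that is already running, without a full reload:

$ vagrant provision ol7guest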

Summary

Whether you go with the shell provisioner or embed the change to the hostname in an (existing) Ansible playbook doesn’t matter much. I would definitely argue in support of having the code embedded in a playbook if that’s what provisions additional software anyway. If installing Ansible on the host isn’t an option, using the shell as a fallback mechanism is perfectly fine, too. Happy hacking!

Tips’n’tricks: finding the (injected) private key pair used in Vagrant boxes

In an earlier article I described how you could use SSH keys to log into a Vagrant box created by the VirtualBox provider. The previous post emphasised my preference for using custom Vagrant boxes and my own SSH keys.

Nevertheless there are occasions when you can’t create your own Vagrant box, and you have to resort to the Vagrant insecure-key-pair-swap procedure instead. If you are unsure about these security-related discussion points, review the documentation about creating one’s own Vagrant boxes (section “Default User Settings”) for some additional background information.

Continuing the discussion from the previous post, what does a dynamically injected SSH key imply for the use with the SSH agent?

Vagrant cloud, boxes, and the insecure key pair

Let’s start with an example to demonstrate the case. I have decided to use the latest Ubuntu 16.04 box from HashiCorp’s Vagrant Cloud for no particular reason. In hindsight I should have gone for 18.04 instead, as it’s much newer. For the purpose of this post it doesn’t really matter though.
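The Vagrantfile itself isn’t reproduced in this post; a minimal reconstruction matching the output below would look roughly like this (the box name and private IP appear later in the post, everything else is assumed):

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.define "ubuntu" do |u|
    u.vm.box = "ubuntu/xenial64"
    u.vm.hostname = "ubuntu"
    # second NIC on a host-only network; the first is Vagrant's default NAT device
    u.vm.network "private_network", ip: "192.168.56.204"
  end
end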

$ vagrant up ubuntu
Bringing machine 'ubuntu' up with 'virtualbox' provider...
==> ubuntu: Importing base box 'ubuntu/xenial64'...
==> ubuntu: Matching MAC address for NAT networking...
==> ubuntu: Checking if box 'ubuntu/xenial64' version '20191204.0.0' is up to date...
==> ubuntu: Setting the name of the VM: ubuntu
==> ubuntu: Fixed port collision for 22 => 2222. Now on port 2200.
==> ubuntu: Clearing any previously set network interfaces...
==> ubuntu: Preparing network interfaces based on configuration...
    ubuntu: Adapter 1: nat
    ubuntu: Adapter 2: hostonly
==> ubuntu: Forwarding ports...
    ubuntu: 22 (guest) => 2200 (host) (adapter 1)
==> ubuntu: Running 'pre-boot' VM customizations...
==> ubuntu: Booting VM...
==> ubuntu: Waiting for machine to boot. This may take a few minutes...
    ubuntu: SSH address: 127.0.0.1:2200
    ubuntu: SSH username: vagrant
    ubuntu: SSH auth method: private key
    ubuntu: 
    ubuntu: Vagrant insecure key detected. Vagrant will automatically replace
    ubuntu: this with a newly generated keypair for better security.
    ubuntu: 
    ubuntu: Inserting generated public key within guest...
    ubuntu: Removing insecure key from the guest if it's present...
    ubuntu: Key inserted! Disconnecting and reconnecting using new SSH key...
==> ubuntu: Machine booted and ready!
==> ubuntu: Checking for guest additions in VM...
    ubuntu: The guest additions on this VM do not match the installed version of
    ubuntu: VirtualBox! In most cases this is fine, but in rare cases it can
    ubuntu: prevent things such as shared folders from working properly. If you see
    ubuntu: shared folder errors, please make sure the guest additions within the
    ubuntu: virtual machine match the version of VirtualBox you have installed on
    ubuntu: your host and reload your VM.
    ubuntu: 
    ubuntu: Guest Additions Version: 5.1.38
    ubuntu: VirtualBox Version: 6.0
==> ubuntu: Setting hostname...
==> ubuntu: Mounting shared folders...
    ubuntu: /vagrant => /home/martin/vagrant/ubunutu 

This started my “ubuntu” VM (I don’t like it when my VMs are called “default”, so I tend to give them better designations):

$ vboxmanage list vms | grep ubuntu
"ubuntu" {a507ba0c-...24bb} 

You may have noticed that two network interfaces are brought online in the output created by vagrant up. This is done to stay in line with the story of the previous post and isn’t strictly necessary.

The key message in the context of this blog post, found in the logs, is this:

    ubuntu: SSH auth method: private key
    ubuntu: 
    ubuntu: Vagrant insecure key detected. Vagrant will automatically replace
    ubuntu: this with a newly generated keypair for better security.
    ubuntu: 
    ubuntu: Inserting generated public key within guest...
    ubuntu: Removing insecure key from the guest if it's present...
    ubuntu: Key inserted! Disconnecting and reconnecting using new SSH key... 

As you can read, the insecure key was detected and replaced. But where can I find the newly generated key?

Locating the new private key

This took me a little while to find out, and I’m hoping this post saves you a minute. The key information (drum roll please) can be found in the output of vagrant ssh-config:

$ vagrant ssh-config ubuntu
Host ubuntu
  HostName 127.0.0.1
  User vagrant
  Port 2200
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL 

This contains all the information you need to SSH into the machine! It doesn’t print information about the second NIC, but that’s OK as I can always look up its details in the Vagrantfile itself.
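A related convenience, and my own addition rather than part of the original workflow: append this output to your SSH client configuration and plain ssh picks everything up, key included (assuming the Host alias doesn’t clash with an existing entry in ~/.ssh/config):

$ vagrant ssh-config ubuntu >> ~/.ssh/config
$ ssh ubuntu hostname
ubuntu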

Connection!

Using the information from above, I can connect to the system using either port 2200 (forwarded on the NAT device), or the private IP (which is 192.168.56.204 and has not been shown here):

$ ssh -p 2200 \
> -i /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key \
> vagrant@localhost hostname
ubuntu

$ ssh -i /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key \
> vagrant@192.168.56.204 hostname
ubuntu 

This should be all you need to get cracking with the Vagrant box. But wait! The full path to the key is somewhat lengthy, and that makes it a great candidate for storing it with the SSH agent. That’s super-easy, too:

$ ssh-add /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key
Identity added: /home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key (/home/martin/vagrant/ubunutu/.vagrant/machines/ubuntu/virtualbox/private_key)

Apologies for the formatting. But it was worth it!

$ ssh vagrant@192.168.56.204 hostname
ubuntu

That’s a lot less typing than before…

By the way, it should be easy to spot this key in the output of ssh-add -l as it’s most likely the one with the longest path. If that doesn’t help you identify the key, ssh-keygen -lf /path/to/key prints the key’s fingerprint, for which you can grep in the output of ssh-add -l.
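Putting that tip into copy-and-paste form (a sketch; the path is relative to the project directory and the fingerprint is whatever ssh-keygen reports on your machine):

$ fp=$(ssh-keygen -lf .vagrant/machines/ubuntu/virtualbox/private_key | awk '{print $2}')
$ ssh-add -l | grep "$fp"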

Have fun!

Ansible Tips’n’tricks: rebooting Vagrant boxes after a kernel upgrade

Occasionally I have to reboot my Vagrant boxes after kernel updates have been installed as part of an Ansible playbook during the “vagrant up” command execution.

I create my own Vagrant base boxes because that’s more convenient for me than pulling them from Vagrant’s cloud. However they, too, need TLC and updates. So long story short, I run a yum upgrade after spinning up Vagrant boxes in Ansible to have access to the latest and greatest (and hopefully most secure) software.

To stay in line with Vagrant’s philosophy, Vagrant VMs are lab and playground environments I create quickly. And I can dispose of them equally quickly, because all that I’m doing is controlled via code. This isn’t something you’d do with Enterprise installations!

Vagrant and Ansible for lab VMs!

Now how do you reboot a Vagrant-controlled VM in Ansible? Here is how I’m doing this with VirtualBox 6.0.14 and Vagrant 2.2.6 on Ubuntu 18.04.3, which ships with Ansible 2.5.1.

Finding out if a kernel upgrade is needed

My custom Vagrant boxes are all based on Oracle Linux 7 and use UEK as the kernel of choice. That is important because it determines how I can find out whether yum upgraded the kernel (UEK in my case) as part of a “yum upgrade”.

There are many ways to do so; I have been using the following code snippet with some success:

  - name: check if we need to reboot after a kernel upgrade
    shell: if [ $(/usr/bin/rpm -q kernel-uek|/usr/bin/tail -n 1) != kernel-uek-$(uname -r) ]; then /usr/bin/echo 'reboot'; else /usr/bin/echo 'no'; fi
    register: must_reboot

So, in other words, I compare the last line of the rpm -q kernel-uek output to the name of the running kernel. If they match, all good. If they don’t, there is a newer kernel-uek* RPM on disk than the one currently running. If the variable “must_reboot” contains “reboot”, I have to reboot.
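As an aside, if your box happens to have yum-utils installed, needs-restarting -r is another way to detect a pending reboot; here is a sketch of an equivalent task, not what I used above:

  - name: check if a reboot is required (needs yum-utils)
    command: /usr/bin/needs-restarting -r
    register: reboot_check
    # rc 1 means a reboot is required, rc 0 means it is not
    failed_when: reboot_check.rc not in [0, 1]
    changed_when: false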

Rebooting

Ansible introduced a reboot module in version 2.7; however, my Ubuntu 18.04 system’s Ansible is too old for that, and I wanted to stay with the distribution’s package. I needed an alternative.

There are lots of code snippets out there to reboot systems in Ansible, but none of them worked for me. So I decided to write the process up in this post :)

The following block worked for my very specific setup:

  - name: reboot if needed
    block:
    - shell: sleep 5 && systemctl reboot
      async: 300
      poll: 0
      ignore_errors: true

    - name: wait for system to come back online
      wait_for_connection:
        delay: 60
        timeout: 300
    when: '"reboot" in must_reboot.stdout'

This works nicely with the systems I’m using.

Except there’s a catch lurking in the code: when installing Oracle, the software is made available via VirtualBox’s shared folders as defined in the Vagrantfile. When rebooting a Vagrant box outside the Vagrant interface (e.g. not using the vagrant reload command), shared folders aren’t mounted automatically. In other words, my playbook will fail trying to unzip binaries because it can’t find them, which isn’t what I want. To circumvent this situation I add the following instruction to the block you just saw:

    - name: re-mount the shared folder after a reboot
      mount:
        path: /mnt
        src: mnt
        fstype: vboxsf
        state: mounted

This re-mounts my shared folder, and I’m good to go!
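For what it’s worth, on Ansible 2.7 or newer the built-in reboot module condenses the shell/wait_for_connection pair into a single task. A minimal sketch, which I couldn’t use here because of my older Ansible; note the shared folder caveat applies regardless of how the reboot is triggered:

  - name: reboot if needed
    reboot:
      reboot_timeout: 300
    when: '"reboot" in must_reboot.stdout'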

Summary

Before installing Oracle software in Vagrant for lab and playground use I always want to make sure I have all the latest and greatest patches installed as part of bringing a Vagrant box online for the first time.

Using Ansible I can automate the entire process from start to finish, even including kernel updates in the process. These are applied before I install the Oracle software!

Upgrading the kernel (or any other software components for that matter) post Oracle installation is more involved, and I usually don’t need to do this during the lifetime of the Vagrant (playground/lab) VM. Which is why Vagrant is beautiful, especially when used together with Ansible.

Ansible tips’n’tricks: provision multiple machines in parallel with Vagrant and Ansible

Vagrant is a great tool that I’m regularly using for building playground environments on my laptop. I recently came across a slight inconvenience with Vagrant’s VirtualBox provider: occasionally I would like to spin up a Data Guard environment and provision both VMs in parallel to save time. Sadly you can’t bring up multiple machines in parallel using the VirtualBox provider according to the documentation. This was true as of April 11 2019 and might change in the future, so keep an eye on the reference.

I very much prefer to save time by doing things in parallel, and so I started digging around how I could achieve this goal.

The official documentation mentions something that looks like a for loop to wait for all machines to be up. This isn’t really an option for me, as I wanted more control over machine names and IP addresses. So I came up with this approach; it may not be the best, but it falls into the “good enough for me” category.

Vagrantfile

The Vagrantfile is actually quite simple and might remind you of a previous article:

  1 Vagrant.configure("2") do |config|
  2   config.ssh.private_key_path = "/path/to/key"
  3 
  4   config.vm.define "server1" do |server1|
  5     server1.vm.box = "ansibletestbase"
  6     server1.vm.hostname = "server1"
  7     server1.vm.network "private_network", ip: "192.168.56.11"
  8     server1.vm.synced_folder "/path/to/stuff", "/mnt",
  9       mount_options: ["uid=54321", "gid=54321"]
 10 
 11     config.vm.provider "virtualbox" do |vb|
 12       vb.memory = 2048
 13       vb.cpus = 2
 14     end
 15   end
 16 
 17   config.vm.define "server2" do |server2|
 18     server2.vm.box = "ansibletestbase"
 19     server2.vm.hostname = "server2"
 20     server2.vm.network "private_network", ip: "192.168.56.12"
 21     server2.vm.synced_folder "/path/to/stuff", "/mnt",
 22       mount_options: ["uid=54321", "gid=54321"]
 23 
 24     config.vm.provider "virtualbox" do |vb|
 25       vb.memory = 2048
 26       vb.cpus = 2
 27     end
 28   end
 29 
 30   config.vm.provision "ansible" do |ansible|
 31     ansible.playbook = "hello.yml"
 32     ansible.groups = {
 33       "oracle_si" => ["server[1:2]"],
 34       "oracle_si:vars" => { 
 35         "install_rdbms" => "true",
 36         "patch_rdbms" => "true",
 37       }
 38     }
 39   end
 40 
 41 end

Ansibletestbase is my custom Oracle Linux 7 image that I keep updated for personal use. I define a couple of machines, server1 and server2, and from line 30 onwards let Ansible provision them.

A little bit of an inconvenience

Now here is the inconvenient bit: if I provided an elaborate playbook to provision Oracle in line 31 of the Vagrantfile, it would be run serially: first for server1, and only after it completed (or failed…) would server2 be created and provisioned. This is the reason for a rather plain playbook, hello.yml:

$ cat hello.yml 
---
- hosts: oracle_si
  tasks:
  - name: say hello
    debug: var=ansible_hostname

This literally takes no time to execute at all, so no harm is done running it serially once per VM. Not only is no harm done, quite the contrary: Vagrant discovered an Ansible provisioner in the Vagrantfile and created a suitable inventory file for me. I’ll gladly use it later.

How does this work out?

Enough talking, time to put this to the test and bring up both machines. As you will see in the captured output, they start one by one, run their provisioning tool, and proceed to the next system.

$ vagrant up 
Bringing machine 'server1' up with 'virtualbox' provider...
Bringing machine 'server2' up with 'virtualbox' provider...
==> server1: Importing base box 'ansibletestbase'...
==> server1: Matching MAC address for NAT networking...

[...]

==> server1: Running provisioner: ansible...

[...]

    server1: Running ansible-playbook...

PLAY [oracle_si] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server1]

TASK [say hello] ***************************************************************
ok: [server1] => {
    "ansible_hostname": "server1"
}

PLAY RECAP *********************************************************************
server1                    : ok=2    changed=0    unreachable=0    failed=0   

==> server2: Importing base box 'ansibletestbase'...
==> server2: Matching MAC address for NAT networking...

[...]

==> server2: Running provisioner: ansible...

[...]
    server2: Running ansible-playbook...

PLAY [oracle_si] ***************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server2]

TASK [say hello] ***************************************************************
ok: [server2] => {
    "ansible_hostname": "server2"
}

PLAY RECAP *********************************************************************
server2                    : ok=2    changed=0    unreachable=0    failed=0

As always the Ansible provisioner created an inventory file I can use in ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory. The inventory looks exactly as described in the |ansible| block, and it has the all-important global variables as well.

$cat ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant
server2 ansible_host=127.0.0.1 ansible_port=2201 ansible_user='vagrant' ansible_ssh_private_key_file='/path/to/key'
server1 ansible_host=127.0.0.1 ansible_port=2200 ansible_user='vagrant' ansible_ssh_private_key_file='/path/to/key'

[oracle_si]
server[1:2]

[oracle_si:vars]
install_rdbms=true
patch_rdbms=true

After ensuring that both machines are up using the “ping” module, I can run the actual playbook. You might have to confirm the servers’ SSH keys the first time you run this:

$ ansible -i ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory -m ping oracle_si
server1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
server2 | SUCCESS => {
"changed": false,
"ping": "pong"
}

All good to go! Let’s call the actual playbook to provision my machines.

$ ansible-playbook -i ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory provisioning/oracle.yml 

TASK [Gathering Facts] ***************************************************************
ok: [server1]
ok: [server2]

TASK [test for Oracle Linux 7] *******************************************************
skipping: [server1]
skipping: [server2]

And we’re off to the races. Happy automating!
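If you want to run both phases back to back, a tiny wrapper script does the trick; a sketch of my own, with the playbook path from above assumed:

#!/usr/bin/env bash
set -euo pipefail

# phase 1: create the VMs; the trivial hello.yml keeps this serial phase short
vagrant up

# phase 2: the real provisioning, executed against both hosts in parallel by Ansible
ansible-playbook \
  -i ./.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
  provisioning/oracle.yml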