At last year’s UKOUG I presented on Vagrant and how to use this great piece of software to easily test and debug Ansible scripts. Back in December I promised a write-up, but for various reasons I only now got around to finishing it.
Vagrant’s Ansible Provisioner
Vagrant offers two different Ansible provisioners: “ansible” and “ansible_local”. The “ansible” provisioner depends on an Ansible installation on the host. If this isn’t feasible, you can use “ansible_local” instead; as the name implies, it executes Ansible on the VM instead of on the host. This post is about the “ansible” provisioner.
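For reference, switching to the in-guest provisioner is mostly a matter of changing the provisioner name in the Vagrantfile. A minimal sketch, reusing the playbook introduced later in this post (Vagrant installs Ansible in the guest if it isn’t already present):

config.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "blogpost.yml"
end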
Most people use Vagrant with the default VirtualBox provider, and so do I in this post.
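If you have more than one provider installed, you can also request VirtualBox explicitly on the command line:

$ vagrant up --provider=virtualbox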
A closer look at the Vagrantfile
It all starts with a Vagrantfile. A quick “vagrant init ansibletestbase” will get you one. The test image I use for deploying the Oracle database comes with all the necessary block devices and packages, saving me quite some time. Naturally I’ll start with that one.
$ cat -n Vagrantfile
     1  # -*- mode: ruby -*-
     2  # vi: set ft=ruby :
     3
     4  Vagrant.configure("2") do |config|
     5
     6    config.ssh.private_key_path = "/path/to/ssh/key"
     7
     8    config.vm.box = "ansibletestbase"
     9    config.vm.define "server1" do |server1|
    10      server1.vm.box = "ansibletestbase"
    11      server1.vm.hostname = "server1"
    12      server1.vm.network "private_network", ip: "192.168.56.15"
    13
    14      config.vm.provider "virtualbox" do |vb|
    15        vb.memory = 2048
    16        vb.cpus = 2
    17      end
    18    end
    19
    20    config.vm.provision "ansible" do |ansible|
    21      ansible.playbook = "blogpost.yml"
    22      ansible.groups = {
    23        "oracle_si" => ["server1"],
    24        "oracle_si:vars" => {
    25          "install_rdbms" => "true",
    26          "patch_rdbms" => "true",
    27          "create_db" => "true"
    28        }
    29      }
    30    end
    31
    32  end
Since I have decided to create my own custom image without relying on the “insecure key pair”, I need to keep track of my SSH keys myself. This is done in line 6; without it there wouldn’t be a way to connect to the system, and Vagrant couldn’t bring the VM up.
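If you follow the same approach, a suitable key pair can be created with ssh-keygen. A sketch, using the placeholder path from the Vagrantfile (the matching public key has to be present in the box’s authorized_keys, of course):

$ ssh-keygen -t rsa -b 4096 -f /path/to/ssh/key -N ''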
Lines 8 to 18 define the VM – which image to derive it from, and how to configure it. The settings are pretty much self-explanatory so I won’t go into too much detail. Only this much:
- I usually want a host-only network instead of just a NAT device, and I create one in line 12. The IP address maps to an address on vboxnet0 in my configuration. If you don’t have a host-only network and want one, you can create it in VirtualBox’s preferences or on the command line, as shown after this list.
- In lines 14 to 17 I set some properties of my VM: I want it to come up with 2 GB of RAM and 2 CPUs.
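As promised, here is the command line alternative for creating the host-only network with VBoxManage. A quick sketch; the interface name and the host IP are assumptions matching the configuration above:

$ VBoxManage hostonlyif create
$ VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0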
Integrating Ansible into the Vagrantfile
The Ansible configuration is found on lines 20 to 30. As soon as the VM comes up I want Vagrant to run the Ansible provisioner and execute my playbook named “blogpost.yml”.
Most of my playbooks rely on global variables I define in the inventory file. Vagrant creates an inventory for me when it finds an Ansible provisioner in the Vagrantfile. The default inventory it creates doesn’t fit my needs, but that is easy to change: recent Vagrant versions allow me to shape the inventory just as I need it, as you can see in lines 22 to 28. The resulting inventory file is created in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory and looks like this:
$ cat .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant
server1 ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/path/to/ssh/key'
[oracle_si]
server1
[oracle_si:vars]
install_rdbms=true
patch_rdbms=true
create_db=true
That’s exactly what I’d use if I manually edited the inventory file, except that I don’t need to use “vagrant ssh-config” to figure out what the current SSH configuration is.
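For comparison, this is roughly what “vagrant ssh-config” would report for the running VM (abbreviated to the fields that matter here; the values correspond to the generated inventory above):

$ vagrant ssh-config server1
Host server1
  HostName 127.0.0.1
  User vagrant
  Port 2222
  IdentityFile /path/to/ssh/key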
I define a group of hosts and a few global variables for my playbook. This way, all I need to do is change the Vagrantfile to control the execution of my playbook, rather than maintaining the same information in two places (Vagrantfile and static inventory).
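A nice side effect: the generated file is a perfectly normal Ansible inventory, so nothing stops me from using it outside of Vagrant, for example to run the playbook by hand:

$ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory blogpost.yml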
Ansible Playbook
The final piece of the puzzle is the actual Ansible playbook. To keep the example simple I’m not going to use the inventory’s variables, except for the host group.
$ cat blogpost.yml
---
- name: blogpost
  hosts: oracle_si
  vars:
  - oravg_pv: /dev/sdb
  become: yes
  tasks:
  - name: say hello
    debug: msg="hello from {{ ansible_hostname }}"

  - name: partitioning PVs for the volume group
    parted:
      device: "{{ oravg_pv }}"
      number: 1
      state: present
      align: optimal
      label: gpt
Expressed in plain English, it reads: take the block device indicated by the variable oravg_pv and create a single partition on it spanning the entire device.
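For completeness: if I did want to use the global variables from the inventory, a task could be gated on them. A hypothetical sketch, not part of the playbook above:

  - name: create the database
    debug: msg="a real playbook would start creating the database here"
    when: create_db | bool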
As soon as I “vagrant up” the VM, it all comes together:
$ vagrant up
Bringing machine 'server1' up with 'virtualbox' provider…
==> server1: Importing base box 'ansibletestbase'…
==> server1: Matching MAC address for NAT networking…
==> server1: Setting the name of the VM: blogpost_server1_1554188252201_2080
==> server1: Clearing any previously set network interfaces…
==> server1: Preparing network interfaces based on configuration…
server1: Adapter 1: nat
server1: Adapter 2: hostonly
==> server1: Forwarding ports…
server1: 22 (guest) => 2222 (host) (adapter 1)
==> server1: Running 'pre-boot' VM customizations…
==> server1: Booting VM…
==> server1: Waiting for machine to boot. This may take a few minutes…
server1: SSH address: 127.0.0.1:2222
server1: SSH username: vagrant
server1: SSH auth method: private key
==> server1: Machine booted and ready!
==> server1: Checking for guest additions in VM…
==> server1: Setting hostname…
==> server1: Configuring and enabling network interfaces…
server1: SSH address: 127.0.0.1:2222
server1: SSH username: vagrant
server1: SSH auth method: private key
==> server1: Mounting shared folders…
server1: /vagrant => /home/martin/vagrant/blogpost
==> server1: Running provisioner: ansible…
Vagrant has automatically selected the compatibility mode '2.0' according to the Ansible version installed (2.7.7).
Alternatively, the compatibility mode can be specified in your Vagrantfile:
https://www.vagrantup.com/docs/provisioning/ansible_common.html#compatibility_mode
server1: Running ansible-playbook...
PLAY [blogpost] *************************************************************
TASK [Gathering Facts] ******************************************************
ok: [server1]
TASK [say hello] ************************************************************
ok: [server1] => {
"msg": "hello from server1"
}
TASK [partitioning PVs for the volume group] ********************************
changed: [server1]
PLAY RECAP ******************************************************************
server1 : ok=3 changed=1 unreachable=0 failed=0
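If you want to convince yourself that the partition really exists, you don’t even have to log in interactively; a quick check might look like this (the device name is the one from the playbook):

$ vagrant ssh server1 -c "sudo parted /dev/sdb print"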
Great! But I forgot to partition /dev/sd[cd] in the same way as I partitioned /dev/sdb! That’s a quick fix:
---
- name: blogpost
  hosts: oracle_si
  vars:
  - oravg_pv: /dev/sdb
  - asm_disks:
    - /dev/sdc
    - /dev/sdd
  become: yes
  tasks:
  - name: say hello
    debug: msg="hello from {{ ansible_hostname }}"

  - name: partitioning PVs for the Oracle volume group
    parted:
      device: "{{ oravg_pv }}"
      number: 1
      state: present
      align: optimal
      label: gpt

  - name: partition block devices for ASM
    parted:
      device: "{{ item }}"
      number: 1
      state: present
      align: optimal
      label: gpt
    loop: "{{ asm_disks }}"
Re-running the provisioning code couldn’t be easier, as Vagrant has a command for exactly that. A quick “vagrant provision” later, my system is configured exactly the way I want:
$ vagrant provision
==> server1: Running provisioner: ansible...
Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.7.7).

Alternatively, the compatibility mode can be specified in your Vagrantfile:
https://www.vagrantup.com/docs/provisioning/ansible_common.html#compatibility_mode
    server1: Running ansible-playbook...

PLAY [blogpost] ****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [server1]

TASK [say hello] ***************************************************************
ok: [server1] => {
    "msg": "hello from server1"
}

TASK [partitioning PVs for the Oracle volume group] ****************************
ok: [server1]

TASK [partition block devices for ASM] *****************************************
changed: [server1] => (item=/dev/sdc)
changed: [server1] => (item=/dev/sdd)

PLAY RECAP *********************************************************************
server1                    : ok=4    changed=1    unreachable=0    failed=0
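By the way, the compatibility mode notice at the top of the output is easy to get rid of. As the message and the linked documentation suggest, the mode can be pinned in the provisioner block; a minimal sketch:

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "blogpost.yml"
  # skip auto-detection and pin the compatibility mode explicitly
  ansible.compatibility_mode = "2.0"
end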
This is it! Using just a few commands I can spin up VMs and test my Ansible scripts, and later on, when I’m happy with them, check the code into source control.
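And whenever I need a clean slate to test the complete run from scratch, the cycle is just as short:

$ vagrant destroy -f
$ vagrant up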