Author Archives: Martin Bach

About Martin Bach

Oracle DBA and Linux enthusiast, part time author and presenter.

Data Access using Java Stored Procedures in Oracle Database

Recently I had to troubleshoot a problem related to Java stored procedures in Oracle Database. SQL access had been a bit slow, and I wanted a small test case where the stored procedure accesses information in a table. To my surprise I didn't find a suitable reference or tutorial, so I thought I'd quickly write up how to access the SQL layer from within a Java stored procedure.

Java stored procedures are covered in the Java Developer’s Guide, chapter 5.

Java Stored Procedures

Here is my little code example. In contrast to my earlier post I am going to use the server-side JDBC driver to query the database.

import java.sql.Connection;
import java.sql.Statement;
import java.sql.ResultSet;
import java.sql.SQLException;

import oracle.jdbc.OracleDriver;

public class SQLAccessDemo {
    
    public static String whoAmI() 
        throws SQLException
    {

        String username = "nobody";
        OracleDriver ora = new OracleDriver();

        try (
            Connection conn = ora.defaultConnection();
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("select user from dual");
        ) {

            while (rs.next()) {
                username = rs.getString("user");
            }
        }

        return username;
    }
}

The code isn't particularly sophisticated, and it isn't too different from client-side code either. The way you "connect" to the database is the main difference between client-side and server-side JDBC. With client-side JDBC you need to authenticate before you can issue commands against the database. In the context of (Java) stored procedures you are already connected and don't need to worry about connection pools, usernames, or how to pass passwords in a secure manner. The server-side JDBC driver is explained in the JDBC Developer's Guide and Reference, chapter 7. My example uses the defaultConnection() method, but there are others you might be interested in.

Loading the Java Code into the Database

I decided to use the loadjava tool this time to load the code into the database. Here is how you can load and resolve the code:

$ loadjava -thin -user martin/superSecretPassword@localhost/pdb1 -resolve -verbose SQLAccessDemo.java 
arguments: '-user' 'martin/***@localhost/pdb1' '-thin' '-resolve' '-verbose' 'SQLAccessDemo.java' 
creating : source SQLAccessDemo
loading  : source SQLAccessDemo
created  : CREATE$JAVA$LOB$TABLE
resolving: source SQLAccessDemo
Classes Loaded: 0
Resources Loaded: 0
Sources Loaded: 1
Published Interfaces: 0
Classes generated: 0
Classes skipped: 0
Synonyms Created: 0
Errors: 0

If there are errors in the code, loadjava prints them to the terminal. In my case there were 0 errors, allowing me to proceed. Both a JAVA SOURCE and a JAVA CLASS have been created in my schema.
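
To double-check, a quick query against USER_OBJECTS confirms this. Treat it as a sketch; the output depends on what else lives in your schema:

SELECT object_name, object_type
  FROM user_objects
 WHERE object_type IN ('JAVA SOURCE', 'JAVA CLASS');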

Creating a Call Specification

With that done all I need to do is create a call specification to publish whoAmI() to PL/SQL and SQL. Here is an example of an appropriate call specification:

create or replace function who_am_i return varchar2
as language java
name 'SQLAccessDemo.whoAmI() return java.lang.String';
/

The call specification maps the Java function to a PL/SQL code unit. In this example, a PL/SQL function named who_am_i returns a VARCHAR2 ("String") after invoking whoAmI() in the (Java) class SQLAccessDemo.

Invoking the stored procedure in SQL

Using the call specification I can call the function from SQL:

SQL> select who_am_i from dual;

WHO_AM_I
------------------------------------
MARTIN

This is of course a trivial example, but it should allow you to create your own Java stored procedures, accessing the database using server-side JDBC.

Summary

Java stored procedures aren’t too dissimilar from client-side JDBC code, which makes it easy for experienced developers to switch to writing server-side code in Java if needed. The main difference is the use of the server-side driver, see above for a link to the docs.


Vagrant Ansible Provisioner: working with the Ansible Inventory – addendum

Recently I wrote a post about one of my dream combinations, Ansible and Vagrant. After hitting the publish button I noticed that there might be a need for a part II – passing complex data types such as lists and dicts to Ansible via a Vagrantfile.

I wrote a similar post for the situation where you invoke an Ansible playbook directly from the command line. In this article the Ansible playbook is invoked as part of a call to vagrant up or vagrant provision.

Setup

I’m going to reuse the Vagrantfile from the previous article:

Vagrant.configure("2") do |config|
  
  config.vm.box = "debianbase"
  config.vm.hostname = "debian"

  config.ssh.private_key_path = "/home/martin/.ssh/debianbase"

  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
    vb.memory = "1024"
    vb.name = "debian"
  end

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/example01.yml"
    ansible.verbose = "v"
    # ...
  end
end

The directory/file layout is also identical, repeated here for convenience:

$ tree provisioning/
provisioning/
├── example01.yml
├── example02.yml
├── group_vars
│   └── all.yml
└── roles
    └── role1
        └── tasks
            └── main.yml

I used Ubuntu 22.04, patched to 230306, with both Ansible and Vagrant as provided by the distribution:

  • Ansible 2.10.8
  • Vagrant 2.2.19

Passing lists to the Ansible playbook

This time, however, I'd like to pass a list to the playbook indicating which block devices to partition. The variable is a list with one or more elements. The Ansible code iterates over the list and performs the action on the current item. Here's the code from the playbook example01.yml:

- hosts: default

  tasks: 
  - ansible.builtin.debug:
      var: blkdevs

  - name: print block devices to be partitioned
    ansible.builtin.debug:
      msg: If this was a call to community.general.parted I'd partition {{ item }} now
    loop: "{{ blkdevs }}"

The question is: how can I pass a list to the playbook? As with the scalar data types I wrote about yesterday, you use host_vars in the Vagrantfile:

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/example01.yml"
    ansible.verbose = "v"
    ansible.host_vars = {
      "default" => {
        "blkdevs" => '[ "/dev/sdb", "/dev/sdc" ]'
      }
    }
  end

Note the use of single and double quotes! Without quotes around the entire RHS expression Ansible will complain about a syntax error in the dynamically generated inventory. The provisioner does what it’s supposed to do:

PLAY [default] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [ansible.builtin.debug] ***************************************************
ok: [default] => {
    "blkdevs": [
        "/dev/sdb",
        "/dev/sdc"
    ]
}

TASK [print block devices to be partitioned] ***********************************
ok: [default] => (item=/dev/sdb) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdb now"
}
ok: [default] => (item=/dev/sdc) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdc now"
}

PLAY RECAP *********************************************************************
default                    : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 

Passing Dicts to the Ansible playbook

Passing a dict works exactly the same way, which is why I feel like I can keep this section short. The Vagrantfile uses the same host_var, blkdevs, but this time it’s a dict with keys indicating the intended use of the block devices. Each key is associated with a list of values containing the actual block device(s). Lists are perfectly fine even if they only contain a single item ;)

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/example02.yml"
    ansible.verbose = "v"
    ansible.host_vars = {
      "default" => {
        "blkdevs" => 
          '{ "binaries": ["/dev/sdb"], "database": ["/dev/sdc", "/dev/sdd"], "fast_recovery_area": ["/dev/sde"] }'
      }
    }
  end

The playbook iterates over the list of block devices provided as the dict’s values:

- hosts: default
  become: true

  tasks: 
  - name: format block devices for Oracle binaries
    ansible.builtin.debug:
      msg: If this was a call to community.general.parted I'd partition {{ item }} now
    loop: "{{ blkdevs.binaries }}"
  
  - name: format block devices for Oracle database files
    ansible.builtin.debug:
      msg: If this was a call to community.general.parted I'd partition {{ item }} now
    loop: "{{ blkdevs.database }}"
  
  - name: format block devices for Oracle database Fast Recovery Area
    ansible.builtin.debug:
      msg: If this was a call to community.general.parted I'd partition {{ item }} now
    loop: "{{ blkdevs.fast_recovery_area }}"

Using lists as the dict's values avoids having to distinguish between a single block device like /dev/sdc and multiple block devices like /dev/sdc and /dev/sdd.

Et voila! Here’s the result:

PLAY [default] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [format block devices for Oracle binaries] ********************************
ok: [default] => (item=/dev/sdb) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdb now"
}

TASK [format block devices for Oracle database files] **************************
ok: [default] => (item=/dev/sdc) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdc now"
}
ok: [default] => (item=/dev/sdd) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sdd now"
}

TASK [format block devices for Oracle database Fast Recovery Area] *************
ok: [default] => (item=/dev/sde) => {
    "msg": "If this was a call to community.general.parted I'd partition /dev/sde now"
}

PLAY RECAP *********************************************************************
default                    : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 

Happy automating!

Ansible tips’n’tricks: gather facts in an ad-hoc fashion

There are times when I really need to get some ansible_facts from a host to work out details about, say, the network card, storage, or Linux distribution before I can continue coding. And I don't want to (or don't have the patience to) add a debug step to my Ansible playbook either :) Thankfully Ansible has just the right tool for the job, called ad-hoc command execution.

Since I can never remember how to gather ansible_facts I decided to write it down; hopefully this saves me (and you!) 5 minutes next time.

Setup

I am using ansible-5.9.0-1.fc36.noarch as provided by Fedora 36 (which includes ansible-core-2.12.10-1.fc36.noarch) on Linux x86-64. Vagrant 2.3.4 has been provided by the HashiCorp repository.

Gathering facts: using Vagrant’s dynamic Ansible inventory

If you are using the Ansible provisioner with your Vagrant box, Vagrant will create a suitable inventory for you. Assuming there is only a single VM defined in your Vagrantfile you can use the following command to gather facts:

ansible -i .vagrant/provisioners/ansible/inventory/ default -m setup
default | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "10.0.2.15"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::a00:27ff:fec0:f04e"
        ],
        "ansible_apparmor": {
            "status": "enabled"

If you have multiple VMs defined in your Vagrantfile you need to specify either all or the name of the VM as defined in the inventory.
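
For example, assuming a second VM named web01 (the name is made up and has to match a host in the generated inventory):

ansible -i .vagrant/provisioners/ansible/inventory/ all -m setup
ansible -i .vagrant/provisioners/ansible/inventory/ web01 -m setup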

Gathering facts without an inventory

If you have a VM you can SSH to there is an alternative option available to you: simply specify the IP address or DNS name of the VM as the Ansible inventory followed by a ",", like so:

ansible -i nginx, nginx -u ansible --private-key ~/.ssh/ansible -m ansible.builtin.setup | head
nginx | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "10.0.2.15",
            "192.168.56.43"
        ],
        "ansible_all_ipv6_addresses": [
            "fe80::a00:27ff:fe8d:7f5f",
            "fe80::a00:27ff:fe37:33f6"
        ],

That’s all there is to gathering ansible_facts in an ad-hoc fashion. Happy automating!

Vagrant Ansible Provisioner: working with the Ansible Inventory

Vagrant and Ansible are a great match: using Vagrant it’s very easy to work with virtual machines. Creating, updating, and removing VMs is just a short command away. Vagrant provides various provisioners to configure the VM, and Ansible is one of these. This article covers the ansible provisioner as opposed to ansible_local.

Earlier articles I wrote might be of interest in this context.

The post was written using Ubuntu 22.04 patched to 230306; I used Ansible and Vagrant as provided by the distribution:

  • Ansible 2.10.8
  • Vagrant 2.2.19

Configuring the Ansible Inventory

Very often the behaviour of an Ansible playbook is controlled using variables. Providing variables to Ansible from a Vagrantfile is quite neat and is the subject of this article.

Let’s have a look at the most basic Vagrantfile:

Vagrant.configure("2") do |config|
  
  config.vm.box = "debianbase"
  config.vm.hostname = "debian"

  config.ssh.private_key_path = "/home/martin/.ssh/debianbase"

  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
    vb.memory = "2048"
    vb.name = "debian"
  end
  
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/blogpost.yml"
    ansible.verbose = "v"
  end
end

I frequently use a flag indicating whether the Ansible script should reboot the VM after all packages have been updated. Within the provisioning folder I store group_vars, roles, and the main playbook as per the recommendation in the docs:

$ tree provisioning/
provisioning/
├── blogpost.yml
├── group_vars
│   └── all.yml
└── roles
    └── role1
        └── tasks
            └── main.yml

All global variables I don't necessarily expect to change are stored in group_vars/all.yml. This includes the reboot_flag variable, which defaults to false (a sketch of the file follows after the playbook). The playbook does not need to list the variable in its own vars section; in fact doing so would grant the variable a higher precedence and my way of providing a variable to Ansible via Vagrant would fail. Here is the playbook:

- hosts: default
  become: true

  tasks: 
  - debug:
      var: reboot_flag

  - name: reboot
    ansible.builtin.reboot:
    when: reboot_flag | bool
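
For reference, the corresponding entry in provisioning/group_vars/all.yml can be as simple as the following. The file isn't reproduced in the post, so treat this as a sketch:

---
# provisioning/group_vars/all.yml
reboot_flag: false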

Since rebooting can be time consuming I don't want to do it by default; that's fine by me as long as I remember to reboot manually later.

Let’s see what happens when the VM is provisioned:

PLAY [default] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [debug] *******************************************************************
ok: [default] => {
    "reboot_flag": false
}

TASK [reboot] ******************************************************************
skipping: [default] => {
    "changed": false,
    "skip_reason": "Conditional result was False"
}

PLAY RECAP *********************************************************************
default                    : ok=2    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Overriding variables

If I want to override the flag I can do so by changing only the Vagrantfile, without touching my Ansible playbook. Thanks to host_vars I can pass variables to Ansible via the inventory. Here's the changed section in the Vagrantfile:

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/blogpost.yml"
    ansible.verbose = "v"
    ansible.host_vars = {
      "default" => {
        "reboot_flag" => true
      }
    }
  end

All host_vars for my default VM are then appended to the inventory in .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory.
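
For illustration, the appended host line might look roughly like this; paths, SSH settings, and quoting depend on your Vagrant version and are abbreviated here:

default ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' reboot_flag=true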

Next time I run vagrant provision the flag is changed to true, and the VM is rebooted:

PLAY [default] *****************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [debug] *******************************************************************
ok: [default] => {
    "reboot_flag": "true"
}

TASK [reboot] ******************************************************************
changed: [default] => {
    "changed": true,
    "elapsed": 20,
    "rebooted": true
}

PLAY RECAP *********************************************************************
default                    : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 

Summary

Vagrant offers a very neat way of creating an Ansible inventory on the fly. If your Ansible playbooks are written such that different execution paths and options are configurable via variables, a single playbook becomes highly flexible and can be used for many things. In the age of version control it's very convenient not to have to touch the source code of an Ansible playbook, as that might interfere with other projects. Variables passed at runtime are much better suited to creating flexible automation scripts.

Avoiding pitfalls when using cURL in CI/CD pipelines

Continuous Integration/Continuous Delivery (or Deployment, depending on your point of view) pipelines are at the core of many successful software projects. When designing your pipelines you sooner or later end up using REST calls to perform certain tasks. cURL is a popular command line tool to invoke REST APIs, and is commonly used in pipelines. Before you start using cURL in that capacity I’d like to draw your attention to a potential pitfall you can run into.

CI/CD Pipelines

A CI/CD pipeline typically consists of a series of tasks executed after a (git) commit is pushed to the remote repository. The idea is to ensure compliance with coding standards, formatting, and code quality, amongst a great wealth of other things. A pipeline is typically sub-divided into stages such as "build", "lint", "deploy" or anything else you can think of. Each stage consists of one or more tasks.

Whether or not the pipeline progresses to the next stage depends on the success or failure of tasks. Return codes are usually used to determine success or failure: a return code of 0 implies success, everything else usually terminates the pipeline’s execution.

Experimenting with cURL Exit Codes

In order to use cURL effectively in a CI pipeline it's important to understand its exit codes. Consider the following simulated API using node and express.js:

import express from 'express'
const app = express()
const port = 8080
const host = '0.0.0.0'

// allow for a successful test
app.get('/', (req, res) => {
  res.set('Content-Type', 'text/plain')
  res.send('test successful')
})

// invoke this URL to provoke a HTTP 400 (bad request) error
// see https://expressjs.com/en/4x/api.html#res.set for details
app.get('/failure', (req, res) => {
  res.set('Content-Type', 'text/plain')
  res.status(400).send('Bad Request')
})

app.listen(port, host, () => {
  console.log(`Simulated API server available on ${host}:${port}!`)
})

I created a small container image with the above code using the node:lts image (that’s node 18.14.2 and express 4.18.2, the most current versions at the time of writing, Feb 25th) and ran it.
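
The Dockerfile itself isn't shown in the post; a minimal sketch matching that description might look like this (file names assumed, api.mjs refers to the code above):

FROM docker.io/library/node:lts

WORKDIR /app

# install the express dependency declared in package.json
COPY package*.json ./
RUN npm install

# add the simulated API
COPY api.mjs ./

EXPOSE 8080
CMD ["node", "api.mjs"]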

"But what about security?" I hear you ask. You will undoubtedly have noted that this isn't production code: it lacks authentication and other security features, logging, and basically everything apart from returning a bit of text and an HTTP status code. I'm also going to use HTTP calls for the API – enabling HTTPS would have been overkill for my example. In the real world you wouldn't run APIs without TLS protection, would you? Since this post is about HTTP status codes and cURL in CI pipelines none of the extra bells and whistles are necessary, and crucially they'd probably distract from the actual problem. If you're coding your APIs you should always adhere to industry best practices!

Starting the container

I started the container as follows:

podman run --rm -it --name some-api --publish 8080:8080 api:0.5

The CMD directive in the project’s Dockerfile starts node and passes the api.mjs file to it. The API is now ready for business:

Simulated API server available on 0.0.0.0:8080!

Scenario 1: normal, successful completion

Let’s start with the successful invocation of the simulated API:

$ curl http://localhost:8080
test successful
$ echo $?
0

OK, nothing to see here, moving on… This is what was expected and shown for reference ;)

Scenario 2: Bad Request

I’m pointing curl to http://localhost:8080/failure next:

$ curl http://localhost:8080/failure
Bad Request
$ echo $?
0

Hmm, that's odd: curl's return code is 0 (= success) despite the error. Let's dig a little deeper by using the verbose option and including the response headers:

$ curl -iv http://localhost:8080/failure
*   Trying ::1:8080...
* Connected to localhost (::1) port 8080 (#0)
> GET /failure HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.74.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 400 Bad Request
HTTP/1.1 400 Bad Request
< X-Powered-By: Express
X-Powered-By: Express
< Content-Type: text/plain; charset=utf-8
Content-Type: text/plain; charset=utf-8
< Content-Length: 11
Content-Length: 11
< ETag: W/"b-EFiDB1U+dmqzx9Mo2UjcZ1SJPO8"
ETag: W/"b-EFiDB1U+dmqzx9Mo2UjcZ1SJPO8"
< Date: Sat, 25 Feb 2023 11:34:16 GMT
Date: Sat, 25 Feb 2023 11:34:16 GMT
< Connection: keep-alive
Connection: keep-alive
< Keep-Alive: timeout=5
Keep-Alive: timeout=5

< 
* Connection #0 to host localhost left intact
Bad Request

So it’s pretty clear that the HTTP status code is 400 (Bad Request). But that’s not reflected in the return code. Let’s fix this!

Instructing cURL to fail

A look at the cURL manual page reveals this interesting option:

       -f, --fail
              (HTTP) Fail silently (no output at all) on server  errors.  This
              is  mostly done to enable scripts etc to better deal with failed
              attempts. In normal cases when an HTTP server fails to deliver a
              document,  it  returns  an HTML document stating so (which often
              also describes why and more). This flag will prevent  curl  from
              outputting that and return error 22.

              This  method is not fail-safe and there are occasions where non-
              successful response codes will slip through, especially when au‐
              thentication is involved (response codes 401 and 407).

Which looks like exactly what I need. Let’s try this option:

$ curl --fail http://localhost:8080/failure
curl: (22) The requested URL returned error: 400 Bad Request
$ echo $?
22

Well that’s better! There’s a non-zero return code now.

Summary

The --fail option in curl (or --fail-with-body if your version of curl is 7.76 or later) allows DevOps engineers to architect their pipelines with greater resilience. Rather than manually parsing the cURL output checking for errors you can now rely on the REST API call's return code to either proceed with the pipeline or stop execution. Please note that the --fail option isn't fail-safe, as per the above comment in the man-page. Neither does it protect you from an API returning an HTTP 200 code when in fact an error occurred. But it's definitely something I'll use from now on by default.
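
For completeness, the newer option should behave along these lines against the same endpoint; the output below is assumed based on the earlier runs and abbreviated:

$ curl --fail-with-body http://localhost:8080/failure
Bad Request
$ echo $?
22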

Rendering .adoc include directives properly on GitHub

I recently worked on an issue where a perfectly fine AsciiDoc file didn't render properly in GitHub. At first I thought it was a broken file reference, but when I used the preview in my IDE I noticed that the syntax and the link to the file are both correct. And yes, you can tell I'm new to AsciiDoc :)

Here’s a screenshot of my document not rendering the way I intended in GitHub:

The Terraform example in section 2.3 should have displayed the contents of ./main.tf instead of a line starting with link:

As it turns out this is a long-standing problem; see issue 1095 in GitHub. The relevant section from the source file reads:

=== Python Example

This is equally useless Python code.

[source, python]
----
import os
print ("hello ASCII doc")
----

=== Terraform Example

Unlike the previous examples the Terraform code is imported. This might not render properly in Github.

[source, hcl]
----
include::./main.tf[lines=1..-1]
----

You should see the configuration of a `provider {}` block

I didn't go to great lengths with the Terraform code; all it does is show the definition of the Oracle Cloud Infrastructure provider:

#
# configure the Terraform Provider for Oracle Cloud Infrastructure
#
provider "oci" {
  fingerprint          = var.api_fingerprint
  private_key_path     = var.api_private_key_path
  region               = var.region
  tenancy_ocid         = var.tenancy_id
  user_ocid            = var.user_id
}

# add some actual code next...

Interestingly, most IDEs render the AsciiDoc correctly: they show the combined text from both files even in preview mode. It's really down to the aforementioned GitHub issue that my file doesn't render the way I have in mind.

Working around the problem

In an attempt to save you 5 minutes I'd like to show you a potential workaround using asciidoctor-reducer. I opted to install it in a container; this way it should be easier to use in my CI pipeline. The exact way you choose to invoke the tool does not matter, the actual call is most likely very similar to this:

$ asciidoctor-reducer my.adoc -o my-reduced.adoc

If you can use GitHub actions in your project you might want to have a look at an example featuring GitHub actions instead.

The asciidoctor-reducer post-processor combined the files: instead of an include directive, the contents of main.tf are present in the AsciiDoc file.

$ diff my.adoc my-reduced.adoc
55c55,67
< include::./main.tf[lines=1..-1]
---
> 
> #
> # configure the Terraform Provider for Oracle Cloud Infrastructure
> #
> provider "oci" {
>   fingerprint          = var.api_fingerprint
>   private_key_path     = var.api_private_key_path
>   region               = var.region
>   tenancy_ocid         = var.tenancy_id
>   user_ocid            = var.user_id
> }
> 
> # add some actual code next...
63d74
< 

This is a solution that works quite well for me personally. All I have to do is plug the container into my CI pipeline and have the tool create the combined document for me. Since I can review/test all inputs separately there is no need for me to check the generated file back into git. An alternative way of automating the generation of the reduced document is to create a pre-commit git hook. As with everything, possibilities are endless. Just pick the one that works for you.

Summary

Until GitHub supports the include directive, AsciiDoc workarounds are needed for documents to be rendered with the correct information. Using asciidoctor-reducer is a great option since it generates the desired results without requiring duplication of content.

As with all open-source tools make sure their license is compatible with your use case/company. I haven't tested the tool thoroughly yet, so please ensure you're comfortable with the way it works, especially with regard to unwanted side effects. This post is not an endorsement of either AsciiDoc or asciidoctor-reducer: use at your own risk and always have a backup ;)

Putty-like SSH port forwarding on Linux and MacOS

As a Linux or Mac user you benefit from a very useful, built-in terminal and SSH client implementation that’s mostly identical across all Unix-like systems. The situation used to be different on Windows.

Before Windows supported a built-in SSH client on the command line, Putty was (and still is!) one of the primary tools available to perform remote administration. One of the nice things in Putty is its ability to add port forwarding rules on the fly, e.g. after the session has already been established. A similar feature exists for SSH clients on macOS and Linux (and even Windows, as its ssh client is also based on OpenSSH).

Port-forwarding in openSSH clients

The contents of this post were tested with a wide range of SSH clients. I did not go so far as to research when dynamic port forwarding was introduced, but it seems to have been around for a while. For the most part I used the SSH client shipping with Oracle Linux 8.6.

Port-forwarding at connection time

You can specify either the -L or -R flag (and -D for some fancy SOCKS options not relevant to this post) when establishing an SSH session to a remote host, specifying how ports should be forwarded. Throw in the -N flag and you don't even open your login shell! That's a very convenient way to enable port forwarding. As long as the command shown below isn't CTRL-C'd the SSH tunnel will persist.

[martin@host]$ ssh -i ~/.ssh/vagrant -N -L 5510:server2:5510 vagrant@server2

Occasionally I don't know in advance which ports I have to forward, and I'm not always keen to establish a new session. Wouldn't it be nice if you could simply add port forwarding rules on the fly, just like with Putty?

Putty-like port-forwarding on the command line

Once established you can control the behaviour of your SSH session using escape characters. The ssh(1) man page lists the available options in a section titled “ESCAPE CHARACTERS” (yes, the man page lists it in uppercase, it wasn’t me shouting).

The most interesting escape key is ~C: it opens a command line. I’m quoting from the docs here:

[~C] Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing port-forwardings with -KL[bind_address:]port for local, -KR[bind_address:]port for remote and -KD[bind_address:]port for dynamic port-forwardings. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5). Basic help is available, using the -h option.

man ssh(1)

Let's try this in practice. Assume I'd like to use port forwarding to tunnel the Oracle Enterprise Manager (EM) Express port for one of my Pluggable Databases (PDBs) to my local laptop. The first step is to establish the port number used by EM Express.

SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> select dbms_xdb_config.gethttpsport from dual;

GETHTTPSPORT
------------
	5510

Right, the port number is 5510! It's above the magic number of 1024 and therefore not a privileged port (only root can bind to ports below 1024). Let's add this to my existing interactive SSH connection:

[vagrant@server2 ~]$      # hit ~ followed by C to open the command line
ssh> L5510:server2:5510   # add a local port forwarding rule
Forwarding port.

As soon as you see the message “Forwarding port” you are all set, provided of course the ports are defined correctly and there’s no service running on your laptop’s port 5510. Next, when I point my favourite web browser to https://localhost:5510/em the connection request is forwarded to server2’s port 5510. In other words, I can connect to Enterprise Manager Express.

Should you find yourself in a situation where you’re unsure which ports you have forwarded, you can find out about that, too. Escape character ~# displays currently forwarded ports:

[vagrant@server2 ~]$ ~#
The following connections are open:
  #0 client-session (t4 r0 i0/0 o0/0 e[write]/4 fd 4/5/6 sock -1 cc -1 io 0x01/0x01)
  #3 direct-tcpip: listening port 5510 for server2 port 5510, connect from 127.0.0.1 port 58950 to 127.0.0.1 port 5510 (t4 r1 i0/0 o0/0 e[closed]/0 fd 9/9/-1 sock 9 cc -1 io 0x01/0x00)

Your client session is always present as #0. In the above output #3 indicates the browser session I established to EM Express. Unfortunately a forwarded port is only shown after an initial connection has been established. This is close to Putty's behaviour, but not a match. If you really need to know, you have to use lsof or netstat and related tools.
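
For example, a command along these lines, run on the laptop, shows whether the SSH client is listening on the forwarded port; the exact output depends on your system:

$ lsof -nP -iTCP:5510 -sTCP:LISTEN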

You can even stop forwarding sessions on the command line:

[vagrant@server2 ~]$ 
ssh> KL5510
Canceled forwarding.

Once all sessions previously using the forwarded port have ended, the information is removed from the output of ~# in ssh.

Summary

The ssh command line client offers quite a few options not many users are aware of. Dynamically adding port forwarding rules to a session is a great feature I use frequently. Although it's not quite on par with Putty's port forwarding options dialogue it's nevertheless very useful, and I find myself mainly adding forwarding rules. The sshd (= server) configuration must of course allow port forwarding for this to work; if port forwarding fails because the admin disabled it you'll get a message similar to this one in your ssh session:

[vagrant@server2 ~]$ channel 3: open failed: administratively prohibited: open failed

In which case you are out of luck.
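
If you administer the remote host yourself, the server-side setting controlling this is AllowTcpForwarding; a minimal sketch of the relevant sshd configuration (sshd needs a reload after changing it):

# /etc/ssh/sshd_config on the remote host
AllowTcpForwarding yes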

Podman secrets: a better way to pass environment variables to containers

Podman became the standard container runtime with Oracle Linux 8 and later, and I have become a fan almost instantly. I especially like the fact that it is possible to run containers with far fewer privileges than previously required. Most Podman containers can be started by regular users, a fact I greatly appreciate in my development environments.

A common technique of passing information to containers is to use the --env command line argument when invoking podman run. Since passing sensitive information on the command line is never a good idea I searched for alternatives for a local development environment. Podman secrets are a very promising approach although not without their own issues. Before you consider using them please ensure your security department signs their use off, as always. They are pretty easy to translate back into plain text if you have access to the container host!

Podman Secrets

Podman secrets provide an alternative way for handling environment variables in containers. According to the documentation,

A secret is a blob of sensitive data which a container needs at runtime but should not be stored in the image or in source control, such as usernames and passwords, TLS certificates and keys, SSH keys or other important generic strings or binary content (up to 500 kb in size).

Why is this a big deal for me? You frequently find instructions in blog posts and other sources on how to pass information to containers, such as user names and passwords. For example:

$ podman run --rm -it --name some-container \
-e USERNAME=someUsername \
-e PASSWORD=thisShouldNotBeEnteredHere \
docker.io/…/

Whilst specifying usernames and passwords on the command line is convenient for testing, passing credentials this way is not exactly secure. Quite the contrary. Plus there is a risk these things make it into a source control system and thus become publicly available.

An alternative way of passing information to containers is to use Podman secrets. This way deployment and configuration can be kept separate.

Podman Secrets in Action

Secrets are separate entities; you create them using podman secret create (documentation link). They work quite well, provided they do not contain newlines. A little application using Oracle's node driver to connect to an XE database and print some connection details serves as an example. The example assumes that an XE 21c container database has been started. An "application account" has been created, and the password to this account is stored locally as a secret named oracle-secret.
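
The post doesn't show the secret being created; it could have been done along these lines, with the value obviously made up. Using printf rather than echo avoids the troublesome trailing newline:

$ printf '%s' 'aSufficientlyComplexPassword' | podman secret create oracle-secret -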

The “application” is rather simple, it is based on the getting started section of the node-oracledb 5.5 driver documentation.

$ cat app.mjs 
import oracledb from 'oracledb';

(async() => {

  let connection;

  try {
    connection = await oracledb.getConnection( {
      user          : process.env.USERNAME,
      password      : process.env.PASSWORD,
      connectString : process.env.CONNECTSTRING
    });

    const result = await connection.execute(`
      select
        sys_context('userenv', 'instance_name') instance, 
        sys_context('userenv', 'con_name') pdb_name, 
        user
       from dual`,
      [],
      {
        outFormat: oracledb.OUT_FORMAT_OBJECT
      }
    );
    
    for (let r of result.rows) {
      console.log(`you are connected to ${r.INSTANCE}, PDB ${r.PDB_NAME} as ${r.USER}`);
    }

  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
})();

The important bit is right at the top: the Connection object is created using information provided via environment variables (USERNAME, PASSWORD, CONNECTSTRING). I locked the oracledb driver version in package.json at version 5.5.0, the most current release at the time of writing.
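
The package.json isn't reproduced in the post; pinning the driver could look roughly like this (name and version fields assumed):

{
  "name": "nodeapp",
  "version": "0.1.0",
  "dependencies": {
    "oracledb": "5.5.0"
  }
}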

Running the Container

After building the container image, the container can be started as follows:

$ podman run --rm -it \
--net oracle-net \
--name some-node-app \
-e USERNAME=martin \
-e CONNECTSTRING="oraclexe:/xepdb1" \
--secret oracle-secret,type=env,target=PASSWORD \
localhost/nodeapp:0.1

Translated into plain English this command starts an interactive, temporary container instance with an attached terminal for testing. The container is instructed to connect to the oracle-net network (a Podman network). A couple of environment variables are passed to the container: USERNAME and CONNECTSTRING. The use of the secret requires a little more explanation. The (existing) secret oracle-secret is passed as an environment variable (type=env). If it weren't for the target=PASSWORD directive the secret would be accessible in the container by its name, oracle-secret. Since I need an environment variable named PASSWORD (cf. app.mjs above) I changed the target name to match. Once the container is up an environment variable named PASSWORD is available.

The new container should connect to the database and print something along the lines of:

you are connected to XE, PDB XEPDB1 as MARTIN

Summary

Podman secrets allow developers to provide containers with information that shouldn't be part of a container image or (configuration) code. Provided the secrets are set up correctly they are a much better way of passing sensitive information than hard-coded values on the command line. I said they provide a better way: whilst secrets are definitely a step in the right direction they aren't perfect (read: secure).

Whenever it comes to sensitive information the advice remains the same: please have someone from the security team review the solution and sign it off before going ahead and using it. There are certainly more secure ways available to provide credentials to applications. Podman secrets however are a good first step in the right direction in my opinion.

Vagrant: always provision virtual machines

Ever since Spectre and Meltdown (two infamous side-channel attacks on CPUs) became public I have been thinking about better, more secure ways to browse the web. When I read that a commercial operating system vendor created a solution where a browser is started in a disposable sandbox that gets discarded when you exit the browser session, I thought of ways to implement this feature myself.

Since I’m a great fan of both Virtualbox and Vagrant I decided to use the combination of the two to get this done. My host runs Ubuntu 22.04 LTS, and I’m using Vagrant 2.2.19 (the one shipping with the distribution, it’s not the latest version!) as well as Virtualbox 6.1.40. Whilst the solution presented in this article provides a more secure (notice how I didn’t claim this to be secure ;) ) approach to web browsing it doesn’t keep the host up to date. Security updates for the host O/S and hypervisor (read: Virtualbox) are crucial, too.

Please be super-careful when thinking of implementing a strategy where provisioners always run: it can and potentially will break your system! For most use cases provisioning a VM each time it starts is not what you want.

Building a “browser” VM

I started off by creating a small “browser” VM with a minimal GUI and a web browser – nothing else – and registered this system as a vagrant box. This is the first step towards my solution: being able to create/tear down the sandbox. Not perfect, and there are more secure ways, but I’m fine with my approach.

The one thing that’s necessary though is updating the VM, ideally performed automatically, at each start. Vagrant provisioners can help with that.

Defining one or more provisioners in the Vagrantfile is a great way to initially configure a VM when it is created for the first time and works really well. Provisioners thankfully do NOT run with each subsequent start of the VM. If they were run each time it would probably be a disaster for all of my other Vagrant VMs. For my sandbox browser VM though I want all packages to be updated automatically.

Switching from on-demand provisioning to automatic provisioning

As I said, VMs are provisioned once by default; subsequent starts won't run the provisioners, as you can see in the output:

$ vagrant up

[output of vagrant bringing my VM up skipped]

==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.

The section detailing provisioners in my Vagrantfile is super simple because it has to run on Linux and Windows and I'm too lazy to install Ansible on my Windows box. The above output was caused by the following directive:

Vagrant.configure("2") do |config|

  # ... more directives ...

  config.vm.provision "shell",
    inline: "sudo apt-get update --error-on=any && sudo apt-get dist-upgrade -y"

  # ... even more directives ...

Looking at the command you may have guessed that this is a Debian-based VM, and I'm using neither Flatpak nor Snaps. All packages in this environment are DEBs. That's easier to maintain for me.

To change the provision section to always run, simply tell it to:

Vagrant.configure("2") do |config|

  # ... more directives ...

  config.vm.provision "shell",
    inline: "sudo apt-get update --error-on=any && sudo apt-get dist-upgrade -y",
    run: "always"

Next time the vagrant VM starts, the provisioner marked as “run: always” will be triggered, even though the VM wasn’t created from scratch:

$ vagrant up

[output of vagrant bringing my VM up skipped once more]

==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
==> default: Running provisioner: shell...
    default: Running: inline script

[output of apt-get omitted for brevity]

There you go! I could have achieved the same by telling vagrant to provision the VM using the --provision flag but I’m sure I would have forgotten that half the time.

Anyone using Ansible can benefit from running provisioners always, too:

Vagrant.configure("2") do |config|

  # ... more directives ...

  config.vm.provision "ansible", run: "always" do |ansible|
      ansible.playbook = "/path/to/ansible/playbook.yaml"
  end

Next time the VM is started by vagrant the Ansible playbook will be executed.

Summary

Vagrant can be instructed to run provisioners always if the use case merits it. For the most part it’s not advisable to run provisioners each time the VM comes up as it might well mess with the installation already present.

Creating a Java Stored Procedure in Oracle Database

This blog post provides a quick (and dirty) way of creating Java Stored Procedures in Oracle Database because I can’t ever remember how to do that. The Java Developer’s Guide details the use of Java in the database, chapter 5 explains how to create Java Stored Procedures. Please refer to the documentation for a proper discussion of Java in the Oracle database.

This blog was written using

  • Oracle 21c Enterprise Edition patched to 21.7.0
  • Oracle Linux 8.6
  • VirtualBox 6.1.40

Java stored procedures are written in Java (unsurprisingly). Before they can be used they have to be made available to the PL/SQL and SQL layer of the Oracle database. Therefore there are a few extra steps involved compared to writing stored procedures in PL/SQL.

Creating the Java Source

Rather than relying on the loadjava tool this post uses the CREATE JAVA command to create and compile the Java source. Note that errors in the code are not reported back to you so make sure that what you’re loading into the database is valid Java and complies with the requirements for Java Stored Procedures (like using static functions etc).

CREATE JAVA SOURCE NAMED helloclass_src AS
public class HelloClass { 
    public static String hello( String who )  {
        return "hello, " + who ; 
    }
}
/

This creates two new objects in my current schema: a JAVA SOURCE and its associated JAVA CLASS.

SELECT
    object_name,
    object_type
FROM
    user_objects
WHERE
    created > sysdate - 1;

      OBJECT_NAME    OBJECT_TYPE 
_________________ ______________ 
HelloClass        JAVA CLASS     
HELLOCLASS_SRC    JAVA SOURCE  

With the Java class stored in the database the next step is to make it available to the SQL and PL/SQL layers.

Publishing the Java Class

The hello() Java function returns a string, and I’m going to do the same with the PL/SQL call specification.

CREATE FUNCTION hello_java (
    p_who VARCHAR2
) RETURN VARCHAR2 
AS LANGUAGE JAVA 
NAME 'HelloClass.hello(java.lang.String) return java.lang.String';
/

The hello_java (PL/SQL!) function takes a single argument, p_who of (database) type VARCHAR2 and returns a VARCHAR2. The function is then mapped to the static hello() function in HelloClass, which is where you enter the Java world. hello() takes a string as an input parameter and returns a string.

Using hello_java

Once the PL/SQL call specification is created, it’s very easy to use the function:

SELECT
    hello_java('world') AS greeting
FROM
    dual;

       GREETING 
_______________ 
hello, world 

Although I named the function hello_java, there is no need to specify that Java is used under the covers. It just makes it easier for me to see that this function isn’t a PL/SQL but rather a Java function. Any valid PL/SQL identifier can be used in the call specification. Speaking of PL/SQL, I can of course use hello_java() in PL/SQL:

DECLARE
    l_string VARCHAR2(100);
BEGIN
    l_string := hello_java('world');
    DBMS_OUTPUT.PUT_LINE(l_string);
END;
/

Which prints “hello, world” once serveroutput is enabled.

Summary

I can never remember how to create Java stored procedures and hope this post helps you save 5 minutes as it does for me. There is of course a lot more to say about the topic, so please head over to the Java Developer’s Guide for more details.