
Putty-like SSH port forwarding on Linux and MacOS

As a Linux or Mac user you benefit from a very useful, built-in terminal and SSH client implementation that’s mostly identical across all Unix-like systems. The situation used to be different on Windows.

Before Windows supported a built-in SSH client on the command line, Putty was (and still is!) one of the primary tools available to perform remote administration. One of the nice things in Putty is its ability to add port forwarding rules on the fly, e.g. after the session has already been established. A similar feature exists for the SSH clients on MacOS and Linux (and even Windows, as its SSH client is also based on OpenSSH).

Port-forwarding in openSSH clients

The contents of this post were tested with a wide range of SSH clients. I did not go so far as to research when dynamic port forwarding was introduced, but it seems to have been around for a while. For the most part I used the SSH client shipping with Oracle Linux 8.6.

Port-forwarding at connection time

You can specify either the -L or -R flag (and -D for some fancy SOCKS options not relevant to this post) when establishing an SSH session to a remote host, specifying how ports should be forwarded. Throw in the -N flag and you don’t even open your login shell! That’s a very convenient way to enable port forwarding. As long as the command shown below isn’t CTRL-C’d the SSH tunnel will persist.

[martin@host]$ ssh -i ~/.ssh/vagrant -N -L 5510:server2:5510 vagrant@server2
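The -R flag works in the opposite direction: it makes a port on the remote host forward back to a service reachable from your local machine. A sketch, assuming a web server listens on the laptop’s port 80:

[martin@host]$ ssh -i ~/.ssh/vagrant -N -R 8080:localhost:80 vagrant@server2

While this command runs, anything connecting to server2’s port 8080 ends up at the laptop’s port 80.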

Occasionally I don’t know in advance which ports I have to forward, and I’m not always keen to establish a new session. Wouldn’t it be nice if you could simply add a port forwarding rule, just like with Putty?

Putty-like port-forwarding on the command line

Once established you can control the behaviour of your SSH session using escape characters. The ssh(1) man page lists the available options in a section titled “ESCAPE CHARACTERS” (yes, the man page lists it in uppercase, it wasn’t me shouting).

The most interesting escape key is ~C: it opens a command line. I’m quoting from the docs here:

[~C] Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing port-forwardings with -KL[bind_address:]port for local, -KR[bind_address:]port for remote and -KD[bind_address:]port for dynamic port-forwardings. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5). Basic help is available, using the -h option.

man ssh(1)

Let’s try this in practice. Let’s assume I’d like to use port-forwarding to tunnel the Oracle Enterprise Manager (EM) Express port for one of my Pluggable Databases (PDBs) to my local laptop. The first step is to establish the port number used by EM Express.

SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> select dbms_xdb_config.gethttpsport from dual;

GETHTTPSPORT
------------
	5510

Right, the port number is 5510! It’s above the magic number of 1024 and therefore not a protected port (only root can work with ports <= 1024). Let’s add this to my existing interactive SSH connection:

[vagrant@server2 ~]$      # hit ~ followed by C to open the command line
ssh> L5510:server2:5510   # add a local port forwarding rule
Forwarding port.

As soon as you see the message “Forwarding port” you are all set, provided of course the ports are defined correctly and there’s no service running on your laptop’s port 5510. Next, when I point my favourite web browser to https://localhost:5510/em the connection request is forwarded to server2’s port 5510. In other words, I can connect to Enterprise Manager Express.

Should you find yourself in a situation where you’re unsure which ports you have forwarded, you can find out about that, too. Escape character ~# displays currently forwarded ports:

[vagrant@server2 ~]$ ~#
The following connections are open:
  #0 client-session (t4 r0 i0/0 o0/0 e[write]/4 fd 4/5/6 sock -1 cc -1 io 0x01/0x01)
  #3 direct-tcpip: listening port 5510 for server2 port 5510, connect from 127.0.0.1 port 58950 to 127.0.0.1 port 5510 (t4 r1 i0/0 o0/0 e[closed]/0 fd 9/9/-1 sock 9 cc -1 io 0x01/0x00)

Your client session is always present as #0. In the above output #3 indicates the browser session I established to EM Express. Unfortunately the forwarded port is only shown after an initial connection was established. This is close to Putty’s behaviour, but not a match. If you really need to know you have to use lsof or netstat and related tools.

You can even stop forwarding sessions on the command line:

[vagrant@server2 ~]$ 
ssh> KL5510
Canceled forwarding.

Once all sessions previously using the forwarded port have ended, the information is removed from the output of ~# in ssh.

Summary

The ssh command line client offers quite a few options not many users are aware of. Dynamically adding port forwarding rules to a session is a great feature I use frequently. Although it’s not quite on par with Putty’s port forwarding options dialogue it’s nevertheless very useful, and I find myself mainly adding forwarding rules. The sshd (= server) configuration must of course allow port forwarding for this to work. If port forwarding fails because the admin disabled it, you’ll get a message similar to this one in your ssh session:

[vagrant@server2 ~]$ channel 3: open failed: administratively prohibited: open failed

In which case you are out of luck.
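If you administer the remote host yourself you can check whether TCP forwarding is permitted; the relevant sshd_config directive is AllowTcpForwarding. One way to inspect the effective setting (assuming root access on the server):

[vagrant@server2 ~]$ sudo sshd -T | grep -i allowtcpforwarding
allowtcpforwarding yes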


Podman secrets: a better way to pass environment variables to containers

Podman became the standard container runtime with Oracle Linux 8 and later, and I have become a fan almost instantly. I especially like the fact that it is possible to run containers with far fewer privileges than previously required. Most Podman containers can be started by regular users, a fact I greatly appreciate in my development environments.

A common technique of passing information to containers is to use the --env command line argument when invoking podman run. Since passing sensitive information on the command line is never a good idea I searched for alternatives for a local development environment. Podman secrets are a very promising approach although not without their own issues. Before you consider using them please ensure your security department signs off on their use, as always. They are pretty easy to translate back into plain text if you have access to the container host!

Podman Secrets

Podman secrets provide an alternative way for handling environment variables in containers. According to the documentation,

A secret is a blob of sensitive data which a container needs at runtime but should not be stored in the image or in source control, such as usernames and passwords, TLS certificates and keys, SSH keys or other important generic strings or binary content (up to 500 kb in size).

Why is this a big deal for me? You frequently find instructions in blog posts and other sources on how to pass information to containers, such as usernames and passwords. For example:

$ podman run --rm -it --name some-container \
-e USERNAME=someUsername \
-e PASSWORD=thisShouldNotBeEnteredHere \
docker.io/…/

Whilst specifying usernames and passwords on the command line is convenient for testing, passing credentials this way is not exactly secure. Quite the contrary. Plus there is a risk these things make it into a source control system and thus become publicly available.

An alternative is to pass information to containers using Podman secrets. This way deployment and configuration can be kept separate.

Podman Secrets in Action

Secrets are separate entities; you create them using podman secret create (documentation link). They work quite well, provided they do not contain newlines. A little application using Oracle’s node driver to connect to an XE database and print some connection details serves as an example. The example assumes that an XE 21c container database has been started. An “application account” has been created, and the password to this account is stored locally as a secret named oracle-secret.
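For reference, creating such a secret is a one-liner. Piping the value to podman secret create keeps it out of files you might forget to delete; the password shown is of course just a placeholder:

$ printf 'aSuperSecretPassword' | podman secret create oracle-secret -
$ podman secret ls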

The “application” is rather simple; it is based on the getting started section of the node-oracledb 5.5 driver documentation.

$ cat app.mjs 
import oracledb from 'oracledb';

(async() => {

  let connection;

  try {
    connection = await oracledb.getConnection( {
      user          : process.env.USERNAME,
      password      : process.env.PASSWORD,
      connectString : process.env.CONNECTSTRING
    });

    const result = await connection.execute(`
      select
        sys_context('userenv', 'instance_name') instance, 
        sys_context('userenv', 'con_name') pdb_name, 
        user
       from dual`,
      [],
      {
        outFormat: oracledb.OUT_FORMAT_OBJECT
      }
    );
    
    for (let r of result.rows) {
      console.log(`you are connected to ${r.INSTANCE}, PDB ${r.PDB_NAME} as ${r.USER}`);
    }

  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
})();

The important bit is right at the top: the Connection object is created using information provided via environment variables (USERNAME, PASSWORD, CONNECTSTRING). I locked the oracledb driver version in package.json at version 5.5.0, the most current release at the time of writing.
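The container image itself isn’t the focus of this post, but for completeness here is a sketch of how it might be built. The Dockerfile below is an illustration and not the exact one used here; note also that node-oracledb 5.x needs the Oracle Instant Client libraries at runtime, which are omitted from the sketch for brevity:

$ cat Dockerfile
FROM docker.io/library/node:18-slim

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY app.mjs ./

# node-oracledb 5.x also requires the Oracle Instant Client libraries;
# installing them is left out of this sketch
ENTRYPOINT ["node", "app.mjs"]

$ podman build -t localhost/nodeapp:0.1 .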

Running the Container

After building the container image, the container can be started as follows:

$ podman run --rm -it \
--net oracle-net \
--name some-node-app \
-e USERNAME=martin \
-e CONNECTSTRING="oraclexe:/xepdb1" \
--secret oracle-secret,type=env,target=PASSWORD \
localhost/nodeapp:0.1

Translated into plain English this command starts an interactive, temporary container instance with an attached terminal for testing. The container is instructed to connect to the oracle-net network (a Podman network). A couple of environment variables are passed to the container: USERNAME and CONNECTSTRING. The use of the secret requires a little more explanation. The (existing) secret oracle-secret is passed as an environment variable (type=env). If it weren’t for the target=PASSWORD directive the secret would be accessible in the container by its name, oracle-secret. Since I need an environment variable named PASSWORD (cf. app.mjs above) I changed the target name to match. Once the container is up, an environment variable named PASSWORD is available.

The new container should connect to the database and print something along the lines of:

you are connected to XE, PDB XEPDB1 as MARTIN
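This is also a good moment to repeat the caveat from the beginning of the post: anyone with access to the container host can read the secret back with very little effort. A quick illustration using a throwaway Alpine container (any image providing printenv will do):

$ podman run --rm --secret oracle-secret,type=env,target=PASSWORD \
docker.io/library/alpine printenv PASSWORD

The command happily prints the password in clear text.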

Summary

Podman secrets allow developers to provide information that shouldn’t be part of a container image or (configuration) code to containers. Provided the secrets are set up correctly they provide a much better way of passing sensitive information than hard-coded values on the command line. I said they provide a better way, but whilst secrets are definitely a step in the right direction they aren’t perfect (read: secure).

Whenever sensitive information is involved the advice remains the same: please have someone from the security team review the solution and sign it off before going ahead and using it. There are certainly more secure ways available to provide credentials to applications. Podman secrets however are a good first step in the right direction in my opinion.

Installing Podman on Oracle Linux 8

Rather than having to use a search engine to read up on how to install podman on Oracle Linux 8, I thought I’d write the procedure down. Hopefully this saves you (and me) a few minutes next time the task comes up. I probably should write a short Ansible Playbook at some point, but that’s for another post.

The easiest way to install podman on Oracle Linux 8 is to install the entire podman module. If you haven’t used modules and application streams in Oracle Linux 8 yet, you can find more details in the Oracle Linux 8 documentation. Quoting from chapter 5, section “Use DNF Modules and Application Streams” in the Managing Software in Oracle Linux guide:

DNF introduces the concepts of modules, streams and profiles to allow for the management of different versions of software applications within a single operating system release. Modules can be used to group together many packages that comprise a single application and its dependencies.

This sounds exactly like what I want.

Installing the Podman Module

So what do the concepts of stream and module mean in practice? Podman is shipped as a module in the Application Stream (AppStream):

[root@ol8podman ~]# dnf module list container-tools:ol8
Last metadata expiration check: 0:12:47 ago on Fri 15 Jul 2022 10:13:15 BST.
Oracle Linux 8 Application Stream (x86_64)
Name            Stream  Profiles Summary                                                                 
container-tools ol8 [d] common [ Most recent (rolling) versions of podman, buildah, skopeo, runc, conmon,
                        d]        runc, conmon, CRIU, Udica, etc as well as dependencies such as containe
                                 r-selinux built and tested together, and updated as frequently as every 
                                 12 weeks.

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled

Rather than installing the podman RPM on its own and figuring out which other packages I need, I went with the installation of the entire module:

[root@ol8podman ~]# dnf module install container-tools:ol8 
Last metadata expiration check: 0:15:25 ago on Fri 15 Jul 2022 10:13:15 BST.
Dependencies resolved.
=========================================================================================================
 Package                      Arch   Version                                     Repository         Size
=========================================================================================================
Installing group/module packages:
 buildah                      x86_64 1:1.24.2-4.module+el8.6.0+20665+a3b29bef    ol8_appstream     8.1 M
 cockpit-podman               noarch 43-1.module+el8.6.0+20665+a3b29bef          ol8_appstream     493 k
 conmon                       x86_64 2:2.1.0-1.module+el8.6.0+20665+a3b29bef     ol8_appstream      55 k
 container-selinux            noarch 2:2.179.1-1.module+el8.6.0+20665+a3b29bef   ol8_appstream      58 k
 containernetworking-plugins  x86_64 1:1.0.1-2.module+el8.6.0+20665+a3b29bef     ol8_appstream      18 M
 containers-common            x86_64 2:1-27.0.1.module+el8.6.0+20665+a3b29bef    ol8_appstream      67 k
 criu                         x86_64 3.15-3.module+el8.6.0+20665+a3b29bef        ol8_appstream     518 k
 crun                         x86_64 1.4.4-1.module+el8.6.0+20665+a3b29bef       ol8_appstream     209 k
 fuse-overlayfs               x86_64 1.8.2-1.module+el8.6.0+20665+a3b29bef       ol8_appstream      73 k
 libslirp                     x86_64 4.4.0-1.module+el8.6.0+20665+a3b29bef       ol8_appstream      70 k
 podman                       x86_64 2:4.0.2-6.module+el8.6.0+20665+a3b29bef     ol8_appstream      13 M
 python3-podman               noarch 4.0.0-1.module+el8.6.0+20665+a3b29bef       ol8_appstream     149 k
 runc                         x86_64 1:1.0.3-2.module+el8.6.0+20665+a3b29bef     ol8_appstream     3.0 M
 skopeo                       x86_64 2:1.6.1-2.module+el8.6.0+20665+a3b29bef     ol8_appstream     6.7 M
 slirp4netns                  x86_64 1.1.8-2.module+el8.6.0+20665+a3b29bef       ol8_appstream      51 k
 udica                        noarch 0.2.6-3.module+el8.6.0+20665+a3b29bef       ol8_appstream      49 k
Installing dependencies:
 checkpolicy                  x86_64 2.9-1.el8                                   ol8_baseos_latest 346 k
 cockpit-bridge               x86_64 264.1-1.0.1.el8                             ol8_baseos_latest 535 k
 fuse-common                  x86_64 3.3.0-15.0.2.el8                            ol8_baseos_latest  22 k
 fuse3                        x86_64 3.3.0-15.0.2.el8                            ol8_baseos_latest  55 k
 fuse3-libs                   x86_64 3.3.0-15.0.2.el8                            ol8_baseos_latest  95 k
 glib-networking              x86_64 2.56.1-1.1.el8                              ol8_baseos_latest 155 k
 gsettings-desktop-schemas    x86_64 3.32.0-6.el8                                ol8_baseos_latest 633 k
 json-glib                    x86_64 1.4.4-1.el8                                 ol8_baseos_latest 144 k
 libmodman                    x86_64 2.0.1-17.el8                                ol8_baseos_latest  36 k
 libnet                       x86_64 1.1.6-15.el8                                ol8_appstream      67 k
 libproxy                     x86_64 0.4.15-5.2.el8                              ol8_baseos_latest  75 k
 podman-catatonit             x86_64 2:4.0.2-6.module+el8.6.0+20665+a3b29bef     ol8_appstream     354 k
 policycoreutils-python-utils noarch 2.9-19.0.1.el8                              ol8_baseos_latest 253 k
 protobuf-c                   x86_64 1.3.0-6.el8                                 ol8_appstream      37 k
 python3-audit                x86_64 3.0.7-2.el8.2                               ol8_baseos_latest  87 k
 python3-chardet              noarch 3.0.4-7.el8                                 ol8_baseos_latest 195 k
 python3-idna                 noarch 2.5-5.el8                                   ol8_baseos_latest  97 k
 python3-libsemanage          x86_64 2.9-8.el8                                   ol8_baseos_latest 128 k
 python3-pip                  noarch 9.0.3-22.el8                                ol8_appstream      20 k
 python3-policycoreutils      noarch 2.9-19.0.1.el8                              ol8_baseos_latest 2.2 M
 python3-pysocks              noarch 1.6.8-3.el8                                 ol8_baseos_latest  34 k
 python3-pytoml               noarch 0.1.14-5.git7dea353.el8                     ol8_appstream      25 k
 python3-pyxdg                noarch 0.25-16.el8                                 ol8_appstream      94 k
 python3-requests             noarch 2.20.0-2.1.el8_1                            ol8_baseos_latest 123 k
 python3-setools              x86_64 4.3.0-3.el8                                 ol8_baseos_latest 624 k
 python3-setuptools           noarch 39.2.0-6.el8                                ol8_baseos_latest 163 k
 python3-urllib3              noarch 1.24.2-5.0.1.el8                            ol8_baseos_latest 177 k
 python36                     x86_64 3.6.8-38.module+el8.5.0+20329+5c5719bc      ol8_appstream      19 k
 shadow-utils-subid           x86_64 2:4.6-16.el8                                ol8_baseos_latest 112 k
 yajl                         x86_64 2.1.0-10.el8                                ol8_appstream      41 k
Installing module profiles:
 container-tools/common                                                                                 
Enabling module streams:
 container-tools                     ol8                                                                
 python36                            3.6                                                                

Transaction Summary
=========================================================================================================
Install  46 Packages

Total download size: 58 M
Installed size: 200 M

...

This is great! The current stable podman release at the time of writing is 4.1.1, so getting 4.0.2 doesn’t look too bad to me :)
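A quick smoke test confirms the installation is functional; the hello image referenced below is just one convenient option for this:

[root@ol8podman ~]# podman --version
podman version 4.0.2
[root@ol8podman ~]# podman run --rm quay.io/podman/hello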

Summary

Installing podman on Oracle Linux 8 is quite simple provided you are happy to install the entire podman module. The module provides a very convenient way to install recent releases for podman and its related tools (buildah, skopeo, …). Podman is important enough to get its own User Guide in the Oracle Linux 8 documentation set, the installation instructions I used when putting this post together can be found in chapter 2.

Linking Containers with Podman

Users of the Docker engine might find that their container runtime isn’t featured prominently in Oracle Linux 8. In fact, unless you change the default configuration a dnf search does not reveal the engine at all. For better or for worse, it appears the industry has been gradually switching from Docker to Podman and its related ecosystem.

Whilst most Docker commands can be translated 1:1 to the Podman world, some differences exist. Instead of highlighting all the changes here please have a look at the Podman User Guide.

Overview

This article explains how to create a network link between 2 containers:

  1. Oracle XE 21c
  2. SQLcl client

These containers are going to be run "rootless", which has a few implications. By default Podman will allocate storage for containers in ~/.local/share/containers/ so please ensure you have sufficient space in your home directory.

The article refers to Gerald Venzl’s Oracle-XE images and you will create another image for SQLcl.

Installation

If you haven’t already installed Podman you can do so by installing the container-tools:ol8 module:

[opc@podman ~]$ sudo dnf module install container-tools:ol8
Last metadata expiration check: 0:06:04 ago on Mon 21 Mar 2022 13:19:40 GMT.
Dependencies resolved.
========================================================================================================================
 Package                         Arch      Version                                           Repository            Size
========================================================================================================================
Installing group/module packages:
 buildah                         x86_64    1:1.23.1-2.0.1.module+el8.5.0+20494+0311868c      ol8_appstream        7.9 M
 cockpit-podman                  noarch    39-1.module+el8.5.0+20494+0311868c                ol8_appstream        483 k
 conmon                          x86_64    2:2.0.32-1.module+el8.5.0+20494+0311868c          ol8_appstream         55 k
 container-selinux               noarch    2:2.173.0-1.module+el8.5.0+20494+0311868c         ol8_appstream         57 k
 containernetworking-plugins     x86_64    1.0.1-1.module+el8.5.0+20494+0311868c             ol8_appstream         19 M
 containers-common               noarch    2:1-8.0.1.module+el8.5.0+20494+0311868c           ol8_appstream         62 k
 criu                            x86_64    3.15-3.module+el8.5.0+20416+d687fed7              ol8_appstream        518 k
 crun                            x86_64    1.4.1-1.module+el8.5.0+20494+0311868c             ol8_appstream        205 k
 fuse-overlayfs                  x86_64    1.8-1.module+el8.5.0+20494+0311868c               ol8_appstream         73 k
 libslirp                        x86_64    4.4.0-1.module+el8.5.0+20416+d687fed7             ol8_appstream         70 k
 podman                          x86_64    1:3.4.2-9.0.1.module+el8.5.0+20494+0311868c       ol8_appstream         12 M
 python3-podman                  noarch    3.2.1-1.module+el8.5.0+20494+0311868c             ol8_appstream        148 k
 runc                            x86_64    1.0.3-1.module+el8.5.0+20494+0311868c             ol8_appstream        3.1 M
 skopeo                          x86_64    2:1.5.2-1.0.1.module+el8.5.0+20494+0311868c       ol8_appstream        6.7 M
 slirp4netns                     x86_64    1.1.8-1.module+el8.5.0+20416+d687fed7             ol8_appstream         51 k
 udica                           noarch    0.2.6-1.module+el8.5.0+20494+0311868c             ol8_appstream         48 k
Installing dependencies:
 fuse-common                     x86_64    3.2.1-12.0.3.el8                                  ol8_baseos_latest     22 k
 fuse3                           x86_64    3.2.1-12.0.3.el8                                  ol8_baseos_latest     51 k
 fuse3-libs                      x86_64    3.2.1-12.0.3.el8                                  ol8_baseos_latest     95 k
 libnet                          x86_64    1.1.6-15.el8                                      ol8_appstream         67 k
 podman-catatonit                x86_64    1:3.4.2-9.0.1.module+el8.5.0+20494+0311868c       ol8_appstream        345 k
 policycoreutils-python-utils    noarch    2.9-16.0.1.el8                                    ol8_baseos_latest    252 k
 python3-pytoml                  noarch    0.1.14-5.git7dea353.el8                           ol8_appstream         25 k
 python3-pyxdg                   noarch    0.25-16.el8                                       ol8_appstream         94 k
 yajl                            x86_64    2.1.0-10.el8                                      ol8_appstream         41 k
Installing module profiles:
 container-tools/common                                                                                                
Enabling module streams:
 container-tools                           ol8                                                                         

Transaction Summary
========================================================================================================================
Install  25 Packages

If you like DNS on your container network, install podman-plugins and dnsmasq. This article assumes you do so. The latter of the two needs to be enabled and started:

[opc@podman ~]$ for task in enable start is-active; do sudo systemctl ${task} dnsmasq; done
active

If you see active in the output, as in the example, dnsmasq is working. If your system is part of a more elaborate setup, the use of dnsmasq is discouraged and you should ask your friendly network admin for advice.

Virtual Network Configuration

This section describes setting up a virtual network. That way you are emulating the way you’d previously have worked with Docker. If I should find the time for it I’ll write a second article and introduce you to Podman’s PODs, an elegant concept similar to Kubernetes Pods that is not available with the Docker engine.

Network creation

Before containers can communicate with one another, they need to be told which network to use. The easiest way to do so is by creating a new, custom network as shown in this example:

[opc@podman ~]$ podman network create oranet
/home/opc/.config/cni/net.d/oranet.conflist
[opc@podman ~]$ podman network ls
NETWORK ID    NAME        VERSION     PLUGINS
2f259bab93aa  podman      0.4.0       bridge,portmap,firewall,tuning
4f4bfc6d2c15  oranet      0.4.0       bridge,portmap,firewall,tuning,dnsname
[opc@podman ~]$ 

As you can see the new network – oranet – has been created and it’s capable of using DNS thanks to the dnsname extension. If you opted not to install podman-plugins and dnsmasq this feature won’t be available. Testing showed that availability of DNS on the container network made life a lot easier.
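You can double-check which plugins are attached to the network by looking at the generated CNI configuration file (output abridged, formatting may differ slightly):

[opc@podman ~]$ grep '"type"' ~/.config/cni/net.d/oranet.conflist
      "type": "bridge",
      "type": "portmap",
      "type": "firewall",
      "type": "tuning",
      "type": "dnsname",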

Storage Volumes

Containers are transient by nature; things you store in them are ephemeral by design. Since that’s not ideal for databases, a persistence layer should be used instead. The industry’s best known method to do so is by employing (Podman) volumes. Volumes are created using the podman volume create command, for example:

[opc@podman ~]$ podman volume create oradata
oradata

As is the case with container images, by default all the volume’s data will reside in ~/.local/share/containers.
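Should you ever need to find the volume’s data on the host, podman volume inspect reveals the mount point:

[opc@podman ~]$ podman volume inspect oradata --format '{{ .Mountpoint }}'
/home/opc/.local/share/containers/storage/volumes/oradata/_data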

Database Secrets

The final step while preparing for running a database in Podman is to create a secret. Secrets are a relatively new feature in Podman and relieve you from having to consider workarounds for passing sensitive data to containers. The Oracle XE containers to be used need to be initialised with a DBA password and it is prudent not to pass this in clear text on the command line.

For this example the necessary database password has been created as a secret and stored as oracle-password using podman secret create ...

[opc@podman ~]$ podman secret create oracle-password ~/.passwordFileToBeDeletedAfterUse
0c5d6d9eff16c4d30d36c6133
[opc@podman ~]$ podman secret ls
ID                         NAME             DRIVER      CREATED        UPDATED        
0c5d6d9eff16c4d30d36c6133  oracle-password  file        2 minutes ago  2 minutes ago 

This concludes the necessary preparations.

Let there be Containers

With all the setup completed the next step is to start an Oracle 21c XE instance and build the SQLcl container.

Oracle XE

Using the instructions from Gerald Venzl’s GitHub repository, adapted for this use case, a call to podman run might look like this:

[opc@podman ~]$ podman run --name oracle21xe --secret oracle-password \
-e ORACLE_PASSWORD_FILE=/run/secrets/oracle-password -d \
--net oranet -v oradata:/opt/oracle/oradata \
docker.io/gvenzl/oracle-xe:21-slim
5d94c0c3620f811bbe522273f73cbcb7c5210fecc0f88b0ecacc1f5474c0855a

The necessary flags are as follows:

  • --name assigns a name to the container so you can reference it later
  • --secret passes a named secret to the container, accessible in /run/secrets/oracle-password
  • -d tells the container to run in the background
  • --net defines the network the container should be attached to
  • -v maps the newly created volume to a directory in the container

You can check whether the container is up and running by executing podman ps:

[opc@podman ~]$ podman ps
CONTAINER ID  IMAGE                               COMMAND     CREATED         STATUS             PORTS       NAMES
5d94c0c3620f  docker.io/gvenzl/oracle-xe:21-slim              53 seconds ago  Up 54 seconds ago              oracle21xe

Creating a small SQLcl container:

Creating a container to run sqlcl is really quite straightforward. A suitable Dockerfile is shown here; please ensure you update the ZIPFILE with the current SQLcl release.

FROM docker.io/openjdk:11

RUN useradd --comment "sqlcl owner" --home-dir /home/sqlcl --uid 1000 --create-home --shell $(which bash) sqlcl 

USER sqlcl
WORKDIR /home/sqlcl

ENV ZIPFILE=sqlcl-21.4.1.17.1458.zip

RUN curl -LO "https://download.oracle.com/otn_software/java/sqldeveloper/${ZIPFILE}" && \
        /usr/local/openjdk-11/bin/jar -xf ${ZIPFILE} && \
        rm ${ZIPFILE}

ENTRYPOINT ["bash", "/home/sqlcl/sqlcl/bin/sql", "/nolog"]

You could of course pull the latest sqlcl ZIP from https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip. Using a named release should simplify the non-trivial task of naming ("tagging") your container image.

The image can be built using podman much in the same way Docker images were built:

[opc@podman ~]$ podman build . -t tools/sqlcl:21.4.1.17.1458

As you can see from the ENTRYPOINT the container cannot be sent to the background (-d) by podman; it needs to be run interactively as you will see in the next section.

Linking Containers

The last step is to start the sqlcl container and connect to the database.

podman run --rm -it --name sqlcl --net oranet localhost/tools/sqlcl:21.4.1.17.1458

Here is an example of how this works in my environment:

[opc@podman ~]$ podman run --rm -it --name sqlcl --net oranet localhost/tools/sqlcl:21.4.1.17.1458


SQLcl: Release 21.4 Production on Mon Mar 21 13:35:05 2022

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

SQL> connect system@oracle21xe/xepdb1
Password? (**********?) ***************
Connected.
SQL> show con_name
CON_NAME 
------------------------------
XEPDB1

The connection string consists of a username (system) and the container name assigned as part of the call to podman run ... --name. Thanks to the dnsname extension and linking the container to the oranet network it is possible to address systems by name. XEPDB1 is the default name of the XE instance’s Pluggable Database.

Instead of connecting to a Pluggable Database it is of course possible to connect to the Container Database’s Root (CDB$ROOT).
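A sketch of what that might look like, assuming the default service name XE registered for the container database’s root:

SQL> connect system@oracle21xe/xe
Password? (**********?) ***************
Connected.
SQL> show con_name
CON_NAME 
------------------------------
CDB$ROOT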

Summary

Podman is very compatible with Docker, easing the transition. In this part of the mini-series you could read how to use Podman functionality with Oracle Linux 8 to link a container running Oracle XE and SQLcl.

Learning Kubernetes: persistent storage with Minikube

As part of my talk at (the absolutely amazing) Riga Dev Days 2019 I deployed Oracle Restful Data Services (ORDS) in Minikube as my application’s endpoint. I’ll blog about deploying ORDS 19 in Docker and running it on Kubernetes later, but before I can do so I want to briefly touch on persistent storage in Minikube because I saw it as a pre-requisite.

In a previous post I wrote about using Minikube as a runtime environment for learning Kubernetes. This post uses the same versions as before – Docker 18.09 and Kubernetes 1.14 are invoked by Minikube 1.1.0.

Quick & dirty intro to storage concepts for Kubernetes

Container storage is – at least to my understanding – ephemeral. So, in other words, if you delete the container, and create a new one, locally stored data is gone. In “standalone” Docker, you can use so-called volumes to store things more permanently. A similar concept exists in Kubernetes in form of a persistent volume (PV) and a corresponding persistent volume claim (PVC).

In a nutshell, and greatly simplified, a persistent volume is a piece of persistent storage pre-allocated on a Kubernetes cluster that can be “claimed”. Hence the names …

There is a lot more to say about persistent data in Kubernetes, and others have already done that, so you won’t find an explanation of storage concepts in Kubernetes here. Please head over to the documentation set for more details.

Be careful though, as the use of persistent volumes and persistent volume claims is quite an in-depth topic, especially outside the developer-scoped Minikube. I specifically won’t touch on the subject of persistent storage outside of Minikube, I’d rather save that for a later post.

Persistent Volumes for my configuration files

ORDS requires you to run an installation routine first, in the course of which it will create a directory containing its runtime configuration. The shell script I’m using to initialise my container checks for the existence of the configuration directory and skips the installation step if it finds one. It proceeds straight to starting the Tomcat container. This is primarily done to speed up the startup process. If I were not to use the PV/PVC combination the pods in my deployment would have to run the installation each time they start up, something I wanted to avoid.

A less complex example please

I didn’t want to make things more complicated than necessary, so instead I’ll use a much simpler example before writing up my ORDS deployment. My example is based on the official Ubuntu image; I’m writing “heartbeat” files into the mounted volume to see if I can find them on the underlying infrastructure.

The Minikube documentation informs us that Minikube preserves data in /data and a few other locations. Anything you try to put elsewhere will be lost. With that piece of information at hand I proceeded with the storage creation.

Experienced Minikube users might point out at this stage that pre-creating a volume isn’t needed as Minikube supports dynamic volume creation. From what I can tell that’s correct, but not what I chose to do.

Creating a persistent volume

Based on the previously mentioned documentation I created a persistent volume and accompanying volume claim like this:

$ cat persistent-volumes.yml 

kind: PersistentVolume
apiVersion: v1
metadata:
  name: research-vol
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/research-vol"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: research-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: research-vol
  resources:
    requests:
      storage: 1Gi

This approach is very specific to Minikube because I’m using persistent volumes of type HostPath.

Translated to plain English this means that I’m creating a 2 GB volume pointing to /data/research-vol on my Minikube system. And I’m asking for 1 GB in my persistent volume claim. The access mode (ReadWriteOnce) restricts the volume to being mounted by a single node rather than a single pod, which is why multiple pods on Minikube’s single node can still write to the same PVC, as you can see in a bit… In my opinion the documentation wasn’t particularly clear on the subject.
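Applying the file and verifying the result is done with kubectl; after the apply both objects should show up with a status of Bound:

$ kubectl apply -f persistent-volumes.yml
persistentvolume/research-vol created
persistentvolumeclaim/research-pvc created
$ kubectl get pv,pvc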

Build the Docker image

Before I can deploy my example application to Minikube, I need to build a Docker image first. This is really boring, but I’ll show it here for the sake of completeness:

$ cat Dockerfile 
FROM ubuntu
COPY run.sh /usr/local/bin
RUN chmod +x /usr/local/bin/run.sh
ENTRYPOINT ["/usr/local/bin/run.sh"]

The shell script I’m running is shown here:

 $ cat run.sh 
 #!/usr/bin/env bash

 set -euxo pipefail

 if [[ ! -d $VOLUME ]]; then
   /bin/echo ERR: cannot find the volume $VOLUME, exiting
   exit 1
 fi

 while true; do
   /usr/bin/touch ${VOLUME}/$(hostname)-$(date +%Y%m%d-%H%M%S)
   /bin/sleep 10
 done

As you can see it is just an infinite loop writing heartbeat files into the directory indicated by the VOLUME environment variable. The image is available to Minikube after building it. Refer to my previous post on how to build Docker images for Minikube, or have a look at the documentation. I built the image as “research:v2”.
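In short, the build boils down to pointing the local Docker client at Minikube’s Docker daemon before invoking docker build:

$ eval $(minikube docker-env)
$ docker build -t research:v2 .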

Deploy

With the foundation laid, I can move on to defining the deployment. I’ll do that again with the help of a YAML file:

 $ cat research-deployment.yml 
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   labels:
     app: research
   name: research
 spec:
   replicas: 2
   selector:
     matchLabels:
       app: research
   template:
     metadata:
       labels:
         app: research
     spec:
       containers:
       - image: research:v2
         env:
         - name: VOLUME
           value: "/var/log/heartbeat"
         name: research
         volumeMounts:
         - mountPath: "/var/log/heartbeat"
           name: research-vol
       volumes:
       - name: research-vol
         persistentVolumeClaim: 
           claimName: research-pvc
       restartPolicy: Always

If you haven’t previously worked with Kubernetes this might look daunting, but it isn’t actually. I’m asking for the deployment of 2 copies (“replicas”) of my research:v2 image. I am passing an environment variable – VOLUME – to the container image. It contains the path to the persistent volume’s mount point. I’m also mounting a PV named research-vol as /var/log/heartbeat/ in each container. This volume in the container scope is based on the definition found in the volumes: section of the YAML file. It’s important to match the information in volumes and volumeMounts.

Running

After a quick kubectl apply -f research-deployment.yml I have 2 pods happily writing to the persistent volume.

 $ kubectl get pods
 NAME                       READY   STATUS    RESTARTS   AGE
 research-f6668c975-98684   1/1     Running   0          6m22s
 research-f6668c975-z6pll   1/1     Running   0          6m22s

The directory exists as specified on the Minikube system. If you used the VirtualBox driver, use minikube ssh to access the VM and change to the PV’s directory:

# pwd
/data/research-vol
# ls -l *research-f6668c975-z6pll* | wc -l
53
# ls -l *research-f6668c975-98684* | wc -l
53

As you can see, both pods are able to write to the directory. The volume’s contents were preserved when I deleted the deployment:

$ kubectl delete deploy/research
deployment.extensions "research" deleted
$ kubectl get pods
No resources found.

This demonstrates that both pods from my deployment are gone. What does it look like back on the Minikube VM?

# ls -l *research-f6668c975-z6pll* | wc -l
63
# ls -l *research-f6668c975-98684* | wc -l
63

You can see that files are still present. The persistent volume lives up to its name.

Next I wanted to see if (new) Pods pick up what’s in the PV. After a short modification to the shell script and a quick docker build followed by a modification of the deployment to use research:v3, the container prints the number of files:

#!/usr/bin/env bash

set -euxo pipefail

if [[ ! -d $VOLUME ]]; then
  /bin/echo ERR: cannot find the volume $VOLUME, exiting
  exit 1
fi

/bin/echo INFO: found $(ls -l $VOLUME/* | wc -l) files in the volume

while true; do
  /usr/bin/touch ${VOLUME}/$(hostname)-$(date +%Y%m%d-%H%M%S)
  /bin/sleep 10
done

As proven by the logs:

 $ kubectl get pods
 NAME                        READY   STATUS        RESTARTS   AGE
 research-556cc9989c-bwzkg   1/1     Running       0          13s
 research-556cc9989c-chg76   1/1     Running       0          13s

 $ kubectl logs research-556cc9989c-bwzkg
 [[ ! -d /var/log/heartbeat ]]
 ++ wc -l
 ++ ls -l /var/log/heartbeat/research-7d4c7c8dc8-5xrtr-20190606-105014 
 ( a lot of output not shown )
 /bin/echo INFO: found 166 files in the volume
 INFO: found 166 files in the volume
 true
 ++ hostname
 ++ date +%Y%m%d-%H%M%S
 /usr/bin/touch /var/log/heartbeat/research-556cc9989c-bwzkg-20190606-105331
 /bin/sleep 10
 true
 ++ hostname
 ++ date +%Y%m%d-%H%M%S
 /usr/bin/touch /var/log/heartbeat/research-556cc9989c-bwzkg-20190606-105341
 /bin/sleep 10 

It appears as if I successfully created and used persistent volumes in Minikube.

Learning about Kubernetes: JDBC database connectivity to an Oracle database

In the past few months I have spent some time trying to better understand Kubernetes and how application developers can make use of it in the context of the Oracle database. In this post I’m sharing what I learned along that way. Please be careful: this is very much a moving target, and I wouldn’t call myself an expert in the field. If you find anything in this post that could be done differently/better, please let me know!

By the way, I am going to put something similar together where Oracle Restful Data Services (ORDS) will provide a different, more popular yet potentially more difficult-to-get-right connection method.

My Motivation for this post

The question I wanted to answer to myself was: can I deploy an application into Kubernetes that uses JDBC to connect to Oracle? The key design decision I made was to have a database outside the Kubernetes cluster. From what I read on blogs and social media it appears to be the consensus at the time of writing to keep state out of Kubernetes. I am curious to see if this is going to change in the future…

My environment

As this whole subject of container-based deployment and orchestration is developing at an incredible pace, here’s the specification of my lab environment used for this little experiment so I remember which components I had in use in case something breaks later…

  • Ubuntu 18.04 LTS serves as my host operating environment
  • Local installation of kubectl v1.14.1
  • I am using minikube version: v1.0.1 and it pulled the following default components:
    • Container runtime: docker 18.06.3-ce
    • kubernetes 1.14.1 (kubeadm v1.14.1, kubelet v1.14.1)
  • The database service is provided by Oracle XE (oracle-database-xe-18c-1.0-1.x86_64)
    • Oracle XE 18.4.0 to provide data persistence in a VM (“oraclexe.example.com”)
    • The Pluggable Database (PDB) name is XEPDB1
    • I am using an unprivileged user to connect

I would like to take the opportunity again to point out that the information you see in this post is my transcript of how I built this proof-of-concept, can-I-even-do-it application. It’s by no means meant to be anything other than that. There are a lot more bells and whistles to be attached to this in order to make it remotely ready for more serious use. But it was super fun putting this all together which is why I’m sharing this anyway.

Preparing Minikube

Since I’m only getting started with Kubernetes I didn’t want to set up a multi-node Kubernetes cluster in the cloud, and instead went with Minikube just to see if my ideas pan out ok or not. As I understand it, Minikube is a development/playground Kubernetes environment and it’s explicitly stated not to be suitable for production. It does seem to give me all that I need to work out my ideas. Since I’ve been using Virtualbox for a long time I went with the Minikube Virtualbox provider and all the defaults. It isn’t too hard to get this to work; I followed the documentation (see earlier link) on how to install kubectl (the main Kubernetes command line tool) and Minikube.

Once kubectl and Minikube were installed, I started the playground environment. On a modern terminal this prints some very fancy characters in the output as you can see:

$ minikube start
😄  minikube v1.0.1 on linux (amd64)
💿  Downloading Minikube ISO ...
 142.88 MB / 142.88 MB [============================================] 100.00% 0s
🤹  Downloading Kubernetes v1.14.1 images in the background ...
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.06.3-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
💾  Downloading kubeadm v1.14.1
💾  Downloading kubelet v1.14.1
🚜  Pulling images required by Kubernetes v1.14.1 ...
🚀  Launching Kubernetes v1.14.1 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑  Configuring cluster permissions ...
🤔  Verifying component health .....
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!

This gives me a fully working Kubernetes lab environment. Next I need an application! My plan is to write the world’s most basic JDBC application connecting to the database and return success or failure. The application will be deployed into Kubernetes, the database will remain outside the cluster as you just read.

Let’s start with the database

I needed a database to connect to, obviously. This was super easy: create a VM, install the Oracle XE RPM for version 18c, and create a database. I started the database and listener to enable external connectivity. The external IP address to which the listener is bound is 192.168.99.113. This is unusual for Virtualbox as its internal network defaults to 192.168.56.0/24. Minikube started on 192.168.99.0/24 though and I had to make an adjustment to allow both VMs to communicate without having to mess with advanced routing rules.

Within XEPDB1 (which appears to be the default PDB in XE) I created a user named martin with a super secret password, granted him the connect role and left it at that. This account will later be used by my application.
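For the record, creating such an account is only a couple of statements. A sketch, run on the database VM as the oracle software owner; the password is a placeholder:

[oracle@oraclexe ~]$ sqlplus -s / as sysdba <<'EOF'
alter session set container = XEPDB1;
create user martin identified by "aSuperSecretPassword";
grant connect to martin;
EOF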

Developing a super simple Java app to test connectivity

Modern applications are most often based on some sort of web technology, so I decided to go with that, too. I decided to write a very simple Java Servlet in which I connect to the database and print some status messages along the way. It’s a bit old-fashioned but makes it easy to get the job done.

I ended up using Tomcat 8.5 as the runtime environment, primarily because I have a working demo based on it and I wanted to save a little bit of time. I am implementing a Universal Connection Pool (UCP) as a JNDI resource in Tomcat for my application. The Servlet implements doGet(), and that’s where most of the action is going to take place. For anything but this playground environment I would write factory methods to handle all interactions with UCP and a bunch of other convenience methods. Since this is going to be a simple demo I decided to keep it short without confusing readers with 42 different Java classes. The important part of the Servlet code is shown here:

	protected void doGet(HttpServletRequest request, HttpServletResponse response)
          throws ServletException, IOException {

		PrintWriter out = response.getWriter();
		response.setContentType("text/html");
		
		out.println("<html><head><title>JDBC connection test</title></head>");
		
		out.println("<body>");
		
		out.println("<h1>Progress report</h1>");
		out.println("<ul>");
		
		out.println("<li>Trying to look up the JNDI data source ... ");
		PoolDataSource pds = null;
		Connection con = null;
		DataSource ds = null;
		PreparedStatement pstmt = null;
		
		// JNDI resource look-up
		Context envContext = null;
		Context ctx;
		try {
			ctx = new InitialContext();
			envContext = (Context) ctx.lookup("java:/comp/env");

			ds = (javax.sql.DataSource) envContext.lookup("jdbc/UCPTest");
		} catch (NamingException e) {
			
			out.println("error looking up jdbc/UCPTest: " + e.getMessage());
			e.printStackTrace();
			return;
		}
		
		pds = ((PoolDataSource) ds);
		
		out.println("Data Source successfully acquired</li>");
		out.println("<li>Trying to get a connection to the database ... ");
		
		try {
			con = pds.getConnection();
			
			// double-checking the connection although that shouldn't be necessary
			// due to the "validateConnectionOnBorrow flag in context.xml
			if (con == null || !((ValidConnection)con).isValid()) {
				out.println("Error: connection is either null or otherwise invalid</li>");
			}
			
			out.println(" successfully grabbed a connection from the pool</li>");
			out.println("<li>Trying run a query against the database ... ");
		
			pstmt = con.prepareStatement("select sys_context('userenv','db_name') from dual");
			ResultSet rs = pstmt.executeQuery();
			
			while (rs.next()) {
				out.println(" query against " + rs.getString(1) + " completed successfully</li>");
			} 
			
			// clean up
			rs.close();
			pstmt.close();
			con.close();
			con = null;
			
			out.println("<li>No errors encountered, we are done</li>");
			out.println("</ul>");
			
		} catch (SQLException e) {
			out.println("</ul>");
			out.println("<h2>A generic error was encountered. </h2>");
			out.println("<p>Error message:</p> <pre> " + e.getMessage() + "</pre>");
		}

		out.println("</body></html>");
	}

The context.xml file describing my JNDI data source goes into META-INF/context.xml in the WAR file.

<?xml version="1.0" encoding="UTF-8"?>

<Context>
	<Resource name="jdbc/UCPTest" 
		auth="Container"
		factory="oracle.ucp.jdbc.PoolDataSourceImpl"
		type="oracle.ucp.jdbc.PoolDataSource"
		description="UCP JNDI Connection Pool"
		connectionFactoryClassName="oracle.jdbc.pool.OracleDataSource"
		initialPoolSize="10" 
		minPoolSize="10" 
		maxPoolSize="10"
		inactiveConnectionTimeout="20" 
		user="martin" 
		password="..."
		url="jdbc:oracle:thin:@//oracle-dev/XEPDB1"
		connectionPoolName="UCPTest" 
		validateConnectionOnBorrow="true"
		fastConnectionFailoverEnabled="false" />
</Context>

Note the URL: it references oracle-dev as part of the JDBC connection string. That’s not the DNS name of my VM, and I’ll get back to that later.

As always, I debugged/compiled/tested the code until I was confident it worked. Once it did, I created a WAR file named servletTest.war for use with Tomcat.

Services

In a microservice-driven world it is very important NOT to hard-code IP addresses or machine names. In the age of on-premises deployments it was very common to hard code machine names and/or IP addresses into applications. Needless to say that … wasn’t ideal.

In Kubernetes you expose your deployments as services to the outside world to enable connectivity. With the database outside of Kubernetes in its own VM – which appears to be the preferred way of doing this at the time of writing – I need to expose it to the cluster as a service without selectors or a cluster IP. A service of type ExternalName appears to be the right choice. The YAML file defining the service is shown here:

$ cat service.yml 
---
kind: Service
apiVersion: v1
metadata:
  name: oracle-dev
spec:
  type: ExternalName
  externalName: oraclexe.example.com

If you scroll up you will notice that this service’s name matches the JDBC URL in the context.xml file shown earlier. The name is totally arbitrary; I went with oracle-dev to indicate this is a playground environment. The identifier listed as external name must of course resolve to the appropriate machine:

$ ping -c 1 oraclexe.example.com 
PING oraclexe.example.com (192.168.99.113) 56(84) bytes of data.
64 bytes from oraclexe.example.com (192.168.99.113): icmp_seq=1 ttl=64 time=0.290 ms

--- oraclexe.example.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms

A quick kubectl apply later the service is defined in Minikube:
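For completeness, the apply command referencing the file shown above:

$ kubectl apply -f service.yml
service/oracle-dev created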

$ kubectl get service/oracle-dev -o wide
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP            PORT(S)   AGE   SELECTOR
oracle-dev   ExternalName   <none>       oraclexe.example.com   <none>    14m   <none>

That should be sorted! Whenever applications in the Kubernetes cluster refer to oracle-dev, they are pointed to oraclexe.example.com, my database VM.

Docker all the things: the application

With the WAR file and a service I could use for the JDBC connection pool ready, I could start thinking about deploying the application.

Create the Docker image

The first step is to package the application in Docker. To do so I used the official Tomcat Docker image. The Dockerfile is trivial: specify tomcat:8.5-slim and copy the WAR file into /usr/local/tomcat/webapps. That’s it! The CMD directive invokes “catalina.sh run” to start the servlet engine. This is the output created by “docker build”:

$ docker build -t servlettest:v1 .
Sending build context to Docker daemon  10.77MB
Step 1/3 : FROM tomcat:8.5-slim
8.5-slim: Pulling from library/tomcat
27833a3ba0a5: Pull complete 
16d944e3d00d: Pull complete 
9019de9fce5f: Pull complete 
9b053055f644: Pull complete 
110ab0e6e34d: Pull complete 
54976ba77289: Pull complete 
0afe340b9ec5: Pull complete 
6ddc5164be39: Pull complete 
c622a1870b10: Pull complete 
Digest: sha256:7c1ed9c730a6b537334fbe1081500474ca54e340174661b8e5c0251057bc4895
Status: Downloaded newer image for tomcat:8.5-slim
 ---> 78e1f34f235b
Step 2/3 : COPY servletTest.war /usr/local/tomcat/webapps/
 ---> c5b9afe06265
Step 3/3 : CMD ["catalina.sh", "run"]
 ---> Running in 2cd82f7358d3
Removing intermediate container 2cd82f7358d3
 ---> 342a8a2a31d3
Successfully built 342a8a2a31d3
Successfully tagged servlettest:v1
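For reference, the Dockerfile behind those three build steps is as trivial as promised; reconstructed from the output above it looks like this:

$ cat Dockerfile
FROM tomcat:8.5-slim

COPY servletTest.war /usr/local/tomcat/webapps/

CMD ["catalina.sh", "run"]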

Once the docker image is built (remember to use eval $(minikube docker-env) before you run docker build to make the image available to Minikube) you can reference it in Kubernetes.

Deploying the application

The next step is to run the application. I have previously used “kubectl run” but since that’s deprecated I switched to creating deployments as YAML files. I dumped the deployment generated by “kubectl run” into a YML file and used the documentation to adjust it to what I need. The result is shown here:

$ cat deployment.yml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: servlettest
  name: servlettest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servlettest
  template:
    metadata:
      labels:
        app: servlettest
    spec:
      containers:
      - image: servlettest:v1
        imagePullPolicy: IfNotPresent
        name: servlettest
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always

The application is deployed into Kubernetes using “kubectl apply -f deployment.yml”. A few moments later I can see it appearing:

$ kubectl get deploy/servlettest -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS    IMAGES           SELECTOR
servlettest   1/1     1            1           7m23s   servlettest   servlettest:v1   app=servlettest

I’m delighted to see 1/1 pods ready to serve the application! It looks like everything went to plan. I can see the pod as well:

$ kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
servlettest-6c6598b487-wssp6   1/1     Running   0          8m12s   172.17.0.4   minikube   <none>           <none>

Accessing the application

I now have an application running in Kubernetes! Umm, but what next? I haven’t been able to get access to the application yet. To do so, I need to expose another service. The Minikube quick start suggests exposing the application service via a NodePort, and that seems to be the quickest way. In a “proper Kubernetes environment” in the cloud you’d most likely go for a service of type LoadBalancer to expose applications to the public.

So far I have only shown you YAML files to configure Kubernetes resources, but many things can be done via kubectl directly. For example, it’s possible to expose the deployment I just created as a service on the command line:

$ kubectl expose deployment servlettest --type NodePort
service/servlettest exposed

$ kubectl get service/servlettest -o wide
NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
servlettest   NodePort   10.100.249.181   <none>        8080:31030/TCP   71s   app=servlettest

One of the neat things with Minikube is that you don’t need to worry too much about how to actually access that service: the minikube service command provides the URL for the service, and it even opens it in the default browser if you so want. I’ll see if I can get the ingress add-on to work as well, but that should probably go into its own post…

In my case I’m only interested in the URL because I want a textual representation of the output:

$ minikube service servlettest --url
http://192.168.99.100:31030

$ lynx --dump http://192.168.99.100:31030/servletTest/JDBCTestServlet
                                Progress report

     * Trying to look up the JNDI data source ... Data Source successfully
       acquired
     * Trying to get a connection to the database ... successfully grabbed
       a connection from the pool
     * Trying run a query against the database ... query against XE
       completed successfully
     * No errors encountered, we are done

The URL I am passing to lynx is made up of the IP and node port (192.168.99.100:31030) as well as the servlet path based on the WAR file’s name.

So that’s it! I have a working application deployed into Minikube, using a service of type “ExternalName” to query my database. Although these steps have been created using Minikube, they shouldn’t be different on a “real” Kubernetes environment.

What about the database?

The JNDI resource definition for my connection pool requested 10 sessions, and they have been dutifully created:

SQL> select username, machine from v$session where username = 'MARTIN';

USERNAME                       MACHINE
------------------------------ ----------------------------------------
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6

10 rows selected.

The “machine” matches the pod name as shown in kubectl output:

$ kubectl get pods -l app=servlettest -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
servlettest-6c6598b487-wssp6   1/1     Running   0          30m   172.17.0.4   minikube   <none>           <none>

Well I guess you can actually deploy the world’s most basic JDBC application into Kubernetes. Whether this works for you is an entirely different matter. As many much cleverer people than me have been pointing out for some time: just because a technology is “hot” doesn’t mean it’s the best tool for the job. It’s always worthwhile weighing up pros and cons for your solution carefully, and just because something can be done (technically speaking) doesn’t mean it should be done.