Learning Kubernetes: persistent storage with Minikube

As part of my talk at (the absolutely amazing) Riga Dev Days 2019 I deployed Oracle REST Data Services (ORDS) in Minikube as my application’s endpoint. I’ll blog about deploying ORDS 19 in Docker and running it on Kubernetes later, but before I can do so I want to briefly touch on persistent storage in Minikube because I saw it as a prerequisite.

In a previous post I wrote about using Minikube as a runtime environment for learning Kubernetes. This post uses the same versions as before – Docker 18.09 and Kubernetes 1.14 are invoked by Minikube 1.1.0.

Quick & dirty intro to storage concepts for Kubernetes

Container storage is – at least to my understanding – ephemeral. In other words, if you delete a container and create a new one, locally stored data is gone. In “standalone” Docker, you can use so-called volumes to store things more permanently. A similar concept exists in Kubernetes in the form of a persistent volume (PV) and a corresponding persistent volume claim (PVC).

In a nutshell, and greatly simplified, a persistent volume is a piece of persistent storage pre-allocated on a Kubernetes cluster that can be “claimed”. Hence the names …

There is a lot more to say about persistent data in Kubernetes, and others have already done that, so you won’t find an explanation of storage concepts in Kubernetes here. Please head over to the documentation set for more details.

Be careful though, as the use of persistent volumes and persistent volume claims is quite an in-depth topic, especially outside the developer-scoped Minikube. I specifically won’t touch on the subject of persistent storage outside of Minikube, I’d rather save that for a later post.

Persistent Volumes for my configuration files

ORDS requires you to run an installation routine first, in the course of which it creates a directory containing its runtime configuration. The shell script I’m using to initialise my container checks for the existence of the configuration directory and skips the installation step if it finds one, proceeding straight to starting Tomcat. This is primarily done to speed up the startup process. If I were not to use the PV/PVC combination, the pods in my deployment would have to run the installation each time they start up, something I wanted to avoid.
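
The gist of that logic is sketched below. The configuration path and the install command are placeholders rather than a copy of my actual script; the point is simply “install once, then just start Tomcat”:

#!/usr/bin/env bash

# sketch only: ORDS_CONFIG and the install command below are placeholders
ORDS_CONFIG=/opt/oracle/ords/config

if [[ ! -d "${ORDS_CONFIG}" ]]; then
  echo "INFO: no configuration directory found, running the ORDS installation"
  # e.g. java -jar ords.war install ... (placeholder for the silent installation)
else
  echo "INFO: found existing configuration in ${ORDS_CONFIG}, skipping installation"
fi

# in either case, hand over to Tomcat in the foreground
catalina.sh run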

A less complex example please

I didn’t want to make things more complicated than necessary, so I’ll use a much simpler example before writing up my ORDS deployment. My example is based on the official Ubuntu image: I’ll write “heartbeat” files into the mounted volume and then check whether I can find them on the underlying infrastructure.

The Minikube documentation informs us that Minikube preserves data in /data and a few other locations. Anything you try to put elsewhere will be lost. With that piece of information at hand I proceeded with the storage creation.

Experienced Minikube users might point out at this stage that pre-creating a volume isn’t needed as Minikube supports dynamic volume creation. From what I can tell that’s correct, but not what I chose to do.
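
For completeness, a dynamically provisioned claim would look roughly like the sketch below. It assumes Minikube’s default storage class (named “standard” and backed by the minikube-hostpath provisioner, as far as I can tell) is enabled; no PersistentVolume needs to be created up front:

$ kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: research-dynamic-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
EOF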

Creating a persistent volume

Based on the previously mentioned documentation I created a persistent volume and accompanying volume claim like this:

$ cat persistent-volumes.yml 

kind: PersistentVolume
apiVersion: v1
metadata:
  name: research-vol
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/research-vol"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: research-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: research-vol
  resources:
    requests:
      storage: 1Gi

This approach is very specific to Minikube because I’m using a persistent volume of type hostPath.

Translated to plain English this means that I’m creating a 2 GB volume pointing to /data/research-vol on my Minikube system, and I’m asking for 1 GB of it in my persistent volume claim. The access mode (ReadWriteOnce) restricts read-write mounting to a single node rather than a single pod, which is why I could happily write to the same PVC from multiple pods running on the one Minikube node, as you’ll see in a bit. In my opinion the documentation wasn’t particularly clear on that distinction.
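
Creating and verifying both objects is then a matter of the following (using the file name shown above); the claim should report a STATUS of Bound once it has been matched to the volume:

$ kubectl apply -f persistent-volumes.yml
$ kubectl get pv research-vol
$ kubectl get pvc research-pvc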

Build the Docker image

Before I can deploy my example application to Minikube, I need to build a Docker image first. This is really boring, but I’ll show it here for the sake of completeness:

$ cat Dockerfile 
FROM ubuntu
COPY run.sh /usr/local/bin
RUN chmod +x /usr/local/bin/run.sh
ENTRYPOINT ["/usr/local/bin/run.sh"]

The shell script I’m running is shown here:

 $ cat run.sh 
 #!/usr/bin/env bash

 set -euxo pipefail

 if [[ ! -d $VOLUME ]]; then
   /bin/echo ERR: cannot find the volume $VOLUME, exiting
   exit 1
 fi

 while true; do
   /usr/bin/touch ${VOLUME}/$(hostname)-$(date +%Y%m%d-%H%M%S)
   /bin/sleep 10
 done

As you can see it is just an infinite loop writing heartbeat files into the directory indicated by the VOLUME environment variable. The image is available to Minikube after building it. Refer to my previous post on how to build Docker images for Minikube, or have a look at the documentation. I built the image as “research:v2”.
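
For reference, the build boils down to two commands, executed in the directory containing the Dockerfile and run.sh; the eval line points the docker client at Minikube’s Docker daemon:

$ eval $(minikube docker-env)
$ docker build -t research:v2 .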

Deploy

With the foundation laid, I can move on to defining the deployment. I’ll do that again with the help of a YAML file:

 $ cat research-deployment.yml 
 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   labels:
     app: research
   name: research
 spec:
   replicas: 2
   selector:
     matchLabels:
       app: research
   template:
     metadata:
       labels:
         app: research
     spec:
       containers:
       - image: research:v2
         env:
         - name: VOLUME
           value: "/var/log/heartbeat"
         name: research
         volumeMounts:
         - mountPath: "/var/log/heartbeat"
           name: research-vol
       volumes:
       - name: research-vol
         persistentVolumeClaim: 
           claimName: research-pvc
       restartPolicy: Always

If you haven’t previously worked with Kubernetes this might look daunting, but it actually isn’t. I’m asking for the deployment of 2 copies (“replicas”) of my research:v2 image. I am passing an environment variable – VOLUME – to the container image. It contains the path to the persistent volume’s mount point. I’m also mounting a PV named research-vol as /var/log/heartbeat/ in each container. This volume, in the container’s scope, is based on the definition found in the volumes: section of the YAML file. It’s important that the information in the volumes and volumeMounts sections matches.

Running

After a quick kubectl apply -f research-deployment.yml I have 2 pods happily writing to the persistent volume.

 $ kubectl get pods
 NAME                       READY   STATUS    RESTARTS   AGE
 research-f6668c975-98684   1/1     Running   0          6m22s
 research-f6668c975-z6pll   1/1     Running   0          6m22s

The directory exists as specified on the Minikube system. If you used the VirtualBox driver, use minikube ssh to access the VM and change to the PV’s directory:

# pwd
/data/research-vol
# ls -l *research-f6668c975-z6pll* | wc -l
53
# ls -l *research-f6668c975-98684* | wc -l
53

As you can see, both pods are able to write to the directory. The volume’s contents were preserved when I deleted the deployment:

$ kubectl delete deploy/research
deployment.extensions "research" deleted
$ kubectl get pods
No resources found.

This demonstrates that both pods from my deployment are gone. What does it look like back on the Minikube VM?

# ls -l *research-f6668c975-z6pll* | wc -l
63
# ls -l *research-f6668c975-98684* | wc -l
63

You can see that files are still present. The persistent volume lives up to its name.

Next I wanted to see if (new) Pods pick up what’s in the PV. After a short modification to the shell script and a quick docker build followed by a modification of the deployment to use research:v3, the container prints the number of files:

#!/usr/bin/env bash

set -euxo pipefail

if [[ ! -d $VOLUME ]]; then
  /bin/echo ERR: cannot find the volume $VOLUME, exiting
  exit 1
fi

/bin/echo INFO: found $(ls -l $VOLUME/* | wc -l) files in the volume

while true; do
  /usr/bin/touch ${VOLUME}/$(hostname)-$(date +%Y%m%d-%H%M%S)
  /bin/sleep 10
done
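
For the record, the rebuild and rollout looked roughly like this, with the shell still pointed at Minikube’s Docker daemon via eval $(minikube docker-env). Using kubectl set image is just one way of doing it; editing the deployment YAML and re-applying it works equally well:

$ docker build -t research:v3 .
$ kubectl set image deployment/research research=research:v3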

As proven by the logs:

 $ kubectl get pods
 NAME                        READY   STATUS        RESTARTS   AGE
 research-556cc9989c-bwzkg   1/1     Running       0          13s
 research-556cc9989c-chg76   1/1     Running       0          13s

 $ kubectl logs research-556cc9989c-bwzkg
 [[ ! -d /var/log/heartbeat ]]
 ++ wc -l
 ++ ls -l /var/log/heartbeat/research-7d4c7c8dc8-5xrtr-20190606-105014 
 ( a lot of output not shown )
 /bin/echo INFO: found 166 files in the volume
 INFO: found 166 files in the volume
 true
 ++ hostname
 ++ date +%Y%m%d-%H%M%S
 /usr/bin/touch /var/log/heartbeat/research-556cc9989c-bwzkg-20190606-105331
 /bin/sleep 10
 true
 ++ hostname
 ++ date +%Y%m%d-%H%M%S
 /usr/bin/touch /var/log/heartbeat/research-556cc9989c-bwzkg-20190606-105341
 /bin/sleep 10 

It appears as if I successfully created and used persistent volumes in Minikube.


Learning about Kubernetes: JDBC database connectivity to an Oracle database

In the past few months I have spent some time trying to better understand Kubernetes and how application developers can make use of it in the context of the Oracle database. In this post I’m sharing what I learned along the way. Please be careful: this is very much a moving target, and I wouldn’t call myself an expert in the field. If you find anything in this post that could be done differently/better, please let me know!

By the way, I am going to put something similar together where Oracle REST Data Services (ORDS) will provide a different, more popular, yet potentially more difficult-to-get-right connection method.

My Motivation for this post

The question I wanted to answer for myself was: can I deploy an application into Kubernetes that uses JDBC to connect to Oracle? The key design decision I made was to have the database outside the Kubernetes cluster. From what I read on blogs and social media, the consensus at the time of writing appears to be to keep state out of Kubernetes. I am curious to see if this is going to change in the future…

My environment

As this whole subject of container-based deployment and orchestration is developing at an incredible pace, here’s the specification of my lab environment used for this little experiment so I remember which components I had in use in case something breaks later…

  • Ubuntu 18.04 LTS serves as my host operating environment
  • Local installation of kubectl v1.14.1
  • I am using minikube version: v1.0.1 and it pulled the following default components:
    • Container runtime: docker 18.06.3-ce
    • kubernetes 1.14.1 (kubeadm v1.14.1, kubelet v1.14.1)
  • The database service is provided by Oracle XE (oracle-database-xe-18c-1.0-1.x86_64)
    • Oracle XE 18.4.0 to provide data persistence in a VM (“oraclexe.example.com”)
    • The Pluggable Database (PDB) name is XEPDB1
    • I am using an unprivileged user to connect

I would like to take the opportunity again to point out that the information you see in this post is my transcript of how I built this proof-of-concept, can-I-even-do-it application. It’s by no means meant to be anything other than that. There are a lot more bells and whistles to be attached to this in order to make it remotely ready for more serious use. But it was super fun putting this all together which is why I’m sharing this anyway.

Preparing Minikube

Since I’m only getting started with Kubernetes I didn’t want to set up a multi-node Kubernetes cluster in the cloud, and instead went with Minikube just to see if my ideas pan out or not. As I understand it, Minikube is a development/playground Kubernetes environment that is explicitly stated not to be suitable for production. It does seem to give me all that I need to work out my ideas. Since I’ve been using VirtualBox for a long time I went with the Minikube VirtualBox provider and all the defaults. It isn’t too hard to get this to work: I followed the documentation (see earlier link) on how to install kubectl (the main Kubernetes command line tool) and Minikube.
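
For reference, the installation boiled down to downloading two static binaries, roughly as shown below. Treat the URLs and version numbers as an approximation of what the documentation suggested at the time, and check the current instructions rather than copying them verbatim:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubectl
$ chmod +x kubectl && sudo mv kubectl /usr/local/bin/
$ curl -LO https://storage.googleapis.com/minikube/releases/v1.0.1/minikube-linux-amd64
$ chmod +x minikube-linux-amd64 && sudo mv minikube-linux-amd64 /usr/local/bin/minikube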

Once kubectl and Minikube were installed, I started the playground environment. On a modern terminal this prints some very fancy characters in the output as you can see:

$ minikube start
😄  minikube v1.0.1 on linux (amd64)
💿  Downloading Minikube ISO ...
 142.88 MB / 142.88 MB [============================================] 100.00% 0s
🤹  Downloading Kubernetes v1.14.1 images in the background ...
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.06.3-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
💾  Downloading kubeadm v1.14.1
💾  Downloading kubelet v1.14.1
🚜  Pulling images required by Kubernetes v1.14.1 ...
🚀  Launching Kubernetes v1.14.1 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑  Configuring cluster permissions ...
🤔  Verifying component health .....
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!

This gives me a fully working Kubernetes lab environment. Next I need an application! My plan is to write the world’s most basic JDBC application connecting to the database and return success or failure. The application will be deployed into Kubernetes, the database will remain outside the cluster as you just read.

Let’s start with the database

I needed a database to connect to, obviously. This was super easy: create a VM, install the Oracle XE RPM for version 18c, and create a database. I started the database and listener to enable external connectivity. The external IP address to which the listener is bound is 192.168.99.113. This is unusual for VirtualBox as its host-only network defaults to 192.168.56.0/24. Minikube started on 192.168.99.0/24 though, and I had to make an adjustment to allow both VMs to communicate without having to mess with advanced routing rules.
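
On the database VM that boiled down to something like the following sketch (Oracle Linux 7 assumed; the preinstall package and configure script are as I remember them from the XE 18c installation guide, so double-check against the documentation):

$ sudo yum -y install oracle-database-preinstall-18c
$ sudo yum -y localinstall oracle-database-xe-18c-1.0-1.x86_64.rpm
$ sudo /etc/init.d/oracle-xe-18c configure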

Within XEPDB1 (which appears to be the default PDB in XE) I created a user named martin with a super secret password, granted him the connect role and left it at that. This account will later be used by my application.
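
Creating that account is essentially a three-liner in SQL*Plus, roughly as follows, run on the database VM as the oracle software owner with the XE environment set; the password is of course a placeholder:

$ sqlplus / as sysdba

SQL> alter session set container = XEPDB1;
SQL> create user martin identified by "NotTheRealPassword";
SQL> grant connect to martin;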

Developing a super simple Java app to test connectivity

Modern applications are most often based on some sort of web technology, so I decided to go with that, too. I decided to write a very simple Java Servlet in which I connect to the database and print some status messages along the way. It’s a bit old-fashioned but makes it easy to get the job done.

I ended up using Tomcat 8.5 as the runtime environment, primarily because I have a working demo based on it and I wanted to save a little bit of time. I am implementing a Universal Connection Pool (UCP) as a JNDI resource in Tomcat for my application. The Servlet implements doGet(), and that’s where most of the action is going to take place. For anything but this playground environment I would write factory methods to handle all interactions with UCP and a bunch of other convenience methods. Since this is going to be a simple demo I decided to keep it short without confusing readers with 42 different Java classes. The important part of the Servlet code is shown here:

	protected void doGet(HttpServletRequest request, HttpServletResponse response)
          throws ServletException, IOException {

		PrintWriter out = response.getWriter();
		response.setContentType("text/html");
		
		out.println("<html><head><title>JDBC connection test</title></head>");
		
		out.println("<body>");
		
		out.println("<h1>Progress report</h1>");
		out.println("<ul>");
		
		out.println("<li>Trying to look up the JNDI data source ... ");
		PoolDataSource pds = null;
		Connection con = null;
		DataSource ds = null;
		PreparedStatement pstmt = null;
		
		// JNDI resource look-up
		Context envContext = null;
		Context ctx;
		try {
			ctx = new InitialContext();
			envContext = (Context) ctx.lookup("java:/comp/env");

			ds = (javax.sql.DataSource) envContext.lookup("jdbc/UCPTest");
		} catch (NamingException e) {
			
			out.println("error looking up jdbc/UCPTest: " + e.getMessage());
			e.printStackTrace();
			return;
		}
		
		pds = ((PoolDataSource) ds);
		
		out.println("Data Source successfully acquired</li>");
		out.println("<li>Trying to get a connection to the database ... ");
		
		try {
			con = pds.getConnection();
			
			// double-checking the connection although that shouldn't be necessary
			// due to the "validateConnectionOnBorrow" flag in context.xml
			if (con == null || !((ValidConnection)con).isValid()) {
				out.println("Error: connection is either null or otherwise invalid</li>");
			}
			
			out.println(" successfully grabbed a connection from the pool</li>");
			out.println("<li>Trying run a query against the database ... ");
		
			pstmt = con.prepareStatement("select sys_context('userenv','db_name') from dual");
			ResultSet rs = pstmt.executeQuery();
			
			while (rs.next()) {
				out.println(" query against " + rs.getString(1) + " completed successfully</li>");
			} 
			
			// clean up
			rs.close();
			pstmt.close();
			con.close();
			con = null;
			
			out.println("<li>No errors encountered, we are done</li>");
			out.println("</ul>");
			
		} catch (SQLException e) {
			out.println("</ul>");
			out.println("<h2>A generic error was encountered. </h2>");
			out.println("<p>Error message:</p> <pre> " + e.getMessage() + "</pre>");
		}

		out.println("</body></html>");
	}

The context.xml file describing my JNDI data source goes into META-INF/context.xml in the WAR file.

<?xml version="1.0" encoding="UTF-8"?>

<Context>
	<Resource name="jdbc/UCPTest" 
		auth="Container"
		factory="oracle.ucp.jdbc.PoolDataSourceImpl"
		type="oracle.ucp.jdbc.PoolDataSource"
		description="UCP JNDI Connection Pool"
		connectionFactoryClassName="oracle.jdbc.pool.OracleDataSource"
		initialPoolSize="10" 
		minPoolSize="10" 
		maxPoolSize="10"
		inactiveConnectionTimeout="20" 
		user="martin" 
		password="..."
		url="jdbc:oracle:thin:@//oracle-dev/XEPDB1"
		connectionPoolName="UCPTest" 
		validateConnectionOnBorrow="true"
		fastConnectionFailoverEnabled="false" />
</Context>

Note the URL: it references oracle-dev as part of the JDBC connection string. That’s not the DNS name of my VM, and I’ll get back to that later.

As always, I debugged/compiled/tested the code until I was confident it worked. Once it did, I created a WAR file named servletTest.war for use with Tomcat.

Services

In a microservice-driven world it is very important NOT to hard-code IP addresses or machine names. In the age of on-premises deployments it was very common to hard-code machine names and/or IP addresses into applications. Needless to say that … wasn’t ideal.

In Kubernetes you expose your deployments as services to the outside world to enable connectivity. With the database outside of Kubernetes in its own VM – which appears to be the preferred way of doing this at the time of writing – I need a service that points at something outside the cluster, and a service of type ExternalName appears to be the right choice. The YAML file defining the service is shown here:

$ cat service.yml 
---
kind: Service
apiVersion: v1
metadata:
  name: oracle-dev
spec:
  type: ExternalName
  externalName: oraclexe.example.com

If you scroll up you will notice that this service’s name matches the JDBC URL in the context.xml file shown earlier. The name is totally arbitrary; I went with oracle-dev to indicate this is a playground environment. The identifier listed as the external name must of course resolve to the appropriate machine:

$ ping -c 1 oraclexe.example.com 
PING oraclexe.example.com (192.168.99.113) 56(84) bytes of data.
64 bytes from oraclexe.example.com (192.168.99.113): icmp_seq=1 ttl=64 time=0.290 ms

--- oraclexe.example.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms

A quick kubectl apply later the service is defined in Minikube:

$ kubectl get service/oracle-dev -o wide
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP            PORT(S)   AGE   SELECTOR
oracle-dev   ExternalName   <none>       oraclexe.example.com   <none>    14m   <none>

That should be sorted! Whenever applications in the Kubernetes cluster refer to oracle-dev, they are pointed to oraclexe.example.com, my database VM.
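
A quick way to double-check the name resolution from inside the cluster is a throw-away pod performing a DNS lookup; the ExternalName service should come back as a CNAME pointing at the database VM. The busybox image and pod name below are arbitrary choices of mine:

$ kubectl run dnstest --image=busybox --restart=Never --rm -it -- nslookup oracle-dev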

Docker all the things, eh, the application

With the WAR file created and the service for the JDBC connection pool in place, I could start thinking about deploying the application.

Create the Docker image

The first step is to package the application in Docker. To do so I used the official Tomcat Docker image. The Dockerfile is trivial: specify tomcat:8.5-slim as the base image and copy the WAR file into /usr/local/tomcat/webapps. That’s it! The CMD directive invokes “catalina.sh run” to start the servlet engine. A reconstruction of the Dockerfile is shown below, followed by the output created by “docker build”:
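
The reconstruction is based on the description above and the build steps visible in the output, not a verbatim copy of my original file:

$ cat Dockerfile
FROM tomcat:8.5-slim
COPY servletTest.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]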

$ docker build -t servlettest:v1 .
Sending build context to Docker daemon  10.77MB
Step 1/3 : FROM tomcat:8.5-slim
8.5-slim: Pulling from library/tomcat
27833a3ba0a5: Pull complete 
16d944e3d00d: Pull complete 
9019de9fce5f: Pull complete 
9b053055f644: Pull complete 
110ab0e6e34d: Pull complete 
54976ba77289: Pull complete 
0afe340b9ec5: Pull complete 
6ddc5164be39: Pull complete 
c622a1870b10: Pull complete 
Digest: sha256:7c1ed9c730a6b537334fbe1081500474ca54e340174661b8e5c0251057bc4895
Status: Downloaded newer image for tomcat:8.5-slim
 ---> 78e1f34f235b
Step 2/3 : COPY servletTest.war /usr/local/tomcat/webapps/
 ---> c5b9afe06265
Step 3/3 : CMD ["catalina.sh", "run"]
 ---> Running in 2cd82f7358d3
Removing intermediate container 2cd82f7358d3
 ---> 342a8a2a31d3
Successfully built 342a8a2a31d3
Successfully tagged servlettest:v1

Once the docker image is built (remember to use eval $(minikube docker-env) before you run docker build to make the image available to Minikube) you can reference it in Kubernetes.

Deploying the application

The next step is to run the application. I have previously used “kubectl run” but since that’s deprecated I switched to creating deployments as YAML files. I dumped the deployment generated by “kubectl run” into a YML file and used the documentation to adjust it to what I need. The result is shown here:

$ cat deployment.yml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: servlettest
  name: servlettest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servlettest
  template:
    metadata:
      labels:
        app: servlettest
    spec:
      containers:
      - image: servlettest:v1
        imagePullPolicy: IfNotPresent
        name: servlettest
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always

The application is deployed into Kubernetes using “kubectl apply -f deployment.yml”. A few moments later I can see it appearing:

$ kubectl get deploy/servlettest -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS    IMAGES           SELECTOR
servlettest   1/1     1            1           7m23s   servlettest   servlettest:v1   app=servlettest

I’m delighted to see 1/1 pods ready to serve the application! It looks like everything went to plan. I can see the pod as well:

$ kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
servlettest-6c6598b487-wssp6   1/1     Running   0          8m12s   172.17.0.4   minikube   <none>           <none>

Accessing the application

I now have an application running in Kubernetes! Umm, but what next? I haven’t been able to get access to the application yet. To do so, I need to expose another service. The Minikube quick start suggests exposing the application service via a NodePort, and that seems to be the quickest way. In a “proper Kubernetes environment” in the cloud you’d most likely go for a service of type LoadBalancer to expose applications to the public.

So far I have only shown you YAML files to configure Kubernetes resources, but many things can be done via kubectl directly. For example, it’s possible to expose the deployment I just created as a service on the command line:

$ kubectl expose deployment servlettest --type NodePort
service/servlettest exposed

$ kubectl get service/servlettest -o wide
NAME          TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
servlettest   NodePort   10.100.249.181   <none>        8080:31030/TCP   71s   app=servlettest

One of the neat things with Minikube is that you don’t need to worry too much about how to actually access that service: the minikube service command provides the URL for the service, and it even opens it in the default browser if you so want. I’ll see if I can get the ingress add-on to work as well, but that should probably go into its own post…

In my case I’m only interested in the URL because I want a textual representation of the output:

$ minikube service servlettest --url
http://192.168.99.100:31030

$ lynx --dump http://192.168.99.100:31030/servletTest/JDBCTestServlet
                                Progress report

     * Trying to look up the JNDI data source ... Data Source successfully
       acquired
     * Trying to get a connection to the database ... successfully grabbed
       a connection from the pool
     * Trying run a query against the database ... query against XE
       completed successfully
     * No errors encountered, we are done

The URL I am passing to lynx is made up of the IP and node port (192.168.99.100:31030) as well as the servlet path based on the WAR file’s name.

So that’s it! I have a working application deployed into Minikube, using a service of type “ExternalName” to query my database. Although these steps have been created using Minikube, they shouldn’t be different on a “real” Kubernetes environment.

What about the database?

The JNDI resource definition for my connection pool requested 10 sessions, and they have been dutifully created:

SQL> select username, machine from v$session where username = 'MARTIN';

USERNAME                       MACHINE
------------------------------ ----------------------------------------
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6
MARTIN                         servlettest-6c6598b487-wssp6

10 rows selected.

The “machine” matches the pod name as shown in kubectl output:

$ kubectl get pods -l app=servlettest -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
servlettest-6c6598b487-wssp6   1/1     Running   0          30m   172.17.0.4   minikube   <none>           <none>

Well I guess you can actually deploy the world’s most basic JDBC application into Kubernetes. Whether this works for you is an entirely different matter. As many much cleverer people than me have been pointing out for some time: just because a technology is “hot” doesn’t mean it’s the best tool for the job. It’s always worthwhile weighing up pros and cons for your solution carefully, and just because something can be done (technically speaking) doesn’t mean it should be done.