What is Containerized Middleware?

Rahul Chenny
Apr 17, 2022

Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of their organization.

Background

Most organizations are at different levels of maturity in adopting container-based infrastructure, typically on a Kubernetes platform. Key drivers include nimbleness, automation, workload scalability, and portability in a hybrid multi-cloud environment.

Typically, each container runs workloads: applications (apps), middleware (m/w), and agents bundled as curated Docker images. Container lifecycles are orchestrated by a Kubernetes platform such as native Kubernetes, Red Hat OpenShift, or VMware Tanzu. This approach represents a technology shift from deploying middleware on bare metal (i.e., traditional m/w deployment).

Objective

The goal of this article is to provide an overview of containerized m/w and to highlight a few differences from traditional m/w deployment, enabling application architects, operations architects, and SREs to make informed adoption decisions.

Containerized Middleware Overview

This section provides an overview of the “Functional Aspect” (i.e., the purpose of middleware) and the “Deployment Aspect” (i.e., what, how, and where one deploys it).

Figure: Layered representation of containerized and traditional middleware deployment

Functional Aspect

At the outset, the role and definition of middleware have not changed: its purpose is to provide application-enabling services. Broadly speaking, this includes the following functions:

1. Store or retrieve application data (database functions), for example MySQL, IBM Db2, MongoDB.

2. Send or receive data synchronously or asynchronously (integration functions), e.g. IBM MQ (formerly MQSeries), RabbitMQ.

3. Application-supporting functions (services such as runtime, transaction, and security), for example Apache Tomcat, IBM WebSphere Liberty.

Deployment Aspect

Containerized middleware is middleware software deployed in a container hosted by a Pod, which can be effectively orchestrated and managed by a Kubernetes (k8s) platform.

Deploying containerized middleware, for example MongoDB, a NoSQL database, is simple: one can pull a MongoDB image from an image registry such as Docker Hub and deploy it with a kubectl command on a k8s platform. Today, commercial container platforms also provide a service catalog for the same purpose. Note: license requirements vary by middleware.
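As a minimal sketch, the MongoDB deployment described above might look like the following Kubernetes Deployment manifest (the image tag, labels, and storage choice are illustrative assumptions, not recommendations):

```yaml
# Illustrative Deployment for the community MongoDB image from Docker Hub.
# Apply with: kubectl apply -f mongodb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:6.0          # assumed tag; pin to a vetted version
          ports:
            - containerPort: 27017  # default MongoDB port
          volumeMounts:
            - name: data
              mountPath: /data/db   # default data path in the mongo image
      volumes:
        - name: data
          emptyDir: {}              # ephemeral; use a PersistentVolumeClaim in practice
```

A service catalog entry on a commercial platform typically generates a manifest along these lines behind the scenes.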

Differences between Containerized and Traditional m/w deployment

An enumeration of the differences between the two is a significant topic in itself; below is an illustration using two areas: installation and DevSecOps.

Installation
Containerized m/w: A typical installation uses a container image deployed through an operator's Custom Resource Definition (CRD), whose custom resources also capture configuration, security, health monitoring, and node affinity details. Containerized m/w is thus treated much like an application. Images are typically sourced from a public or private repository:

  • A private repository, e.g. Red Hat Quay; individual organizations can store their own images in a private container registry.
  • A public repository, e.g. Docker Hub or Bitnami.
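To make the CRD-based pattern concrete, a custom resource for a hypothetical middleware operator might look like the sketch below. The `middleware.example.com/v1` API group, the `MiddlewareServer` kind, and all field names are invented for illustration; real operators each define their own schema:

```yaml
# Hypothetical custom resource; a real operator (e.g. for MongoDB or IBM MQ)
# defines its own CRD with different fields.
apiVersion: middleware.example.com/v1
kind: MiddlewareServer
metadata:
  name: orders-db
spec:
  version: "6.0"            # operator reconciles the matching image tag
  replicas: 3
  security:
    tls: true               # declarative security settings
  monitoring:
    enabled: true           # operator wires up health probes and metrics
  nodeAffinity:
    dedicatedNodes: true    # illustrative affinity toggle
```

The point of the pattern is that configuration, security, monitoring, and placement all live in one declarative object that the operator continuously reconciles.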

Traditional m/w deployment: Typically performed with an installer executable, a package manager, or by extracting a tar/zip archive.

DevSecOps
Containerized m/w: DevSecOps practices map directly onto Day 1 and Day 2 operations for containerized m/w and are the preferred approach. In fact, GitOps, a DevSecOps variant, is widely adopted, and the K8s platform's API capabilities make it straightforward to implement.
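As one illustration of the GitOps pattern, an Argo CD Application (Argo CD being one popular GitOps tool; the repository URL, path, and namespace below are placeholders) can declare that the middleware manifests in a Git repository are the desired cluster state:

```yaml
# Illustrative Argo CD Application; repo URL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mongodb-middleware
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/middleware-manifests.git
    path: mongodb            # directory containing the middleware manifests
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: databases
  syncPolicy:
    automated:
      prune: true            # remove resources deleted from Git
      selfHeal: true         # revert out-of-band cluster changes
```

With this in place, changes merged to Git are automatically reconciled into the cluster, which is what makes Day 2 operations auditable and repeatable.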

Traditional m/w deployment: While it is possible to adopt DevSecOps for traditional m/w and realize similar benefits, it is relatively complex and cumbersome to implement.

Note: In addition to the two areas noted above, there are several other differences across Day 1 and Day 2 operations, including configuration, packaging, workload resources, observability, and developer friendliness. These are perhaps best covered in another post.

Conclusion

Traditional m/w deployment has been around for several decades and has key benefits, including widespread skills and a simpler install. In comparison, containerized m/w offers greater benefits owing to its simplicity, speed, and scalability.

Authors:

Rahul Chenny is an Executive Architect for managed offerings in Container Platforms and DevSecOps at Kyndryl. He has also led and co-authored a framework for assessing the maturity model for container infrastructure.

Hariharan Venkitachalam is a Senior Technical Staff Member (STSM) and Master Inventor at Kyndryl. He has also developed the “Certified Containers” offering to meet enterprise security and industry-standard criteria for packaging containerized workloads.
