Updated: August 2, 2022
We’ve all been there. You’ve read a lot about the basics of Docker, Kubernetes, Pods, ReplicaSets, Deployments, and more: the different parts used to build cloud native applications.
Now, you’re looking for a practical example where you can connect all the parts together. That’s exactly the purpose of our current series of articles on cloud native applications. Today, we’re starting off with an article on creating a mail server environment with Docker. Practically, this is a stepping stone for the rest of our cloud-native deployment articles.
Short Backstory on Deploying a Mail Server App into a Docker Container
Here’s a little backstory: I was recently going over the process of converting a standard RPM package to a cloud application with a friend who’s a Software Engineer. He’d already read a lot of what’s out there about containerization and was ready to take the next step, to try to do it himself.
That’s how we got here: experimenting and going over the basic steps of deploying a mail server application into a Docker container, and then into a Kubernetes cluster. We hope that sharing this hands-on experiment with our community piques your interest, as it did ours.
We’re going to show you every step, what issues we encountered, and how we solved them.
We want to avoid switching to ‘cloud native’ just because it’s a trendy buzzword. So, as we examine the technology, we also take a look at who can benefit the most from this particular mail server approach.
TL;DR
If you’re a veteran Kubernetes user, this article may not add much value to your know-how.
However, the main objective is to bring together the typical steps that a user needs to follow when deploying a Dockerized mail server application to a Kubernetes cluster.
You can think of this article as a reference you can come back to whenever you need a quick refresher on the most common resources used when deploying mail server applications with Docker and Kubernetes.
Through our discussion, we cover converting the RPM to a Docker image, publishing to Docker Hub, creating a cluster in Google Cloud Platform, and deploying our image.
The Sample Application — Axigen Mail Server
For this demonstration, we’re going to use the Axigen Docker Mail Server as a sample application. Why? While it’s specific to mail server technology, it shares many requirements with modern web applications:
- A front end part where user requests are received.
- A backend that stores state.
- A static component that displays a nice user interface and connects to the front end.
This experiment can, therefore, be replicated with other applications as well.
Note: Axigen already provides a fully functional, ready-to-use Docker image on Docker Hub. You can find all the information about deploying and running Axigen in Docker and try it yourself here.
A Kubernetes Helm chart will also be available very soon. Both are intended for actual production use and are adjusted as such. Stay tuned for more.
Why Run a Cloud Native Mail Server
Because a mail server is a stateful application, the benefits of turning it into a cloud native, container-based app have only recently become apparent:
1. Loose coupling with the underlying operating system (the one running the container)
- the container can be easily moved without repackaging;
- control of the underlying operating system (monitoring, software upgrades, and more) is handed off, unlike in the ‘rehost’ model, where the customer still needs to manage an operating system image provided by the cloud provider;
- independence from the provider itself (no more application reinstall and data migration when switching providers).
2. Significantly simplified scale-up
- while vertical scaling is fairly simple in a virtualized environment (add CPUs, memory, and disk capacity), horizontal scaling is considerably more limited;
- the container paradigm forces the application developer/packager to ‘think cloud native’, separating compute from storage in the application model itself;
- due mainly to the overwhelmingly lower overhead of a container versus a virtual machine, the number of instances can scale from tens to thousands, lowering the burden on the software itself to handle very high concurrency levels.
Who can benefit the most from a Cloud Native mail server approach?
The question is: why not just ‘rehost’? After all, there are numerous platforms out there (AWS, Azure, and IBM Cloud, to name a few) that offer cloud machines on which the same product package can be installed and operated just as easily as (if not more easily than) on premises.
Since going ‘cloud native’ requires a significant initial investment in research and training, and most often pays off further down the road, it makes the most sense for users who stand to gain the most from its benefits:
- software developers, who can pass the ‘cloud native’ benefits on to their customers;
- medium to large companies, which have the resources needed for the initial investment;
- service providers, for whom scalability and maintenance costs reduction are important business factors.
Now that we’ve answered the why and for whom questions, let’s get started on the how.
The Cloud Native Approach — Replatform
We’ve already touched on the ‘rehost’ or ‘lift and shift’ approach: virtualizing a physical machine and importing it into a cloud provider, or simply migrating an existing on-prem virtual machine to a cloud service.
With the steps below, we’re closing in significantly on our holy grail, cloud nativeness, via the ‘replatform’ approach.
Creating an Email Server Environment with Docker
Here’s the simplest way to build a container image from a legacy packaged application (RPM, DEB).
For starters, since we need to run this application on Kubernetes, the first step we need to take is to Dockerize it. That is, to enable it to run on Docker.
While you can choose between several container software providers, Docker still reigns supreme, with a staggering 79% market share.
As for the standard package to start from, we used the RPM distribution, based on what we know best and use most (Red Hat rather than Debian / Ubuntu).
Creating a Docker image is quite similar to installing a package on a ‘real’ operating system. We assume that the user has basic knowledge of the command line and has Docker installed. The goal, as stated above, is to show how to obtain a container image starting from a CentOS base image.
Note: An important takeaway is the difference between an image and an instance (‘container’ in Docker speak; we’ll use the term ‘instance’ to keep a clear distinction from ‘container’ as a concept).
An ‘instance’ (or container) is an equivalent of a machine; it has an IP, one can run commands in a shell in it, and so on. An ‘image’ is the equivalent of a package; you always use an ‘image’ to create an ‘instance’ (or container).
1. Creating a CentOS Docker Instance
Let’s go ahead and create this Docker instance from the CentOS image:
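The command itself is a plain interactive ‘docker run’ (assuming the official centos:latest image from Docker Hub, which matches the ‘docker ps’ output below):
ion@IN-MBP ~ % docker run -it centos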
[root@7294b716163d /]#
From another terminal, we can observe the newly created instance:
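That’s a job for ‘docker ps’:
ion@IN-MBP ~ % docker ps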
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7294b716163d centos:latest "/bin/bash" 20 seconds ago Up 20 seconds zen_austin
Next, we perform OS updates, as we would with any regular operating system instance:
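On CentOS 8, that means a dnf update (the -y flag below is an assumption, to skip the confirmation prompt):
[root@7294b716163d /]# dnf update -y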
Failed to set locale, defaulting to C.UTF-8
CentOS-8 - AppStream 4.6 MB/s | 7.0 MB 00:01
CentOS-8 - Base 2.0 MB/s | 2.2 MB 00:01
CentOS-8 - Extras 9.7 kB/s | 5.9 kB 00:00
Dependencies resolved.
================================================================================================================================================================================================================================================
Package Architecture Version Repository Size
================================================================================================================================================================================================================================================
Upgrading:
audit-libs x86_64 3.0-0.13.20190507gitf58ec40.el8 BaseOS 116 k
binutils x86_64 2.30-58.el8_1.2 BaseOS 5.7 M
centos-gpg-keys noarch 8.1-1.1911.0.9.el8 BaseOS 12 k
centos-release x86_64 8.1-1.1911.0.9.el8 BaseOS 21 k
centos-repos x86_64 8.1-1.1911.0.9.el8 BaseOS 13 k
coreutils-single x86_64 8.30-6.el8_1.1 BaseOS 630 k
glibc x86_64 2.28-72.el8_1.1 BaseOS 3.7 M
glibc-common x86_64 2.28-72.el8_1.1 BaseOS 836 k
glibc-minimal-langpack x86_64 2.28-72.el8_1.1 BaseOS 48 k
kexec-tools x86_64 2.0.19-12.el8_1.2 BaseOS 482 k
libarchive x86_64 3.3.2-8.el8_1 BaseOS 359 k
openldap x86_64 2.4.46-11.el8_1 BaseOS 352 k
openssl-libs x86_64 1:1.1.1c-2.el8_1.1 BaseOS 1.5 M
python3-rpm x86_64 4.14.2-26.el8_1 BaseOS 156 k
rpm x86_64 4.14.2-26.el8_1 BaseOS 539 k
rpm-build-libs x86_64 4.14.2-26.el8_1 BaseOS 153 k
rpm-libs x86_64 4.14.2-26.el8_1 BaseOS 336 k
sqlite-libs x86_64 3.26.0-4.el8_1 BaseOS 579 k
systemd x86_64 239-18.el8_1.5 BaseOS 3.5 M
systemd-libs x86_64 239-18.el8_1.5 BaseOS 562 k
systemd-pam x86_64 239-18.el8_1.5 BaseOS 232 k
systemd-udev x86_64 239-18.el8_1.5 BaseOS 1.3 M
Installing dependencies:
xkeyboard-config noarch 2.24-3.el8 AppStream 828 k
kbd-legacy noarch 2.0.4-8.el8 BaseOS 481 k
kbd-misc noarch 2.0.4-8.el8 BaseOS 1.4 M
openssl x86_64 1:1.1.1c-2.el8_1.1 BaseOS 686 k
Installing weak dependencies:
libxkbcommon x86_64 0.8.2-1.el8 AppStream 116 k
diffutils x86_64 3.6-5.el8 BaseOS 359 k
glibc-langpack-en x86_64 2.28-72.el8_1.1 BaseOS 818 k
kbd x86_64 2.0.4-8.el8 BaseOS 392 k
openssl-pkcs11 x86_64 0.4.8-2.el8 BaseOS 64 k
Transaction Summary
================================================================================================================================================================================================================================================
Install 9 Packages
Upgrade 22 Packages
[………………………………]
Upgraded:
audit-libs-3.0-0.13.20190507gitf58ec40.el8.x86_64 binutils-2.30-58.el8_1.2.x86_64 centos-gpg-keys-8.1-1.1911.0.9.el8.noarch centos-release-8.1-1.1911.0.9.el8.x86_64 centos-repos-8.1-1.1911.0.9.el8.x86_64
coreutils-single-8.30-6.el8_1.1.x86_64 glibc-2.28-72.el8_1.1.x86_64 glibc-common-2.28-72.el8_1.1.x86_64 glibc-minimal-langpack-2.28-72.el8_1.1.x86_64 kexec-tools-2.0.19-12.el8_1.2.x86_64
libarchive-3.3.2-8.el8_1.x86_64 openldap-2.4.46-11.el8_1.x86_64 openssl-libs-1:1.1.1c-2.el8_1.1.x86_64 python3-rpm-4.14.2-26.el8_1.x86_64 rpm-4.14.2-26.el8_1.x86_64
rpm-build-libs-4.14.2-26.el8_1.x86_64 rpm-libs-4.14.2-26.el8_1.x86_64 sqlite-libs-3.26.0-4.el8_1.x86_64 systemd-239-18.el8_1.5.x86_64 systemd-libs-239-18.el8_1.5.x86_64
systemd-pam-239-18.el8_1.5.x86_64 systemd-udev-239-18.el8_1.5.x86_64
Installed:
libxkbcommon-0.8.2-1.el8.x86_64 diffutils-3.6-5.el8.x86_64 glibc-langpack-en-2.28-72.el8_1.1.x86_64 kbd-2.0.4-8.el8.x86_64 openssl-pkcs11-0.4.8-2.el8.x86_64 xkeyboard-config-2.24-3.el8.noarch kbd-legacy-2.0.4-8.el8.noarch
kbd-misc-2.0.4-8.el8.noarch openssl-1:1.1.1c-2.el8_1.1.x86_64
Complete!
Great, everything is up to date now.
2. Installing Axigen in the Container Instance
First, get the RPM:
[root@7294b716163d app]# curl -O https://www.axigen.com/usr/files/axigen-10.3.1/axigen-10.3.1.x86_64.rpm.run
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 386M 100 386M 0 0 9.7M 0 0:00:39 0:00:39 --:--:-- 9.8M
Then install it:
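The downloaded .run file is a self-extracting installer, so something along these lines should launch it:
[root@7294b716163d app]# sh axigen-10.3.1.x86_64.rpm.run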
Please accept the terms of the license before continuing
Press ENTER to display the license
(after reading it press 'q' to exit viewer)
q
Do you accept the terms of the license? (yes/no): y
======================================
RPM Package for x86_64 Installer for AXIGEN Mail Server 10.3.1-1
======================================
Detecting OS flavor... CentOS 8.1
Installer started
Axigen embedded archive extracted successfully
Please select one of the options displayed below:
==== Main options
1. Install axigen-10.3.1-1
9. Exit installer
0. Exit installer without deleting temporary directory
===== Documentation for axigen-10.3.1-1
4. Show the RELEASE NOTES
5. Show the README file
6. Show other licenses included in the package
7. Show manual install instructions
8. Show manual uninstall instructions
Your choice: 1
Verifying... ################################# [100%]
Preparing... ################################# [100%]
Updating / installing...
1:axigen-10.3.1-1 ################################# [100%]
Thank you for installing AXIGEN Mail Server.
In order to configure AXIGEN for the first time, please connect
to WebAdmin by using one of the URLs below:
https://172.17.0.2:9443/
https://[2a02:2f0b:a20c:a500:0:242:ac11:2]:9443/
Starting AXIGEN Mail Server...Axigen[336]: INFO: Starting Axigen Mail Server version 10.3.1.5 (Linux/x64)
Axigen[336]: SUCCESS: supervise ready... (respawns per minute: 3)
Axigen[336]: INFO: supervise: spawning a new process to execute Axigen Mail Server version 10.3.1.5 (Linux/x64)
[ OK ]
Installer finished.
Now we have Axigen installed in the container. It’s even already running (the installer starts it automatically):
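A quick look at the process list confirms it:
[root@7294b716163d app]# ps ax | grep axigen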
336 ? Ss 0:00 /opt/axigen/bin/axigen --max-respawns 3 -W /var/opt/axigen
337 ? SNl 0:01 /opt/axigen/bin/axigen --max-respawns 3 -W /var/opt/axigen
351 ? Sl 0:00 axigen-tnef
375 pts/0 S+ 0:00 grep --color=auto axigen
Let’s see what happens when we leave the shell:
exit
ion@IN-MBP ~ %
Is the instance still running?
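Let’s ask ‘docker ps’ again:
ion@IN-MBP ~ % docker ps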
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ion@IN-MBP ~ %
No; that’s because the CentOS image runs ‘bash’ as the main container process; when bash exits, the container stops as well.
This is in fact the entire nature of experimentation: trying out different means of achieving the desired results and seeing exactly where that gets us. Good or bad.
This is a crucial difference between a container and a classical Linux ‘host’: there are no ‘daemons’, in other words, no need to fork into the background (the way programs started by SystemV-style, and systemd, init scripts usually work).
We can take advantage of this when creating the Axigen image. Start the container again:
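‘docker start’ takes the container ID (or name) and echoes it back:
ion@IN-MBP ~ % docker start 7294b716163d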
7294b716163d
And check if it’s running:
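Once more with ‘docker ps’:
ion@IN-MBP ~ % docker ps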
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7294b716163d centos:latest "/bin/bash" 13 minutes ago Up 41 seconds zen_austin
Attach to it and check if Axigen is still running:
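One way to do this is ‘docker attach’ (a ‘docker exec -it 7294b716163d bash’ would work just as well):
ion@IN-MBP ~ % docker attach 7294b716163d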
[root@7294b716163d /]# ps ax
PID TTY STAT TIME COMMAND
1 pts/0 Ss 0:00 /bin/bash
14 pts/0 R+ 0:00 ps ax
It’s not, and the reason for this shouldn’t come as a surprise: the Axigen process (and all the subprocesses it forked and threads it started) was stopped along with the original bash, ‘the grandfather of them all’.
Nonetheless, Axigen is still installed:
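Its data directory is intact (the path below is inferred from the ‘-W /var/opt/axigen’ argument seen earlier):
[root@7294b716163d /]# ls -la /var/opt/axigen/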
total 288
drwxr-xr-x 16 axigen axigen 4096 May 25 15:07 .
drwxr-xr-x 1 root root 4096 May 25 15:07 ..
-rw-r----- 1 axigen axigen 2969 May 25 15:07 axigen_cert.pem
-rw-r----- 1 axigen axigen 245 May 25 15:07 axigen_dh.pem
drwxr-xr-x 2 axigen axigen 4096 May 25 15:07 aximigrator
-rw------- 1 axigen axigen 215556 Feb 7 12:57 cacert_default.pem
drwxr-x--x 2 axigen axigen 4096 May 25 15:07 cyren
drwx--x--- 2 axigen axigen 4096 May 25 15:07 filters
drwx--x--- 3 axigen axigen 4096 May 25 15:07 kas
drwx--x--- 4 axigen axigen 4096 May 25 15:07 kav
drwx------ 2 axigen axigen 4096 May 25 15:07 letsencrypt
drwxr-x--- 2 axigen axigen 4096 May 25 15:07 log
-rw------- 1 axigen axigen 121 Feb 7 12:57 mobile_ua.cfg
drwxr-x--- 67 axigen axigen 4096 May 25 15:07 queue
drwxr-x--- 2 axigen axigen 4096 May 25 15:07 reporting
drwxr-x--- 2 axigen axigen 4096 May 25 15:07 run
drwxr-x--- 2 axigen axigen 4096 May 25 15:07 serverData
drwx--x--- 5 axigen axigen 4096 May 25 15:07 templates
drwx--x--- 8 axigen axigen 4096 May 25 15:07 webadmin
drwx--x--- 3 axigen axigen 4096 May 25 15:07 webmail
[root@7294b716163d /]# ls -la /opt/axigen/bin/
total 135028
drwxr-x--x 2 root root 4096 May 25 15:07 .
drwxr-x--x 5 root root 4096 May 25 15:07 ..
-rwxr-xr-x 1 root root 81771736 Feb 7 12:57 axigen
-rwxr-xr-x 1 root root 12824731 Feb 7 12:57 axigen-migrator
-rwxr-xr-x 1 root root 11838532 Feb 7 12:57 axigen-tnef
-rwxr-xr-x 1 root root 1049336 Feb 7 12:57 cyren.bin
-rwxr-xr-x 1 root root 205632 Feb 7 12:58 kasserver
-rwxr-xr-x 1 root root 180992 Feb 7 12:58 kavserver
-rwxr-xr-x 1 root root 663136 Feb 7 12:57 mqview
-rwxr-xr-x 1 root root 29704280 Feb 7 12:57 sendmail
Good. We have a container with Axigen installed. However, our goal was an image, not a container.
3. Creating an Image from a Container
Stop the container again by leaving the shell:
ion@IN-MBP ~ % docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ion@IN-MBP ~ %
And then summon the Docker magic:
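‘docker commit’ creates an image from a container; here we reference the container by the name Docker assigned it earlier:
ion@IN-MBP ~ % docker commit zen_austin my_new_and_shiny_axigen_image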
sha256:e7ca09e1933bff546d7acbd7090543e2a4f886ee3aa60b7cbf04eefd70fcbe3b
Excellent; from the existing container, we’ve created a new image (my_new_and_shiny_axigen_image), which we may now use to create another container, with the OS updates already applied and Axigen installed:
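For example, a fresh interactive container from the new image:
ion@IN-MBP ~ % docker run -it my_new_and_shiny_axigen_image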
[root@479421167f8d /]# dnf update
Last metadata expiration check: 1:01:28 ago on Mon 25 May 2020 03:03:43 PM UTC.
Dependencies resolved.
Nothing to do.
Complete!
[root@479421167f8d /]# rpm -qa | grep axigen
axigen-10.3.1-1.x86_64
We still have to configure the container to start the Axigen binary on instantiation.
The newly created image has inherited the entrypoint of the CentOS image, which is ‘bash’. We could, of course, start Axigen manually:
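For instance, via the init script the RPM installs (the exact path is an assumption here):
[root@479421167f8d /]# /etc/init.d/axigen start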
Starting AXIGEN Mail Server...Axigen[29]: INFO: Starting Axigen Mail Server version 10.3.1.5 (Linux/x64)
Axigen[29]: SUCCESS: supervise ready... (respawns per minute: 3)
Axigen[29]: INFO: supervise: spawning a new process to execute Axigen Mail Server version 10.3.1.5 (Linux/x64)
[ OK ]
[root@479421167f8d /]#
[root@479421167f8d /]# ps ax | grep axigen
29 ? Ss 0:00 /opt/axigen/bin/axigen --max-respawns 3 -W /var/opt/axigen
30 ? Sl 0:00 /opt/axigen/bin/axigen --max-respawns 3 -W /var/opt/axigen
42 ? Sl 0:00 axigen-tnef
66 pts/0 S+ 0:00 grep --color=auto axigen
But this is not the proper way to run an app in a container. The correct way is to configure, directly in the image, the binary to be executed when the container starts.
4. Setting the Entrypoint in the Container
To do that, we must revisit the image creation step:
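‘docker commit’ accepts a ‘--change’ option that overrides Dockerfile-style directives on the committed image; a plausible invocation (reusing the image name for simplicity) is:
ion@IN-MBP ~ % docker commit --change='CMD ["/opt/axigen/bin/axigen", "--foreground"]' zen_austin my_new_and_shiny_axigen_image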
sha256:ef7ce0fd9a47acb4703e262c4eb64c3564a54866b125413c17a63c1f832d1443
ion@IN-MBP ~ %
In the image configuration (the CMD directive set via ‘--change’ above), we add the name of the command and the arguments to be executed when the container is started.
Remember that the main process of the container must not fork in the background; it must continue to run, otherwise the container will stop. This is the reason the ‘--foreground’ argument is needed.
Like Axigen, most Linux servers have such an argument, instructing them to run in foreground instead of forking in background.
5. Running the Created Image
Let’s check the updated image:
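This time we start a container from it in detached mode:
ion@IN-MBP ~ % docker run -dt my_new_and_shiny_axigen_image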
fd1b608174c402787152f5934294f370dfdb4d9b0f0b25e4edf4725dbe4c5700
ion@IN-MBP ~ %
We’ve changed the ‘docker run’ parameter from ‘-it’ to ‘-dt’; without diving too deep into details, this instructs Docker to detach from the process. Axigen is the main process here, so interactive mode does not make sense the way it would for a bash shell.
Docker allows us to run an additional (secondary) process in the container by using ‘exec’. We’ll run bash in interactive mode, so that we can review what happens inside the container.
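For instance:
ion@IN-MBP ~ % docker exec -it fd1b608174c4 bash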
[root@fd1b608174c4 /]# ps ax
PID TTY STAT TIME COMMAND
1 pts/0 Ss+ 0:00 /opt/axigen/bin/axigen --foreground
7 pts/0 SNl+ 0:00 /opt/axigen/bin/axigen --foreground
19 pts/0 Sl+ 0:00 axigen-tnef
39 pts/1 Ss 0:00 bash
54 pts/1 R+ 0:00 ps ax
Ok, so Axigen is running. Is the WebAdmin interface (port 9000) running as well?
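A quick telnet from inside the container (assuming the telnet client is available there) tells us:
[root@fd1b608174c4 /]# telnet localhost 9000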
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
Host: localhost
HTTP/1.1 303 Moved Temporarily
Server: Axigen-Webadmin
Location: /install
Connection: Close
It is, and it redirects us to the initial setup flow (/install).
Now, is the 9000 WebAdmin port also available from outside the container?
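Same test, this time from the host:
ion@IN-MBP ~ % telnet localhost 9000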
Trying ::1...
Connection failed: Connection refused
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
We need to instruct the container, upon instantiation, to map port 9000 to the host, so it can be accessed from the outside.
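That means yet another ‘docker run’, with one extra parameter:
ion@IN-MBP ~ % docker run -dt -p 9000:9000 my_new_and_shiny_axigen_image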
1dcc95e912bafc97ba63484abfeb7e2d1983d524b8834a5ccc62928796259818
ion@IN-MBP ~ %
Notice the ‘-p 9000:9000’ parameter. This instructs Docker to make container port 9000 available on the host as well, on the same port number. And now, voilà, the same telnet test from the host:
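ion@IN-MBP ~ % telnet localhost 9000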
Trying ::1...
Connected to localhost.
Escape character is '^]'.
GET / HTTP/1.1
Host: localhost
HTTP/1.1 303 Moved Temporarily
Server: Axigen-Webadmin
Location: /install
Connection: Close
Wrapping Up
So what have we learned from this little experiment?
1. Converting an existing RPM / DEB package to a container image is fairly simple:
- instantiate a container with an OS of your preference, and for which you have the target software package;
- from the container, install the software (and optionally, perform some configurations);
- stop the container and convert it into an image, making sure to set the appropriate entrypoint (CMD);
- create as many containers as desired, using the new image;
- publish the image, if you need to make it available to others as well.
2. The image we’ve created above is not yet ready for production. Here’s what it would take for it to become ready:
- implement persistent data storage (it’s an email service, it stores mailboxes, so some data needs to be persistent);
- create network communication definitions:
- Docker will, by default, allow the processes to initiate connections to the outside world through NAT;
- we need, though, to be able to receive connections (email routing, client access) on specific ports.
- define options to preconfigure the Axigen instance upon startup (we may want to deploy it with a specific license, for example).
Now that a basic image is available, we would need to do some further digging into addressing the issues above, as well as automate the creation of the image (what happens when an updated CentOS image is available? How about when a new Axigen package is available?).
These topics go well beyond the scope of this article, but here are a few hints at what it would take:
- use a Dockerfile to create the image (see the sketch after this list) instead of instantiating the base image and then manually installing the Axigen software;
- define and map, in the Dockerfile, the required ENV, EXPOSE, VOLUME, and CMD directives;
- make use of Docker push / pull to share your image with others, through a public or private Docker registry (image repository).
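As a rough illustration only, here is what such a Dockerfile might look like. This is not Axigen’s official Dockerfile: the package name, ports, paths, and the ENV variable below are assumptions based on the manual install above, and the interactive .run installer would need an unattended mode (or a pre-extracted plain RPM, as assumed here):

FROM centos:latest
# Illustrative environment knob (the variable name is hypothetical, not an actual Axigen setting)
ENV AXIGEN_WORK_DIR=/var/opt/axigen
# Assumes the plain RPM has been extracted from the .run installer beforehand
COPY axigen-10.3.1-1.x86_64.rpm /tmp/
RUN dnf update -y && \
    rpm -i /tmp/axigen-10.3.1-1.x86_64.rpm && \
    rm -f /tmp/axigen-10.3.1-1.x86_64.rpm
# Client and admin ports: SMTP, POP3, IMAP, WebMail, WebAdmin (plain and TLS)
EXPOSE 25 110 143 80 443 9000 9443
# Persist mailboxes, queue, and configuration across container restarts
VOLUME /var/opt/axigen
# The main process must stay in the foreground, as discussed above
CMD ["/opt/axigen/bin/axigen", "--foreground"]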
Now that we’ve created a containerized version of our Axigen package, what we have is a re-packaging of the app that allows deployment in the cloud.
Part 2 of this series is now up! Read on to see how we address actually creating and running this mail server environment with Kubernetes on Google Cloud.
Note: Axigen already provides a fully functional, ready for use Docker image in Docker Hub. You can find all the information and try it out for yourself here.
We also offer an Axigen Helm chart for anyone who wants to deploy a clustered Axigen Mail server on Kubernetes platforms.
For help in gathering prerequisites, performing the installation, configuring your settings, and upgrading or uninstalling your deployment, please visit our dedicated documentation page for the Axigen AxiHelm.
Both AxiHelm and the Axigen Docker Image are intended for actual production use and are adjusted as such.