Allegra Task Management Software in Docker

Docker has become popular for deploying applications quickly and hassle-free on self-hosted servers or in a cloud like Microsoft's Azure or Amazon's EC2. With Docker you bundle your application with additional required components, such as a database server and a minimalistic Linux operating system, into one or more “containers”. These containers can run on any server that has Docker installed. For example, for the Allegra task management software we have assembled two containers: one with a MySQL database server, and another with a Tomcat 8 servlet container and Allegra itself. Together the two images require less than 1 GByte of disk space, including the Allegra application. That is far less than a bare-bones virtual machine would need, and it is no problem to run four Allegra instances on a small server with 8 GByte of main memory.

We could “dockerize” Allegra by pouring everything needed into a single image. A smarter approach is to run Tomcat with Allegra in a dedicated container and to provide the database server in a separate image. This has a number of advantages:

  1. We can use any database system; we are not bound to the RDBMS bundled with the application image.
  2. We can run the database container on a different machine for better performance.
  3. We can share a single database container among a number of Allegra instances to build a fail-safe, high-performance cluster.

Let us first get an overview of our files. You can get the allegra.war file from the Allegra download area.
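Reconstructed from the files referenced in the rest of this article, the project directory looks like this:

```
.
├── template.yml          # master copy for docker-compose.yml
├── start.sh              # generates docker-compose.yml and starts the stack
├── Dockerfile-tc         # Tomcat 8 + Allegra image
├── Dockerfile-mysql      # MySQL 5.7 image
├── initdb.d/
│   ├── start.sh          # empty for the time being
│   └── init.sql          # creates the empty Allegra database
├── home/
│   └── Torque.properties
└── webapps/
    └── track.war         # the Allegra web application
```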


To compose a service from a number of other services we use Docker Compose. Here is the template for the docker-compose.yml file, called template.yml. We create docker-compose.yml from this template because Docker's variable substitution does not seem to work for the final top-level “networks” statement.

# docker-compose.yml template for Allegra on Tomcat 8 and MySQL
version: '2'
services:
  tomcat:
#    image: tomcat:8
    build:
      context: .
      dockerfile: Dockerfile-tc
    ports:
     - "${HTTP_PORT}:8080"
     - "${AJP_PORT}:8009"   # Tomcat's AJP connector listens on 8009
    volumes:
#     - ./webapps:/usr/local/tomcat/webapps
     - "./home:/home/trackplus"
    environment:
      TRACKPLUS_HOME: /home/trackplus
    networks:
     - ${NETWORK}
    depends_on:
     - mysql
    links:
     - mysql
  mysql:
#    image: mysql:5.7
#    volumes:
#     - ./db:/var/lib/mysql
    build:
      context: .
      dockerfile: Dockerfile-mysql
    networks:
     - ${NETWORK}
    expose:
     - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: tissi

networks:
  %NETWORK%:
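Why the template trick is needed becomes clear at the bottom of the file: the name of the top-level network cannot be parameterized by Compose itself, so start.sh (shown further below) rewrites the %NETWORK% placeholder with sed. A minimal, self-contained sketch of that step:

```shell
# A stand-in for template.yml, containing only the line sed has to touch
NETWORK=localnet8080
cat > /tmp/template-demo.yml <<'EOF'
networks:
  %NETWORK%:
EOF
# The same substitution start.sh performs on the real template
sed -e s/%NETWORK%/$NETWORK/ /tmp/template-demo.yml > /tmp/compose-demo.yml
cat /tmp/compose-demo.yml   # the placeholder is now "localnet8080:"
```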

And here is the Dockerfile for the database container:

# Dockerfile-mysql for the Allegra database container
FROM mysql:5.7
ADD initdb.d/start.sh /docker-entrypoint-initdb.d/
ADD initdb.d/init.sql /docker-entrypoint-initdb.d/

Both files are located in the initdb.d subdirectory of the directory where docker-compose.yml resides. They are executed on the first startup of the MySQL container. The shell file is empty for the time being. The SQL file creates an empty Allegra database:

create database track default character set utf8;
grant all on track.* to 'trackp'@'%' identified by 'tissi';

And here is the Dockerfile for the Tomcat/Allegra container:

# Dockerfile-tc for the Tomcat/Allegra container
FROM tomcat:8
RUN mkdir /home/trackplus
# RUN chown tomcat:tomcat /home/trackplus
ENV TRACKPLUS_HOME /home/trackplus
ADD ./home/Torque.properties /home/trackplus
ADD ./webapps/track.war /usr/local/tomcat/webapps
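For Allegra to reach the database, the Torque.properties copied into the image must point at the mysql service; inside the Compose network that hostname resolves to the database container. The relevant entries would look roughly like this (a sketch: the exact Torque key names may differ in your Allegra version, while host, database name, user, and password follow from the compose file and init.sql above):

```
# Hypothetical excerpt from home/Torque.properties
torque.dsfactory.track.connection.url = jdbc:mysql://mysql:3306/track
torque.dsfactory.track.connection.user = trackp
torque.dsfactory.track.connection.password = tissi
```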

And here is a little shell script that fires everything up:

#!/bin/bash
# Run a new Allegra  / Docker instance
#
if [ "$1" == "" ]
then
  echo "usage: start.sh <HTTP port> [AJP port]"
  exit 1
fi
export AJP_PORT=$2
if [ "$2" == "" ]
then
  echo "Defaulting AJP port to 8009"
  export AJP_PORT=8009
fi
export HTTP_PORT=$1
export COMPOSE_PROJECT_NAME=track-$HTTP_PORT
export NETWORK=localnet$HTTP_PORT

sed -e s/%NETWORK%/$NETWORK/ template.yml > docker-compose.yml
docker-compose up -d

We can start everything by typing “./start.sh 8080”. This makes our installation reachable at http://localhost:8080/track.

We can look into a running container by attaching a shell to it:

docker exec -i -t track8080_tomcat_1 /bin/bash

We can stop a container with

docker stop track8080_tomcat_1
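Stopping and removing a whole instance by hand gets tedious. A companion script to start.sh could look like this (a sketch: it re-exports the same variables so docker-compose resolves the matching project, and skips the teardown where docker-compose is not installed):

```shell
#!/bin/bash
# stop.sh -- tear down one Allegra / Docker instance started by start.sh
HTTP_PORT=${1:-8080}           # default to the instance on port 8080
export HTTP_PORT
export AJP_PORT=${2:-8009}
export NETWORK=localnet$HTTP_PORT
export COMPOSE_PROJECT_NAME=track-$HTTP_PORT
# Remove the containers and the per-instance network
if command -v docker-compose >/dev/null; then
  docker-compose down
fi
```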

This gives us a good starting point. There are a couple of things that need to be improved:

  1. The tomcat service does not wait for the mysql service to be ready. However, Tomcat appears to start much more slowly than MySQL, so this is not a problem in non-production environments.
  2. We have not provisioned our logging yet.
  3. We have not talked about backup yet.
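For point 1, one remedy is a small wait loop that blocks the Tomcat startup until MySQL accepts connections. A sketch (assumes bash and its /dev/tcp redirection, which the tomcat:8 image provides; the wrapper entrypoint around catalina.sh is hypothetical):

```shell
# Block until the mysql service accepts TCP connections on port 3306
# (relies on bash's built-in /dev/tcp redirection)
wait_for_mysql() {
  local host=${1:-mysql} port=${2:-3306} tries=0
  # Probe in a subshell so the file descriptor is closed automatically
  until (exec 3<>/dev/tcp/"$host"/"$port") 2>/dev/null; do
    tries=$((tries + 1))
    if [ "$tries" -ge 30 ]; then
      echo "giving up waiting for $host:$port" >&2
      return 1
    fi
    sleep 2
  done
  echo "$host:$port is reachable"
}
# A wrapper entrypoint would then run:
#   wait_for_mysql && exec catalina.sh run
```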

So stay tuned!
