TLS-terminated Bitlbee with custom protocols

Five years ago I started a small GitHub project aimed at running Bitlbee seamlessly in a container.

Why Bitlbee?

Back in the day, I relied heavily on IRC for my daily communications, and the plethora of other protocols that were starting to gain traction was too much: I wanted a bridge between my IRC client and the other protocols, so that I could communicate using only my IRC client, without installing any resource-consuming monster (enough said).

Bitlbee was, and still is, the perfect tool to implement that bridge: every protocol is consumable via IRC, provided that a Bitlbee server has been set up and a bridge between Bitlbee and the protocol is available and installed on the Bitlbee server.

I decided to roll my own Bitlbee server running in a Docker container, and to integrate into the build a list of custom protocols that were available as plugins for Bitlbee. By packaging everything into a container, running a ready-to-use Bitlbee server with custom protocols was only a docker pull away.

The container, called docker-bitlbee and published to Docker Hub, started to gain traction (who wants to compile all the plugins nowadays?), and in 2018 it reached 100k downloads on Docker Hub.
It is also the first result for the search query “docker bitlbee” on both DuckDuckGo and Google.

Over time, contributors started to submit pull requests to enable new custom protocols, report problems, and request new features.

Now the container has been downloaded more than 500k times from Docker Hub, and I still use it in my infrastructure to access some protocols over IRC (a notable example: Gitter).

The latest feature, which I just added based on a user request, is TLS termination for Bitlbee via stunnel. There has been some constructive discussion around it, and I am glad that the community is supportive and engaged.

So far, I am very proud of the work that has gone into this side project.

Send an email from a Docker container through an external MTA with ssmtp

I packaged a standard application (think of a standard PHP application, or <insert your preferred framework here>) into a Docker container. It worked flawlessly, until a problem arose: sending an email from the Docker container (the triggering event happens within the container).

As you may know, a good Docker container is a container with only one running process. The naive solution in our case would be to run, in addition to our PHP process, another process to manage the email exchange (an MTA, e.g. Postfix). As we are following the best practices for Docker containers, this path is discouraged.

There are many solutions to this problem.

The common ground for all of the solutions is to rely on ssmtp for sending emails from the container. ssmtp is a simple relay that delivers local emails to a remote mailhub, which takes care of the actual delivery.

Provided that the container distribution ships ssmtp, the installation is straightforward: just add the package during the install phase of the Dockerfile. ssmtp must then be configured to relay every email to an SMTP host, e.g.:

# cat /etc/ssmtp/ssmtp.conf

# The user that gets all the mails (UID < 1000, usually the admin)
root=postmaster

# The place where the mail goes. The actual machine name is required:
# no MX records are consulted. Commonly mailhosts are named mail.domain.com.
# The example will fit if you are in domain.com and your mailhub is so named.
mailhub=mail.mycompany.biz

# Use SSL/TLS before starting the negotiation
UseTLS=Yes
UseSTARTTLS=Yes

# Fill the following with your credentials (if requested)
AuthUser=postmaster@mycompany.biz
AuthPass=supersecretpassword

# Change or uncomment the following only if you know what you are doing

# Where will the mail seem to come from?
# rewriteDomain=localhost
# The full hostname
# hostname="localhost"
# Can the 'From:' header override the default domain?
# FromLineOverride=yes
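Installing ssmtp in the image is then a single instruction in the Dockerfile; a minimal sketch for a Debian-based image (assuming the distribution still packages ssmtp):

# Install ssmtp and clean the apt cache to keep the image small
RUN apt-get update && \
    apt-get install -y ssmtp && \
    rm -rf /var/lib/apt/lists/*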

All three solutions that I am going to illustrate rely on having a mailhub that must be configured accordingly.

Let’s review each solution.

An external SMTP relay host

If an external SMTP relay host is available, the solution is to point the mailhub option of ssmtp to the external SMTP host.
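For example, assuming the relay is reachable at smtp.example.com (a placeholder name), the relevant line in /etc/ssmtp/ssmtp.conf would be:

mailhub=smtp.example.com:587

The solutions that follow differ only in the value of this option: the linked container's name in one case, the docker0 IP (or localhost) in the others.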

Another container running the MTA

The proper way to solve this problem is to run a Docker container just for the MTA itself (personal preference: Postfix). One caveat of this solution: some Linux distributions come with an MTA running out of the box. If the container host is already running an MTA, the Postfix container cannot publish port 25/tcp, as the address is already in use by the MTA running on the host.

Searching on GitHub, a promising and up-to-date container is eea.docker.postfix. After you deploy the Postfix container, link every container that needs an MTA to it, e.g.:

# docker run --link=postfix-container my-awesome-app-that-needs-an-mta 

The container must configure ssmtp to use postfix-container (or whatever name is defined in the link) as the mailhub option in ssmtp.conf.

Relying on the host MTA

Premise: the Docker daemon exposes a bridge interface to all the containers running on the same host. This interface is usually named docker0:

# ip a show docker0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 11:22:33:44:55:66 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::11:22ff:fff0:3344/64 scope link 
       valid_lft forever preferred_lft forever

If the host MTA is listening on the docker0 interface, the containers can relay email to the host MTA. No extra configuration is needed on the container itself: just configure ssmtp to use the docker0 IP as the mailhub.

Extra: how to configure Postfix to listen on the Docker interface (docker0) as well

To use the solution described above, the MTA on the host must be configured to listen on the docker0 interface as well. If the MTA in question is Postfix, the configuration is straightforward.

On the host, open /etc/postfix/main.cf, add the docker0 IP to the inet_interfaces option, and add the subnet of the containers that need to use the host MTA to the mynetworks option:

# cat /etc/postfix/main.cf 

[...]
inet_interfaces = 127.0.0.1, 172.17.42.1 
mynetworks = 127.0.0.0/8 172.17.42.0/24
[...]

If Postfix is set to start at boot via systemd, we need to take care of a dependency: the Docker daemon must be started before the Postfix daemon, as Postfix needs to bind to the docker0 IP address.

Luckily for us, systemd already ships with a device unit that appears when the interface is up, which we can use to express this dependency:

# systemctl | grep docker0
  sys-devices-virtual-net-docker0.device                                                                      loaded active plugged   /sys/devices/virtual/net/docker0

Postfix must be started after the docker0 interface has been brought up, and to express the dependency we must override Postfix's service units (this may vary based on the host distribution):

# systemctl | grep postfix
  postfix.service                                                                                             loaded active exited    Postfix Mail Transport Agent                                                                          
  postfix@-.service                                                                                           loaded active running   Postfix Mail Transport Agent (instance -)    

In this case it is enough to override only the Postfix instance service with:

# systemctl edit postfix@-.service

Override the unit service file by declaring the dependency explicitly:

[Unit]
Requires=sys-devices-virtual-net-docker0.device
After=sys-devices-virtual-net-docker0.device

Reload systemd with systemctl daemon-reload and restart Postfix with systemctl restart postfix.
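To double-check that the dependency works, after a reboot you can verify that Postfix is actually bound to the bridge address (the IP matches the docker0 example shown earlier):

# ss -ltn | grep ':25'

If 172.17.42.1:25 appears among the listening sockets, containers can reach the host MTA.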

Relying on the host MTA by using the host network driver in Docker

When a container is set to use the host network driver, the container can access the host networking stack and thus its services. If the container host already has an MTA configured, the containers can use it by just pointing to localhost. The syntax to use host networking for the application that needs the host MTA is:

# docker run --net=host my-awesome-app-that-needs-an-mta

To configure ssmtp, just point the mailhub to localhost.

NOTE: Using the host network driver has obvious security drawbacks: the container's networking is not isolated by Docker, so the container has access to the host's whole networking stack and can open low-numbered ports like any other root process. Use this networking option only after carefully weighing the pros and cons.

Preventing Docker from manipulating iptables rules

By default, Docker manipulates iptables rules to provide network isolation:

Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER all -- 0.0.0.0/0 0.0.0.0/0

[...]

Chain DOCKER (1 references)
target prot opt source destination

I don’t mind having my forwarding iptables rules manipulated, but there is a caveat: when you expose a container (with -p), the port is exposed on every network interface (which means the whole Internet too). Let’s look at an example:

% docker run -d -p 6667:6667 mbologna/docker-bitlbee
5d0b6eeaec6863151d71b95b53139f9f0818726a0eb3056b39c2e0444f3fbd83
% docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d0b6eeaec68 mbologna/docker-bitlbee "/usr/local/sbin/b..." 4 seconds ago Up 4 seconds 0.0.0.0:6667->6667/tcp eloquent_nightingale
% iptables -L -n
[...]
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:6667
[...]

The container is listening on 0.0.0.0 and the DOCKER chain will accept every connection, from every interface (think eth0 on a public-facing server).
This works even if you have a DROP policy on the INPUT chain, because traffic to containers traverses the FORWARD chain, not INPUT.

You now have two choices:

  • bind the container to the specific interface you want it to listen on (e.g. localhost):
% docker run -d -p 127.0.0.1:6667:6667 mbologna/docker-bitlbee

This approach might fit some applications; it depends solely on your expected usage of the container.

  • prevent Docker from manipulating iptables rules. NOTE: you need working knowledge of iptables and its chains.
    Docker has, in fact, the "iptables": false option to achieve this. You just need to create (or edit) the file /etc/docker/daemon.json and type:
{
    "iptables": false
}

(of course, skip the curly brackets if you’re just adding the option among others you already have).
NOTE: contrary to what many articles suggest, adding this option to /etc/default/docker or to Docker’s systemd unit file will have no effect.
Restart the Docker daemon and voilà: your containers will no longer be exposed on every interface, but you will need to explicitly manage your iptables rules if you want traffic to pass through. For example, this is needed to NAT your containers:

-A POSTROUTING -s 172.17.0.0/24 -o eth0 -j MASQUERADE
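Similarly, with "iptables": false in place, forwarded traffic between the containers and the outside world is no longer accepted automatically. A minimal sketch, assuming docker0 as the bridge and eth0 as the outbound interface:

# Let container traffic out, and the replies back in:
-A FORWARD -i docker0 -o eth0 -j ACCEPT
-A FORWARD -i eth0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT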

Automatically update your Docker base images with watchtower

I’m an avid user of Docker containers, using base images pulled from the public registry Docker Hub. As you may know, Docker containers are based on base images; e.g. I run postgres containers that are based on the postgres base image.

Base images get updated by their respective authors (in our case, the Postgres team) and pushed to Docker Hub, but your container does not benefit from the update unless you:

  • pull the new image
  • stop and delete your container
  • spawn another container using the new base image (of course, I’m considering a very simple setup, without clusters or Kubernetes).

What if I told you that there is a way to automate this process for you?

Enter watchtower: a Docker container (inception!) that automatically restarts your Docker containers to use the most recently published base image; watchtower regularly checks for updates to the base image and pulls the new version if necessary.

Configuration is practically nonexistent: just follow watchtower’s instructions and launch the container. After that, you are all set!
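For reference, the launch boils down to a single command; this is a sketch based on watchtower's README (the image name has changed between watchtower versions, so double-check it). The container needs access to the Docker socket to watch and restart the other containers:

% docker run -d --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower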

Did anybody say drawbacks? Yes, there might be drawbacks. What if your container is restarted in the middle of a transaction? What if the new base image is unstable?

These are all factors to take into account when deciding whether you want watchtower to update your containers or not. In my case, for some of the applications I run in containers, I value the comfort of having watchtower handle the updates enormously compared to the problems it may generate (so far: none).

Reverse engineer a Docker run command from an existing container

During my usual backup routine, I wanted to recover how a Docker container I had started a while ago was originally run, in particular its docker run command; this is required in case I need to re-run that container while preserving its options (e.g. env variables, ports, etc.).

Let’s look at an example. I run a MySQL Docker container with:

docker run -m 100M --name testbed-mysql --restart=always -e MYSQL_ROOT_PASSWORD=foo -e MYSQL_DATABASE=bar -e MYSQLPASSWORD=foo -e MYSQL_USER=foo -v /tmp/etc:/etc/mysql/conf.d -v /tmp/mysql:/var/lib/mysql -p 127.0.0.1:7308:3306 -d mysql

If I just list running containers, there is no way to display these options:

% docker ps -a 
CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                       NAMES
[...]
a32bdcbb36c7        mysql:latest              "docker-entrypoint..."   2 days ago          Up 2 days           127.0.0.1:7308->3306/tcp    testbed-mysql
[...]

Displaying these options is possible with docker inspect and some sorcery. Luckily, that sorcery is already packaged for you in two alternative projects; one of them, runlike, is shown below.

Both of them work really well at reverse engineering the options from running containers, as you can see:

% runlike testbed-mysql
docker run --name=testbed-mysql --hostname=a73900fe9af6 --env="MYSQL_ROOT_PASSWORD=foo" --env="MYSQL_DATABASE=bar" --env="MYSQLPASSWORD=foo" --env="MYSQL_USER=foo" --env="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" --env="GOSU_VERSION=1.7" --env="MYSQL_MAJOR=5.7" --env="MYSQL_VERSION=5.7.20-1" --volume="/tmp/etc:/etc/mysql/conf.d" --volume="/tmp/mysql:/var/lib/mysql" --volume="/var/lib/mysql" -p 127.0.0.1:7308:3306 --restart=always --detach=true mysql:latest mysqld

The only options that are not recovered are the resource constraints I used (see -m 100M above).
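The memory limit is still recorded in the container's metadata, though, so it can be recovered manually with docker inspect and a Go template (the value is reported in bytes: 104857600 bytes = 100M):

% docker inspect -f '{{.HostConfig.Memory}}' testbed-mysql
104857600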

Playing with Docker: tips and tricks to write effective Dockerfiles

Recently I have been playing with Docker containers, and I am sure you already know what Docker is. In this post I will describe what I have learnt while using Docker containers and writing Dockerfiles.

What is Docker?

In a few words: Docker is software to manage and run Linux containers, in which you can deploy an application using Dockerfiles. The main concept here is divide et impera: a Docker container is just like a virtual machine, except that it is very lightweight (it is only a container, so it is not virtualizing an entire machine).

How do I run a Docker container?

  1. Install Docker
  2. docker run <image> <command> (e.g. docker run -ti fedora bash)

Docker will fetch the image (from Docker Hub, a central repository for Docker images) and run the specified command.

What is a Dockerfile?

A Dockerfile is a set of instructions to prepare a Docker container on your own. It basically declares a base image (like a Linux distribution: for example Fedora, Ubuntu, etc.) and applies modifications on top of it (fetch the software you want to run in the container, compile it, run it, etc.). There is a basic set of instructions for Dockerfiles. For all Vagrant users: that’s right, a Dockerfile is just like a Vagrantfile: it states the steps to prepare a machine (except that in this case we are preparing a container).
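To give an idea of its shape, here is a minimal, hypothetical Dockerfile (the package name and daemon flags are illustrative, not taken from my actual builds):

# Start from a base image, install the software, drop privileges, run it
FROM fedora
RUN dnf install -y bitlbee && dnf clean all
EXPOSE 6667
USER daemon
CMD ["/usr/sbin/bitlbee", "-D", "-n"]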

How do I use a Dockerfile?

  1. Install Docker
  2. Download a Dockerfile
  3. (if applicable) edit any files that will be copied from the host into the container. This is useful for configuration files, if you want to customize your Dockerized application
  4. docker build -t containername .

Docker will reproduce the steps to create the container, using the instructions found in the Dockerfile. After it has finished, you can run the Docker container as specified above.
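For example, building and running the hypothetical Dockerfile sketched above (the image name is arbitrary):

% docker build -t my-bitlbee .
% docker run -d -p 6667:6667 my-bitlbee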

Docker: from theory to practice

Ok, theory aside. I decided to create a Dockerfile for two applications because:

  • the application was not available from the official repos (e.g. alice)
  • the version in the official repos is outdated (e.g. bitlbee)

Basically, we will declare two Docker containers in which we fetch our software, customize it to our needs and run it inside the container. Both of them will declare a service, and the container will act as a server for the application (alice/http and bitlbee/irc).

bitlbee Dockerfile

In this case we are using my preferred base image, Fedora; we customize it to be able to fetch the source code of bitlbee and then proceed to compile it. In this Dockerfile we also ADD two configuration files from the host into the image. Again, we launch the service as the daemon user and expose port 6667/tcp. The final size of the Docker image is 359MB.

To use it, connect your IRC client to localhost:6667 (remember to map the correct port, see below).

bitlbee Dockerfile on GitHub.

Tips and caveats

Docker

First of all, some tips I learnt:

  • When running a container, it is always best to launch it with a name (it is easier to reference the container afterwards): docker run --name bitlbee_container mbologna/docker-bitlbee
  • If you want to detach a container, supply the -d option when running it
  • You can inspect a running container by opening a shell inside it: docker exec -ti bitlbee bash
  • Remember to clean up Docker images and containers: show them with docker images and docker ps -a, and remove them with docker rmi and docker rm (a cleanup snippet follows this list)
  • If you are running Docker containers as a service (like in this example), you should remember to set the option --restart=always to make sure that your Docker container is started at boot and whenever it exits abnormally
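A quick cleanup combination I use (the filter flags are standard docker options, so this should work on any reasonably recent version):

# Remove all stopped containers, then all dangling (untagged) images
docker rm $(docker ps -aq -f status=exited)
docker rmi $(docker images -q -f dangling=true)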

Everything in the Docker container is kept apart from the host machine from all points of view (network, filesystem, etc.). Thus:

  • When using Docker containers (in particular when you are running a service inside a Docker container), you can access your container's ports by mapping ports in the container to ports on your host using the -p option: docker run -p 16667:6667 mbologna/docker-bitlbee (this maps port 16667 on the host machine to port 6667 in the container, so the service can be accessed at 16667/tcp on the host machine)
  • When a container is restarted, everything in the container is reset (filesystem included). In order to write non-volatile files, you should supply the -v option, which declares a volume; as with the ports we have seen above, you specify first the directory on the host and then the corresponding directory in the container. This is useful for config files (you want to keep them, right?): docker run -v /home/mbologna/docker-bitlbee/var/lib/bitlbee:/var/lib/bitlbee mbologna/docker-bitlbee

Dockerfiles

  • If you define a VOLUME in the Dockerfile:
    • if the user launches the Docker container without specifying a volume, the VOLUME directory will typically reside under /var/lib/docker/volumes (you can discover it using docker inspect <container>)
    • otherwise, the VOLUME directory will reside in the directory specified with the -v option.

    This exposes a permissions issue on the VOLUME directory. I basically solved it by chowning the volume directory twice, because otherwise one of the two cases described above would not have the correct permissions:

    RUN chown -R daemon:daemon /var/lib/bitlbee*
    VOLUME ["/var/lib/bitlbee"]
    RUN chown -R daemon:daemon /var/lib/bitlbee* # dup: otherwise it won't be chown'ed when using volumes

  • When a final user pulls a container, they basically download your image from Docker Hub. That’s why we want to minimize the Docker image size. How can we do that when preparing a Dockerfile?

    • Every command you launch in a Dockerfile creates a new (intermediate) layer (the final result is the application of every instruction on top of the one above it!) => minimize steps and group commands into single RUN instructions using &&. E.g.:

      RUN touch /var/run/bitlbee.pid && \
      chown daemon:daemon /var/run/bitlbee.pid && \
      chown -R daemon:daemon /usr/local/etc/* && \
      chown -R daemon:daemon /var/lib/bitlbee*

    • After you have compiled/installed your software, be sure to remove whatever is unnecessary to clean up space:

      apt-get clean && \
      apt-get autoremove -y --purge make && \
      rm -fr /var/lib/apt/lists/*

  • Every command you launch in the Docker container is run as root: before launching your software, be sure to drop to the least privileges possible (principle of least privilege). For example, I launch the alice and bitlbee daemons as the daemon user:

    USER daemon
    EXPOSE 8080
    CMD ["/home/alice/alice/bin/alice", "-a", "0.0.0.0"]

Contributing

You can pull my Docker containers from Docker Hub and browse my Dockerfiles on GitHub.

Future work

Two interesting concepts I came across during my research, which I will investigate in the future:

  • CoreOS, a Linux distribution in which every application runs in a separate Docker container
  • Kubernetes, an orchestration layer for Docker containers