Author: Michele Bologna

  • Send an email from a Docker container through an external MTA with ssmtp

    I packaged a standard application (a PHP app, or <insert your preferred framework here>) into a Docker container. So far it had been working flawlessly, but then a problem arose: sending an email from the Docker container (the triggering event originates inside the container).

    As you may know, a good Docker container runs only one process: the naive solution in our case would be to run, alongside our PHP process, a second process to handle email exchange (an MTA, e.g. Postfix). Since we are following the best practices for Docker containers, this path is discouraged.

    There are many solutions to this problem.

    The common ground of all the solutions is relying on ssmtp to send emails from the container. ssmtp is a simple relayer that forwards local emails to a remote mailhub, which takes care of the actual delivery.

    Provided that the container distribution ships ssmtp, the installation is straightforward: just add the package during the install phase of the Dockerfile. ssmtp must then be configured to relay every email to an SMTP host, e.g.:

    # cat /etc/ssmtp/ssmtp.conf
    
    # The user that gets all the mails (UID < 1000, usually the admin)
    root=postmaster
    
    # The place where the mail goes. The actual machine name is required;
    # no MX records are consulted. Commonly mailhosts are named mail.domain.com.
    # The example will fit if you are in domain.com and your mailhub is so named.
    # (hostname and port below are an example; adapt them to your setup)
    mailhub=mail.mycompany.biz:587
    
    # Use SSL/TLS before starting negotiation
    UseTLS=Yes
    UseSTARTTLS=Yes
    
    # Fill the following with your credentials (if requested) 
    AuthUser=postmaster@mycompany.biz
    AuthPass=supersecretpassword
    
    # Change or uncomment the following only if you know what you are doing
    
    # Where will the mail seem to come from?
    # rewriteDomain=localhost
    # The full hostname
    # hostname="localhost"
    # Are users allowed to set their own From: address?
    # FromLineOverride=yes
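    The install phase mentioned above might look like this in the Dockerfile (a sketch assuming a Debian-based image; the package name may differ on other distributions):

    ```dockerfile
    FROM debian:stretch

    # Install ssmtp during the image build
    RUN apt-get update && \
        apt-get install -y --no-install-recommends ssmtp && \
        rm -rf /var/lib/apt/lists/*

    # Ship the relay configuration with the image
    COPY ssmtp.conf /etc/ssmtp/ssmtp.conf
    ```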

    All of the solutions I am going to illustrate rely on a custom mailhub that must be configured accordingly.

    Let’s review each solution.

    An external SMTP relay host

    If an external SMTP relay host is available, the solution is to point the mailhub option of ssmtp to that external host.
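    In that case, the only relevant line in ssmtp.conf is the mailhub itself (the hostname and port here are hypothetical; use your relay's):

    ```
    mailhub=smtp.mycompany.biz:587
    ```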

    Another container running the MTA

    The proper way to solve this problem is to run a Docker container just for the MTA itself (personal preference: Postfix). One caveat with this solution: some Linux distributions ship with an MTA running out of the box. If the container host is already running an MTA, the Postfix container cannot publish port 25/tcp [the address is already in use by the MTA running on the host].

    Searching on GitHub, a promising and up-to-date container is eea.docker.postfix. After you deploy the Postfix container, link every container that needs an MTA to it, e.g.:

    # docker run --link=postfix-container my-awesome-app-that-needs-an-mta 

    Each linked container must configure ssmtp to use postfix-container (or the alias defined in the link) as the mailhub option in ssmtp.conf.
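    Assuming the link alias is postfix-container, the relevant ssmtp.conf line becomes:

    ```
    mailhub=postfix-container
    ```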

    Relying on the host MTA

    Premise: the Docker daemon exposes a network adapter to all the containers running on the same host. This adapter usually appears as the docker0 interface:

    # ip a show docker0
    5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
        link/ether 11:22:33:44:55:66 brd ff:ff:ff:ff:ff:ff
        inet 172.17.42.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
        inet6 fe80::11:22ff:fff0:3344/64 scope link 
           valid_lft forever preferred_lft forever

    If the host MTA is listening on the docker0 interface, the containers can relay email through it. No extra configuration is needed on the container itself: just configure ssmtp to use the docker0 IP as the mailhub.
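    With the docker0 address shown above, the ssmtp.conf line would be:

    ```
    mailhub=172.17.42.1
    ```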

    EXTRA: HOW TO CONFIGURE POSTFIX TO LISTEN ON DOCKER INTERFACE (LIKE DOCKER0) AS WELL

    To use the solution described above, the MTA on the host must be configured to listen on the docker0 interface as well. If the MTA in question is Postfix, the configuration is straightforward:

    On the host, open /etc/postfix/main.cf, add the docker0 IP to the inet_interfaces option, and add the subnetwork block of the containers that need to use the host MTA to the mynetworks option:

    # cat /etc/postfix/main.cf 
    
    [...]
    inet_interfaces = 127.0.0.1, 172.17.42.1 
    mynetworks = 127.0.0.0/8 172.17.42.0/24
    [...]

    If Postfix is started at boot by systemd, we need to take care of a dependency: the Docker daemon must start before the Postfix daemon, because Postfix needs to bind to the docker0 IP address.

    To express this dependency, luckily for us, systemd already ships with a device unit that tracks when an interface is up:

    # systemctl | grep docker0
      sys-devices-virtual-net-docker0.device                                                                      loaded active plugged   /sys/devices/virtual/net/docker0

    Postfix must be started after the docker0 interface has been brought up; to express the dependency we must override Postfix's service units (this may vary based on the host distribution):

    # systemctl | grep postfix
      postfix.service                                                                                             loaded active exited    Postfix Mail Transport Agent                                                                          
      postfix@-.service                                                                                           loaded active running   Postfix Mail Transport Agent (instance -)    

    In this case it is enough to override only the Postfix instance service with:

    # systemctl edit postfix@-.service

    Override the unit file by declaring the dependency explicitly:

    [Unit]
    Requires=sys-devices-virtual-net-docker0.device
    After=sys-devices-virtual-net-docker0.device

    Reload systemd with systemctl daemon-reload and restart Postfix with systemctl restart postfix.

    Relying on the host MTA by using host network driver on Docker

    When a container uses the host network driver, it can access the host's network stack and thus its services. If the container host already has an MTA configured, containers can use it by simply pointing to localhost. The syntax to use host networking for the application that needs the host MTA is:

    # docker run --net=host my-awesome-app-that-needs-an-mta

    To configure ssmtp, just point the mailhub to localhost.

    NOTE: using the host network driver has obvious security drawbacks: the container's networking is no longer isolated by Docker but shares the host's network stack, so the container can see the whole host networking and open low-numbered ports like any other root process. Use this networking option only after carefully weighing pros and cons.

  • A comparison between browser features on desktop and mobile iOS

    I am a long time user of Firefox and Chrome on desktops (GNU/Linux and macOS), while I rely on Chrome on my iOS devices.
    Recently there has been some valid criticism of Chrome and its privacy choices, so I began to look around for an alternative to Chrome.

    I identified a list of features I consider must-haves in the browser I use, and decided to give other browsers a try as well.

    To present the results, I designed this table:

    [table id=1 /]

    [1] On Opera iOS, even if “Block ads” was enabled, I could still see AdSense ads.
    [2] Obviously, Safari is not released for GNU/Linux, hence this limitation.
    [3] Via uBlock Origin extension (desktop only).
    [4] Via Firefox Focus WebView extension (iOS only).
    [5] Via Reader View extension (desktop only).

    Considering all the needs expressed in the table, I will continue using Firefox on desktops, and I will switch my mobile browser of choice to Firefox as well.

    What are your experiences with browsers on desktop and mobiles? What browser do you use?

  • Linux: using bind mount to move a subset of root subdirectories to another partition or disk

    I was dealing with a Linux box with two hard disks:

    • /dev/sda: fast hard drive (SSD), small size (~200 GB)
    • /dev/sdb: very big hard drive (HDD), large size (~4 TB)

    The operating system was installed on /dev/sda, leaving /dev/sdb empty. I knew I could create a mount point (e.g. /storage) and mount /dev/sdb on it, but after reading Intelligent partitioning and the recommended Debian partitioning scheme I thought about moving:

    • /var
    • /home
    • /tmp

    to the big hard drive /dev/sdb

    The process described here is completely different from just assigning a mount point to a partition in /etc/fstab: in our solution, we will use one disk (or one partition) to store multiple root subdirectories (/var, /home, /tmp). With the “usual” fstab method, you put one subdirectory on one disk, partition, or volume.

    The solution to this problem is a bind mount: the three original directories will still exist on the root disk (/dev/sda) but they will be empty. Their contents will live on the second disk (/dev/sdb) and, upon mounting, a bind will be created between the root filesystem and the directories on the second disk.

    The process is easy:

    1. Backup your data
    2. Boot from a live distribution (e.g. KNOPPIX)
    3. Mount your hard drives:
      mkdir /mnt/sd{a,b}1
      mount /dev/sda1 /mnt/sda1
      mount /dev/sdb1 /mnt/sdb1
    4. Copy the directories from sda to sdb:
      cp -ax /mnt/sda1/{home,tmp,var} /mnt/sdb1/
    5. Rename the old directories you just copied and create the new mount points:
      mv /mnt/sda1/home /mnt/sda1/home.old
      mv /mnt/sda1/tmp /mnt/sda1/tmp.old
      mv /mnt/sda1/var /mnt/sda1/var.old
      mkdir /mnt/sda1/{home,tmp,var}
    6. Update your fstab with the new locations. Mount the second hard drive:

      /dev/sdb1 /mnt/sdb1 ext4 defaults 0 2

      Then create the bind mounts for the 3 subdirectories you moved:

      /mnt/sdb1/home /home none defaults,bind 0 0
      /mnt/sdb1/tmp /tmp none defaults,bind 0 0
      /mnt/sdb1/var /var none defaults,bind 0 0

    7. umount your hard drives and reboot
    8. Check that everything under /home, /var and /tmp is working as expected. You may also want to clean up and delete /home.old, /var.old, and /tmp.old.

    This process can be repeated for any subdirectory you want to move (except, obviously, /boot).
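    If you want to test a bind mount before committing it to fstab, you can create one manually; a sketch, to be run as root with the disks mounted as in the steps above:

    ```
    # mount --bind /mnt/sdb1/home /home
    # findmnt /home      # verify that /home is now bound to /mnt/sdb1/home
    # umount /home       # undo the test
    ```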

    Closing notes: if you are brave enough:

    • you are not required to boot from a live distribution: you can boot into single-user mode instead (adapt the paths of the guide above accordingly!)
    • you can even skip booting into single-user mode if you are using LVM: just create a new logical volume and copy the subdirectories into it
  • Preventing Docker from manipulating iptables rules

    By default, Docker manipulates iptables rules to provide network isolation:

    Chain FORWARD (policy DROP)
    target prot opt source destination
    DOCKER all -- 0.0.0.0/0 0.0.0.0/0
    
    [...]
    
    Chain DOCKER (1 references)
    target prot opt source destination
    
    

    I don’t mind having my forwarding iptables rules manipulated, but there is a caveat: when you expose a container port (with -p), the port is exposed on every network interface (which means to the whole Internet, too). Let’s look at an example:

    % docker run -d -p 6667:6667 mbologna/docker-bitlbee
    5d0b6eeaec6863151d71b95b53139f9f0818726a0eb3056b39c2e0444f3fbd83
    % docker ps -a
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    5d0b6eeaec68 mbologna/docker-bitlbee "/usr/local/sbin/b..." 4 seconds ago Up 4 seconds 0.0.0.0:6667->6667/tcp eloquent_nightingale
    % iptables -L -n
    [...]
    Chain DOCKER (1 references)
    target prot opt source destination
    ACCEPT tcp -- 0.0.0.0/0 172.17.0.2 tcp dpt:6667
    [...]
    

    The container is listening on 0.0.0.0 and the DOCKER chain accepts every connection, from every interface (think eth0 on a public-facing server).
    This happens even if you have a DROP policy on the INPUT chain.

    You now have two choices:

    • bind the container to the specific interface you want it to listen on (e.g. localhost):
    % docker run -d -p 127.0.0.1:6667:6667 mbologna/docker-bitlbee
    

    This approach might fit some applications; it depends solely on your expected usage of the container.

    • prevent Docker from manipulating iptables rules at all. NOTE: this requires knowledge of iptables and its chains.
      Docker has, in fact, the option "iptables": false to achieve exactly this. Just create (or edit) the file /etc/docker/daemon.json and add:
    {
    "iptables": false
    }
    

    (of course, skip the curly brackets if you’re just adding the option among the others you already have).
    NOTE: contrary to what many articles claim, adding this option to /etc/default/docker or to Docker’s systemd unit file has no effect.
    Restart the Docker daemon and voilà: your containers will no longer be exposed on every interface, but you will need to manipulate your iptables rules explicitly if you want traffic to pass through. E.g., this rule is needed to NAT your containers:

    
    -A POSTROUTING -s 172.17.0.0/24 -o eth0 -j MASQUERADE
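    A minimal manual ruleset to restore connectivity might look like the following sketch; the subnet, interface, and rule placement come from the examples above and must be adapted to your setup:

    ```
    # NAT outbound traffic from the containers
    iptables -t nat -A POSTROUTING -s 172.17.0.0/24 -o eth0 -j MASQUERADE
    # Allow forwarding between the container bridge and the outside world
    iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    ```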
    
    
  • Automatically add SSH keys to SSH agent with GNOME and macOS

    I am using passwordless login via SSH on every box that I administer.
    Of course, my private SSH key is protected with a password that must be provided when accessing the key.
    Modern operating systems use ssh-agent to “link” the user account to the SSH key(s), unlocking the keys as soon as the user logs in. This avoids nagging the user for the SSH key password every time a key needs to be used.
    In my case, I am running GNU/Linux with GNOME and macOS:

    • GNOME, via its Keyring, supports automatic unlocking of SSH keys upon user login. Starting from GNOME 3.28, ed25519 keys are supported in addition to RSA keys (I do not use any other type of SSH key). To add your keys, just invoke ssh-add with your key path:
    ssh-add ~/.ssh/[your-private-key]
    

    You will be asked for your SSH key password, which will be stored in the GNOME Keyring (remember to update it there if you change your SSH key password!).

    • macOS supports storing your SSH key password in the Keychain. You can add your key(s) with:
    ssh-add -K ~/.ssh/[your-private-key]
    

    Starting with Sierra, though, to persist the keys between reboots you need to edit your ~/.ssh/config and add:

    Host *
      UseKeychain yes
      AddKeysToAgent yes
      IdentityFile ~/.ssh/[your-private-key-rsa]
      IdentityFile ~/.ssh/[your-private-key-ed25519]
    

    Now, if you share the same ~/.ssh/config file between GNU/Linux and macOS you will encounter a problem: how is ssh on Linux supposed to know about the UseKeychain option (which is compiled only into macOS’s ssh)?
    A special directive, IgnoreUnknown, comes to the rescue:

    IgnoreUnknown UseKeychain
    UseKeychain yes
    

    In the end, my ~/.ssh/config looks like this:

    Host *
      IgnoreUnknown UseKeychain
      UseKeychain yes
      AddKeysToAgent yes
      IdentityFile ~/.ssh/id_rsa
      IdentityFile ~/.ssh/id_ed25519
      Compression yes
      ControlMaster auto
    [...]
    
  • Accessing remote libvirt on a non-standard SSH port via virt-manager

    Scenario: you are using a remote host as a virtualization host with libvirt and you want to manage it via ”Virtual machine manager” (virt-manager) over SSH.

    But SSH is listening on a non-standard port, and virt-manager does not offer a way to connect to a remote libvirt instance on a non-standard port.

    Fear not, the connection to your libvirt instance is just one command away:

    virt-manager -c 'qemu+ssh://root@:/system?socket=/var/run/libvirt/libvirt-sock'
    

    (make sure passwordless login to the remote host is already set up, for example with SSH keys).
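    For instance, with a hypothetical host vmhost.example.com running SSH on port 2222, the full URI would be:

    ```
    virt-manager -c 'qemu+ssh://root@vmhost.example.com:2222/system?socket=/var/run/libvirt/libvirt-sock'
    ```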

    Bonus note: you can install virt-manager even on macOS (remote connections only, obviously) with homebrew-virt-manager.

  • Replacing Xmarks cross-browser sync service with Eversync

    I have a huge collection of bookmarks gathered over the years, and I have always needed to sync them between my browsers of choice; Xmarks has long been the browser extension I used for this.

    Unfortunately, Xmarks is shutting down on May 1, 2018, so I looked around for alternatives.

    I finally found Eversync: just sign up (take a look at their pricing), install their extension on all your browsers, and keep your bookmarks in sync. The concept is the same as with Xmarks. They also offer a mobile app.

  • Automatically update your Docker base images with watchtower

    I’m an avid user of Docker containers, using base images pulled from the public registry DockerHub. As you may know, Docker containers are based on Docker base images; e.g., I run postgres containers based on the postgres base image.

    It happens that base images get updated by their respective authors (in our case, the Postgres team) and pushed to DockerHub. But your container does not benefit from the update unless you:

    • pull the new image
    • stop and delete your container
    • spawn another container using the new base image (of course, I’m considering a very simple setup without clusters or Kubernetes).

    What if I told you there is a way to automate this process for you?

    Enter watchtower: a Docker container (inception!) that automatically restarts your Docker containers to use the most recently published base image; watchtower regularly checks for updates to the base images and pulls the new version when necessary.

    Configuration is practically nonexistent: just follow watchtower’s instructions and launch the container; after that, you are all set!
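    As a sketch, launching watchtower boils down to giving it access to the Docker socket (the image name below is the one published at the time of writing; check watchtower's instructions for the current one):

    ```
    % docker run -d --name watchtower \
        -v /var/run/docker.sock:/var/run/docker.sock \
        v2tec/watchtower
    ```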

    Did anybody say drawbacks? Yes, there might be drawbacks. What if your container is restarted in the middle of a transaction? What if the new base image is unstable?

    These are all factors you should take into account when deciding whether you want watchtower to update your containers. In my case, for some applications that I run in containers, the comfort of having watchtower handle the updates vastly outweighs the problems it may generate (so far: none).

  • PuntoCom Shop nightmare: do NOT buy from this site!

    I recently bought an iPhone 8 from the PuntoCom Shop website (linked with nofollow).
    A week after the purchase, the phone developed a camera defect, taking photos with a pink halo; the defect is known to Apple, so much so that I am not the only victim of this Apple hardware flaw.

    At an Apple Store I was offered the chance to have the phone repaired under warranty; since the phone had been activated only a few days earlier, the Apple Genius also suggested I could withdraw from the purchase and get a new phone in exchange, but that option was only handled if I had bought the phone from the Apple Store.

    Following that suggestion, I declined the repair and asked PuntoCom Shop to take the phone back and provide a new one, or alternatively to accept a withdrawal, but I was told that:

    “it is not possible to withdraw from the purchase of a malfunctioning product”
    “activating an Apple product voids the right to withdraw from the purchase”

    (as the Apple Genius confirmed, when you buy from the Apple Store you are protected as a true consumer).

    If you still have doubts, I suggest reading the section on withdrawal published on PuntoCom Shop and their terms of use.

    And apparently I am not the only one:

    My advice is therefore: DO NOT BUY FROM THIS SITE, because in case of a hardware problem (attributable to the manufacturer, in this case Apple) YOU WILL HAVE NO RIGHT OF WITHDRAWAL.

  • Reverse engineer a Docker run command from an existing container

    During my usual backup routine, I wanted to recover how a Docker container I started a while ago had been run, especially its docker run command; this is required in case I need to re-run the container while preserving its options (e.g. env variables, ports, etc.).

    Let’s look at an example. I run a mysql Docker container with:

    docker run -m 100M --name testbed-mysql --restart=always -e MYSQL_ROOT_PASSWORD=foo -e MYSQL_DATABASE=bar -e MYSQLPASSWORD=foo -e MYSQL_USER=foo -v /tmp/etc:/etc/mysql/conf.d -v /tmp/mysql:/var/lib/mysql -p 127.0.0.1:7308:3306 -d mysql
    

    If I simply list the running containers, there is no way to display these options:

    % docker ps -a 
    CONTAINER ID        IMAGE                     COMMAND                  CREATED             STATUS              PORTS                       NAMES
    [...]
    a32bdcbb36c7        mysql:latest              "docker-entrypoint..."   2 days ago          Up 2 days           127.0.0.1:7308->3306/tcp    testbed-mysql
    [...]
    

    Displaying these options is possible with docker inspect and some sorcery. Luckily, that sorcery is already packaged for you in two alternative projects:

    Both work really well at reverse engineering the options from running containers, as you can see:

    % runlike testbed-mysql
    docker run --name=testbed-mysql --hostname=a73900fe9af6 --env="MYSQL_ROOT_PASSWORD=foo" --env="MYSQL_DATABASE=bar" --env="MYSQLPASSWORD=foo" --env="MYSQL_USER=foo" --env="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" --env="GOSU_VERSION=1.7" --env="MYSQL_MAJOR=5.7" --env="MYSQL_VERSION=5.7.20-1" --volume="/tmp/etc:/etc/mysql/conf.d" --volume="/tmp/mysql:/var/lib/mysql" --volume="/var/lib/mysql" -p 127.0.0.1:7308:3306 --restart=always --detach=true mysql:latest mysqld
    

    The only option that has not been recovered is the resource constraint I used (see -m 100M above).
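    If you only need one aspect, the same sorcery can be done by hand with docker inspect's Go templates, e.g. to recover the environment variables of the example container:

    ```
    % docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' testbed-mysql
    MYSQL_ROOT_PASSWORD=foo
    MYSQL_DATABASE=bar
    [...]
    ```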