Tag: linux

  • Encrypt an existing Linux installation with zero downtime (LUKS on LVM)

During the bi-yearly review of my setup, I realized I was running a Linux machine without full disk encryption. The disk needed to be encrypted ASAP, but I was not willing to reinstall the whole operating system to achieve that.

Solution? I came up with an interesting way to encrypt my existing Linux installation without reinstalling it. And with zero downtime too: while my data was being moved and encrypted, I was still able to use my computer productively. In other words, the process works on the fly!

    Requirements

    There are three requirements for this guide:

    1. The Linux installation already lives in an unencrypted LVM setup
2. Some space to store your data (in another partition or on an external disk) with capacity equal to or greater than that of the LVM partition you are trying to encrypt
3. Make a backup of the hard drive and store it somewhere (another disk, NFS, S3… I suggest using Clonezilla for this purpose). And don’t forget to test your backup.

    Initial situation

    As a starting point, let’s visualize the partitions of my hard disk:

    The interesting part is the red one: the root volume of the Linux operating system. Windows is already encrypted with BitLocker, so these two partitions should not be touched.

    /boot will remain a separate partition for the time being (we will discuss it later).

Once we are finished, the resulting hard disk layout is commonly referred to as LVM on top of a LUKS-encrypted partition.

    Install the required tools

    Since I am already using LVM, the only package I am missing is cryptsetup: find it in your distribution repositories and install it.
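
On openSUSE, for example (my distribution, as noted below), that is a one-liner:

# zypper in cryptsetup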

    Encryption of existing Linux LVM partition

    In a nutshell, what we are going to do in LVM terms:

    1. Add the external disk (/dev/sdb1 in my case) to the VG
    2. Move the PE from the internal disk PV to the external disk PV
    3. Remove the internal disk PV from the VG
    4. Create a LUKS encrypted container in the internal disk, a PV in it, and add the created PV to the VG
    5. Move the PE from the external disk PV to the internal disk PV
    6. Remove the external disk PV from the VG
    7. Configure the bootloader to access the encrypted PV

    In the following sections, we are going to describe every step in detail.

    1. Add the external disk (/dev/sdb1 in my case) to the volume group

    Let’s create a physical volume in the external disk and add it to the volume group (in my case this is called ‘system’):

    # pvcreate /dev/sdb1
    # vgextend system /dev/sdb1
    
We did not lvresize the two LVs, so they kept the same size. Our data is still on /dev/sda5.
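
To double-check that the volume group now spans both disks, you can list the physical volumes and their free space:

# pvs -o pv_name,vg_name,pv_size,pv_free
# vgs system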

    2. Move the physical extents from the internal disk physical volume to the external physical volume

    This is a time-consuming operation: we will transfer all the physical extents from the internal disk physical volume to the external disk physical volume:

    # pvmove /dev/sda5 /dev/sdb1
    

    The command will periodically output the percentage of completion.
The speed of the process depends on multiple factors, among others: hard disk transfer throughput and the amount of data to move.
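
Two handy details (both standard LVM behavior): an interrupted pvmove can be resumed simply by running it again without arguments, and you can check which physical volume currently backs each logical volume:

# pvmove
# lvs -o +devices system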

    3. Remove the internal disk physical volume from the volume group

Now the physical volume on the internal disk is empty: we can remove it from the volume group and then remove the physical volume label from it:

    # vgreduce system /dev/sda5
    # pvremove /dev/sda5
    

Now our data is all on the /dev/sdb1 (external disk) PV.

    4. Create a LUKS encrypted container in the internal disk, a physical volume in it, and add the created physical volume to the volume group

    Our data is completely stored on the physical volume that is in the external disk: we are halfway through.

Let’s wipe the internal disk partition that was holding our unencrypted data. The trick (described in the cryptsetup FAQ) is to map the partition through a throwaway random key and fill the mapping with zeros, which land on disk as random-looking data:

# cryptsetup open --type plain /dev/sda5 container --key-file /dev/urandom
# dd if=/dev/zero of=/dev/mapper/container bs=1M status=progress

Once dd finishes, close the mapping with cryptsetup close container before proceeding.

    Now we need to create an encrypted LUKS container to hold the new internal disk PV.
Different options can be selected, depending on the distribution you are running, the bootloader, and the version of cryptsetup you are using (e.g. LUKS2 requires cryptsetup ≥ 2.0.0, and it became the default format in 2.1.0).

    I choose:

    • XTS cipher
    • 512 bits key size
    • LUKS1 (so I can remove the separate /boot partition later)

A password will be asked for (do not lose it). Note that cryptsetup ≥ 2.1.0 defaults to LUKS2, so I pass --type luks1 explicitly:

# cryptsetup -v --verify-passphrase -s 512 --type luks1 luksFormat /dev/sda5
    

This password will be needed for every subsequent unlocking of the root volume (e.g. at boot), so choose carefully.

Let’s now create a physical volume inside the container and then add it to the volume group:

    # cryptsetup luksOpen /dev/sda5 dm_crypt-1
    # pvcreate /dev/mapper/dm_crypt-1
    # vgextend system /dev/mapper/dm_crypt-1
    

    5. Move the physical extents from the external disk physical volume to the internal disk physical volume

    We are now going to reverse the direction of the data flow: the physical volume in the internal disk is now ready to hold our data again.
    Let’s move the physical extents from the external disk PV to the internal disk physical volume. Again, this is a time-consuming operation that depends on the same factors outlined above:

    # pvmove /dev/sdb1 /dev/mapper/dm_crypt-1
    

    As stated before, the command will periodically output the percentage of completion.

    6. Remove the external disk physical volume from the volume group

Our data is now entirely on the internal disk physical volume (encrypted this time). We need to remove the external disk physical volume from the volume group and then remove the physical volume label from it:

    # vgreduce system /dev/sdb1
    # pvremove /dev/sdb1
    

Success! Our data is now safely encrypted in the LUKS-encrypted /dev/sda5 (internal disk) PV.

It is considered good practice to completely wipe /dev/sdb1 now, as it contained our unencrypted data.
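
A simple (if slow) way to do that, assuming /dev/sdb1 is no longer part of any volume group:

# dd if=/dev/zero of=/dev/sdb1 bs=1M status=progress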

    7. Configure the bootloader to access the encrypted physical volume

    The final step is to inform the bootloader that the root file-system is now on an encrypted partition.
    Depending on your distribution, there are different ways to inform the bootloader.

    My distribution of choice (openSUSE) features GNU GRUB and initrd. In this case, the specific instructions are:

• Create /etc/crypttab and insert the name of the encrypted LUKS container together with the UUID of the partition on the disk (check which one with ls /dev/disk/by-uuid):
    # ls -l /dev/disk/by-uuid/ | grep sda5
    lrwxrwxrwx 1 root root 10 Nov 13 21:27 45a4cbf0-da55-443f-9f2d-70752b16de8d -> ../../sda5
    # echo "dm_crypt-1 UUID=45a4cbf0-da55-443f-9f2d-70752b16de8d" > /etc/crypttab
    
    • Regenerate initrd with:
    # mkinitrd
    • Reinstall GRUB with:
# grub2-mkconfig -o /boot/grub2/grub.cfg && grub2-install /dev/sda
    

    Success! /boot is still unencrypted, though.

initrd will now ask at every boot for the same password you used to create the LUKS container.

    Right now our root volume is encrypted, except for /boot which is left unencrypted. Leaving /boot unencrypted brings some benefits:

    • Unattended LUKS unlock via keyfile (stored, for example, in a USB key)
    • LUKS unlock via the network (authenticate via SSH to provide the LUKS password as implemented in dropbear-initramfs)

One big drawback: an unencrypted /boot is vulnerable to the evil maid attack. But a simple remediation can be put in place: let’s discover it in the next section.

    Optional: remove the separate /boot partition and achieve full disk encryption (FDE)

Depending on your security model, the bootloader you are using, and the LUKS version of your container, it might be more secure to make /boot part of the encrypted volume.
In my case, I decided that I wanted full disk encryption, so I moved /boot into the encrypted volume.

    The idea here is to:

    • Create a copy of /boot into the LVM volume
    # cp -rav /boot /boot-new
# umount /boot
    # mv /boot /boot.old
    # mv /boot-new /boot
    
    • Remove the /boot partition from /etc/fstab:
    # grep -v /boot /etc/fstab > /etc/fstab.new && mv /etc/fstab.new /etc/fstab
    • Modify GRUB to load the boot loader from an encrypted partition:
    # echo "GRUB_ENABLE_CRYPTODISK=y" >>/etc/default/grub
      • Provision a keyfile to avoid typing the unlocking password twice.
        We are now in a particular situation: GRUB needs a password to unlock the second stage of the bootloader (we just enabled it). After the initrd has loaded, it needs the same password to mount the root device.

        To avoid typing the password twice, there is a handy explanation in the openSUSE Wiki: avoid typing the passphrase twice with full disk encryption.
        Be sure to follow all the steps.
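
In rough terms (a sketch, not a substitute for the wiki steps; the keyfile path is illustrative), the idea is to generate a keyfile, enroll it as an additional LUKS key, and reference it from /etc/crypttab so initrd can use it:

# dd if=/dev/urandom of=/root/.root.keyfile bs=1024 count=4
# chmod 600 /root/.root.keyfile
# cryptsetup luksAddKey /dev/sda5 /root/.root.keyfile

Then add the keyfile path as the third field of the dm_crypt-1 line in /etc/crypttab and regenerate initrd.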

      • Install the new bootloader:
    # grub2-mkconfig -o /boot/grub2/grub.cfg && grub2-install /dev/sda
    Full disk encryption: mission accomplished!

    Everything now is in place: all the data is encrypted at rest.
Only one password will be asked for: the one you used to create the LUKS container. GRUB will ask for it every time you boot the system, while initrd will use the keyfile and will not ask at all.

  • How a Terraform + Salt + Kubernetes GitOps infrastructure enabled a zero downtime hosting provider switch

    The switch

    It has been a busy weekend: I switched the hosting provider of my whole cloud infrastructure from DigitalOcean to Hetzner.
    If you are reading this it means that the switch is completed and you are being served by the Hetzner cloud.

    The interesting fact about the switch is that I managed to complete the transition from one hosting provider to another with zero downtime.

    The Phoenix Server

    One of the underlying pillars that contributed to this success story is the concept of Phoenix Server. In other words, at any moment in time, I could recreate the whole infrastructure in an automated way.

    How?

• The resource infrastructure definition is declared using Terraform. By harnessing the Terraform Hetzner provider, I could bring my infrastructure up with a simple terraform apply.
• The configuration definition is managed with Salt, versioned in Git.

At some point in time, I made the big effort of translating all the configurations, tweaks, and personalizations I had made to every part of the infrastructure into a repository of Salt states that I keep updated.

    Two notable examples: I am picky about fail2ban and ssh.

    The result is that, after provisioning the infrastructure, I could configure every server exactly how I want it by simply applying the Salt highstate.

    • The application stack relies on containers: every application runs in its container to be portable and scalable. The orchestration is delegated to Kubernetes.

After all the steps above were applied, I had an identical infrastructure running on Hetzner, while the old infrastructure was still working and serving users.

    DNS switching

At this point, I had just prepared a mirror environment running in the Hetzner cloud. But this environment was not serving any client.

    Why?
    Let’s consider an example to explain the next step.

This website, www.michelebologna.net, is one of the services run by the infrastructure.
Each user was still resolving www.michelebologna.net to the old address: the old infrastructure was still serving it.

To test the new infrastructure, I fiddled with my /etc/hosts and pointed www.michelebologna.net to the new reverse proxy IP (note: this is required to bypass the load balancers): I verified it was working, which meant I was ready for the switch.
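
For illustration, such an override is a single line in /etc/hosts (the IP below is a documentation address, not the real one):

203.0.113.10 www.michelebologna.net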

The switch happened at the DNS level: I simply changed the CNAME for the www record from the old reverse proxy to the new one. Thanks to the proper naming scheme for servers I have been using, the switch was effortless.
After the switch, I quickly opened a tail on the logs of the new reverse proxy: as soon as the upstream DNS resolvers picked up the updated record, users started accessing the website via Hetzner. Success!
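
You can watch the propagation yourself by querying the record directly, or against a specific public resolver (the resolver choice is arbitrary):

dig +short www.michelebologna.net CNAME
dig +short www.michelebologna.net @1.1.1.1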

Trivia: after 5.5 years, the old reverse proxy was shut down. In memory of it, here are its uptime records, with an astonishing availability of 99.954%!

         #               Uptime | System                
    ----------------------------+-----------------------
         1   112 days, 18:33:34 | Linux 4.4.0-tuned
         2   104 days, 21:00:22 | Linux 4.15.0-generic
         3    85 days, 19:08:32 | Linux 3.13.0-generic
         4    78 days, 19:04:49 | Linux 4.4.0-tuned
         5    71 days, 13:01:09 | Linux 4.13.0-lowlaten
         6    66 days, 04:42:44 | Linux 4.15.0-generic
         7    62 days, 15:49:14 | Linux 3.19.0-generic
         8    62 days, 00:52:09 | Linux 4.15.0-generic
         9    56 days, 22:21:20 | Linux 3.19.0-generic
        10    53 days, 16:34:11 | Linux 4.2.0-highmem
    ----------------------------+-----------------------
        up  1989 days, 03:46:34 | since Tue Oct 28 14:28:05 2014
      down     0 days, 22:00:33 | since Tue Oct 28 14:28:05 2014
       %up               99.954 | since Tue Oct 28 14:28:05 2014
    

After updating the DNS records for all the other services, I kept checking whether anything was still being accessed through the old infrastructure. After some days of minimal activity there, I decided to destroy it.

    Caveats with DNS

There are some things that I learned while doing this kind of transition. Or maybe things I learned last time but did not write down, so I am using this space as a reminder for next time.

    • A DNS wildcard record (*.michelebologna.net) that gets resolved to a hostname (a catch-all record) can generate weird results if you are running a machine that has search michelebologna.net in its resolv.conf
• Good hosting providers offer the ability to set a reverse DNS for every floating or static IP address of every cloud instance. The reverse DNS must match the hostname your mail server announces (myhostname in Postfix)
    • With email hosting, set up DKIM and publish SPF, DKIM, and DMARC records in the DNS
    • The root record (@) must not be a CNAME record, but it must be an A/AAAA record
• Linux: using bind mounts to move a subset of root subdirectories to another partition or disk

I was dealing with a Linux box with two hard disks:

    • /dev/sda: fast hard drive (SSD), small size (~200 GB)
    • /dev/sdb: very big hard drive (HDD), large size (~4 TB)

The operating system was installed on /dev/sda, so /dev/sdb was empty. I knew I could create a mount point (e.g. /storage) and mount /dev/sdb on it, but after reading Intelligent partitioning and the recommended Debian partitioning scheme I thought about moving:

    • /var
    • /home
    • /tmp

    to the big hard drive /dev/sdb

The process described here is different from just pointing a mount point at a partition in /etc/fstab: in our solution, we will use one disk (or one partition) to store multiple root subdirectories (/var, /home, /tmp). With the usual fstab method, you put one subdirectory on one disk, partition, or volume.

The solution to this problem is a bind mount: the three original directories will still exist on the root disk (/dev/sda) but they will be empty. The actual contents will live on the second disk (/dev/sdb) and, upon mounting, a bind will be created between the root filesystem and the directories on the second disk.

    The process is easy:

    1. Backup your data
    2. Boot from a live distribution (e.g. KNOPPIX)
3. Mount your hard drives:
   mkdir /mnt/sd{a,b}1
   mount /dev/sda1 /mnt/sda1
   mount /dev/sdb1 /mnt/sdb1
4. Copy the directories from sda to sdb:
   cp -ax /mnt/sda1/{home,tmp,var} /mnt/sdb1/
5. Rename the old directories you just copied and create the new mount points:
   mv /mnt/sda1/home /mnt/sda1/home.old
   mv /mnt/sda1/tmp /mnt/sda1/tmp.old
   mv /mnt/sda1/var /mnt/sda1/var.old
   mkdir /mnt/sda1/{home,tmp,var}
6. Update your fstab with the new locations. Mount the second hard drive:

      /dev/sdb1 /mnt/sdb1 ext4 defaults 0 2

      Then create the bind mounts for the 3 subdirectories you moved:

      /mnt/sdb1/home /home none defaults,bind 0 0
      /mnt/sdb1/tmp /tmp none defaults,bind 0 0
      /mnt/sdb1/var /var none defaults,bind 0 0

    7. umount your hard drives and reboot
    8. Check that everything under /home, /var and /tmp is working as expected. You may also want to clean up and delete /home.old, /var.old, and /tmp.old.
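
A quick sanity check after the reboot: findmnt should show each of the three directories backed by a bind mount from the second disk:

findmnt /home
findmnt /tmp
findmnt /var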

    This process can be repeated for any subdirectory you want to move (except, obviously, /boot).

Closing notes, if you are brave enough:

• you are not required to boot from a live distribution: just boot into single-user mode (adapt the paths of the guide above, though!)
• you can even skip booting into single-user mode if you are using LVM: just create a new logical volume and copy the subdirectories into it
  • Automatically add SSH keys to SSH agent with GNOME and macOS

    I am using passwordless login via SSH on every box that I administer.
    Of course, my private SSH key is protected with a password that must be provided when accessing the key.
Modern operating systems use ssh-agent to “link” the user account to the SSH key(s), unlocking the SSH key as soon as the user logs in. In this way, they avoid nagging the user for the SSH key password every time the key needs to be used.
    In my case, I am running GNU/Linux with GNOME and macOS:

• GNOME, via its Keyring, supports the automatic unlocking of SSH keys upon user login. Starting from GNOME 3.28, ed25519 keys are supported as well as RSA keys (I do not use any other type of SSH key). To add your keys, just invoke ssh-add and supply your key path:
    ssh-add ~/.ssh/[your-private-key]
    

you will be asked for your SSH key password, which will be stored in the GNOME Keyring (remember that if you ever change your SSH key password!).

• macOS supports storing your SSH key password in the Keychain. You can add your key(s) with:
    ssh-add -K ~/.ssh/[your-private-key]
    

Starting from Sierra, though, to persist the keys between reboots you need to add to your ~/.ssh/config:

    Host *
      UseKeychain yes
      AddKeysToAgent yes
      IdentityFile ~/.ssh/[your-private-key-rsa]
      IdentityFile ~/.ssh/[your-private-key-ed25519]
    

Now, if you share the same ~/.ssh/config file between GNU/Linux and macOS, you will encounter a problem: how is ssh on Linux supposed to know about the UseKeychain option (which is compiled only into macOS’s ssh)?
A special instruction, IgnoreUnknown, comes to the rescue:

    IgnoreUnknown UseKeychain
    UseKeychain yes
    

In the end, my ~/.ssh/config looks like:

    Host *
      IgnoreUnknown UseKeychain
      UseKeychain yes
      AddKeysToAgent yes
      IdentityFile ~/.ssh/id_rsa
      IdentityFile ~/.ssh/id_ed25519
      Compression yes
      ControlMaster auto
    [...]
    
  • Accessing remote libvirt on a non-standard SSH port via virt-manager

Scenario: you are using a remote host as a virtualization host with libvirt, and you want to manage it via “Virtual Machine Manager” (virt-manager) over SSH.

But SSH is listening on a non-standard port, and virt-manager does not offer an option to connect to a remote libvirt instance on a non-standard port.

Fear not, the connection to your remote libvirt instance is just one command away (fill in your host and SSH port):

virt-manager -c 'qemu+ssh://root@<host>:<port>/system?socket=/var/run/libvirt/libvirt-sock'
    

(make sure to have passwordless login to the remote host already set up, for example with SSH keys).

Bonus note: you can install virt-manager even on macOS (obviously with remote support only) via homebrew-virt-manager.

  • Secure your SSH server against brute-force attacks with Fail2ban

    The problem: SSH can be brute-forced

    I usually leave an SSH server on a dedicated port on every server I administer and, as you may recall, I even linked two well-written guides to properly configure and harden SSH services.

Now, the Internet is a notoriously bad place: scanners and exploiters have always been there, but brute-forcers are on the rise and SSH is one of the services most heavily targeted by them. Let’s gather some data:

    quebec:/var/log # ag "Invalid user" auth.log.2 auth.log.3 | wc -l
    4560
    quebec:/var/log # head -n 1 auth.log.3 | cut -d " " -f 1-3
    May  8 07:39:01
    quebec:/var/log # tail -n 1 auth.log.2 | cut -d " " -f 1-3
    May 21 07:39:01
    

So, even though my SSH allows only PubkeyAuthentication, in the timespan of two weeks there have been 4560 brute-force attempts (~325 attempts per day).

    This is annoying and potentially insecure, depending on your configuration. What can we do?

    The solution: using Fail2ban

I have recently read some posts about this problem, and luckily (for us) there are multiple solutions: the most popular one is Fail2ban. To cut a long story short, the idea behind Fail2ban is to watch the log files of the monitored services and keep track of which IP addresses are attempting brute-force attacks. If the same IP address causes a number of bad events within the specified time frame, Fail2ban bans that IP (using netfilter/iptables) for a configured amount of time.

    So I just need to install Fail2ban and I am ready to go:

    # zypper in fail2ban
    

    NO. NO. NO. You will not be protected against brute-force attacks if you just install it without configuring it.

    You MUST configure it!

    Let’s take a step back. The core of Fail2ban is the configuration:

    # "bantime" is the number of seconds that a host is banned.
    bantime  = 600
    
    # A host is banned if it has generated "maxretry" during the last "findtime"
    # seconds.
    findtime = 600
    maxretry = 5
    

These are the default values. And brute-forcers know them, so they can time their attempts so as not to exceed these limits (or resume their attempts after the bantime expires). Again, let’s gather some data. I noticed a frequent brute-forcer IP and followed its activity:

    quebec:/var/log # ag "Invalid user" auth.log.2 auth.log.3 | ag <IP> | cut -d " " -f 1-3
    
    auth.log.2:8106:May 21 03:34:22
    auth.log.2:8112:May 21 03:43:41
    auth.log.2:8116:May 21 03:53:00
    auth.log.2:8120:May 21 04:02:18
    auth.log.2:8126:May 21 04:11:47
    auth.log.2:8132:May 21 04:21:08
    

Can you believe it? Each attempt was timed to sit just on the edge of the 600-second window!

    Key concept: outsmarting brute-forcers

The key concept here is to set your own personalized values for these settings, in order to outsmart the brute-forcers (which is easily done). To do so, do not modify Fail2ban’s default config file (/etc/fail2ban/jail.conf) but rather override the defaults in another config file, which you can create with:

quebec:/etc/fail2ban # awk '{ printf "# "; print; }' /etc/fail2ban/jail.conf | sudo tee /etc/fail2ban/jail.local
    

    Your customizations have to be defined in /etc/fail2ban/jail.local, so your selected bantime, findtime and maxretry must go there.
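
For example, an illustrative jail.local override (values picked to be stricter and less predictable than the defaults brute-forcers expect) could be:

[DEFAULT]
bantime  = 86400
findtime = 3600
maxretry = 3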

    Further customization

    Some useful findings:

• you can selectively whitelist a group of IPs, hosts, subnets, etc. so that you are not banned when accessing services from those addresses (e.g. ignoreip = <VPN subnet>/8 <another VPN subnet>/16)
• you can receive an email, complete with a whois of the offender, every time someone is banned (a nice relief for the system administrator)
• if you are using WordPress, you can extend Fail2ban to monitor failed WordPress login attempts with a WordPress plugin (did I mention that Fail2ban does not only monitor sshd? It also monitors nginx, apache, vsftpd and other services)
• you can have Fail2ban automatically send the banned IPs to a shared blacklist (I have never used this)

    Disadvantages

Everything seems perfect now, but what are the disadvantages? Given that Fail2ban reasons in terms of banning single IPs, it cannot protect you against distributed brute-force attacks. In that case Fail2ban is pretty useless, and other solutions, which vary from case to case, should be implemented.

  • Packaging software for Debian/Ubuntu: eclipse

Eclipse is my (Java, Python, Ruby, XML, <insert any other text format here>) editor of choice, and it has been for many years. One thing that bothers me is that the Eclipse package in Ubuntu is outdated: so, instead of using apt, I had to resort to download/unpack/copy/create links to install it. Those days are over, though.

    In fact, I have been introduced to Debian packaging and I contributed to the Debian package of the latest version of the Eclipse IDE (4.5.1). EDIT: Repository has been removed as obsolete.

    This package is really simple (and in fact I used it to learn the packaging process for Debian/Ubuntu). How did I learn it? Recommended reading: How to package for Debian.

In the following days I will try to publish a PPA with the built package. In the meanwhile, if you want to try to build the package on your own, just:

1. git clone -b eclipse_4.5.1
2. cd eclipse-ide-java
3. cd eclipse-ide-java_4.5.1
4. debuild -i -us -uc -b
5. cd ..

Now you have a *.deb package waiting to be installed (via dpkg -i): upon installation it will fetch (via wget) the latest version of Eclipse, unpack it, copy it, and create links.

  • Playing with Docker: tips and tricks to write effective Dockerfiles

    Recently I have been playing with Docker containers, and I am sure you already know what Docker is. In this post I will describe what I have learnt while using Docker containers and preparing Dockerfiles.

    What is Docker?

In a few words: Docker is software to manage and run Linux containers, in which you can deploy an application using Dockerfiles. The main concept here is divide et impera: a Docker container is just like a virtual machine, except that it is very lightweight (it is only a container, so it is not virtualizing an entire machine).

    How do I run a Docker container?

    1. Install Docker
2. docker run <container> <command> (e.g. docker run -ti fedora bash)

    Docker will fetch the container (from DockerHub, a central repo for Docker containers) and run the specified command.

    What is a Dockerfile?

A Dockerfile is a set of instructions to prepare a Docker container on your own. It basically declares a base image (a Linux distribution: for example Fedora, Ubuntu, etc.) and applies modifications to that image (fetch the software you want to run in the container, compile it, run it, etc.). There is a basic set of instructions for Dockerfiles. For all Vagrant users: that’s right, a Dockerfile is just like a Vagrantfile: it states the steps to prepare a machine (except that in this case we are preparing a container).
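
To make this concrete, here is a minimal, purely illustrative Dockerfile (the base image and packaged service are just examples, not taken from my repositories):

FROM fedora
# install a sample service from the distribution repositories
RUN yum install -y nginx && yum clean all
EXPOSE 80
# run in the foreground: the container lives as long as this process
CMD ["nginx", "-g", "daemon off;"]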

    How do I use a Dockerfile?

    1. Install Docker
    2. Download a Dockerfile
    3. (if applicable): edit any files that will be copied from host to container. This is useful for configuration files if you want to customize your Dockerized application
    4. docker build -t containername .

Docker will reproduce the steps to create the container, using the instructions found in the Dockerfile. After it has finished, you can run the Docker container as described above.

    Docker: from theory to practice

    Ok, theory aside. I decided to create a Dockerfile for two applications because:

    • the application was not available from the official repos (e.g. alice)
    • the version in the official repos is outdated (e.g. bitlbee)

Basically, we will declare two Docker containers in which we fetch our software, customize it to our needs, and run it inside the container. Both of them will declare a service, and the container will serve as a server for the application (alice/http and bitlbee/irc).

    bitlbee Dockerfile

In this case we start from my preferred base image, Fedora, customize it so that it can fetch and compile the source code of bitlbee, and then compile it. In this Dockerfile we also ADD two configuration files from the host to the container. We then launch the service as the daemon user and expose port 6667/tcp. The final size of the Docker container image is 359MB.

    To use it, connect your IRC client to localhost:6667 (remember to map the correct port, see below).

    bitlbee Dockerfile on GitHub.

    Tips and caveats

    Docker

    First of all, some tips I learnt:

• When running a container, it is always best to launch it with a name (it is easier to reference the container afterwards): docker run --name bitlbee_container mbologna/docker-bitlbee
    • If you want to detach a container, supply -d option when running
    • You can inspect a running container by attaching to it: docker exec -ti bitlbee bash
• Remember to clean up Docker images and containers: list them with docker images and docker ps -a; remove them with docker rmi and docker rm
    • If you are running Docker containers as a service (like in this example), you should remember to set the option --restart=always to make sure that your Docker container is started at boot and whenever it exits abnormally

Everything in a Docker container is kept apart from the host machine from all points of view (network, filesystem, etc.). Thus:

• When using Docker containers (in particular when running a service inside one), you can access the container’s ports by mapping them to ports on your host with the -p option: docker run -p 16667:6667 mbologna/docker-bitlbee (this maps port 16667 on the host machine to port 6667 in the container, so the service can be reached at 16667/tcp on the host)
• When a container is restarted, everything in the container is reset (filesystem included). In order to keep non-volatile files, you should supply the -v option, which declares a volume; as with the ports above, you specify first the directory on the host and then the corresponding directory in the container. This is useful for config files (you want to keep them, right?): docker run -v /home/mbologna/docker-bitlbee/var/lib/bitlbee:/var/lib/bitlbee mbologna/docker-bitlbee (see the combined example after this list)
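
Putting the options above together, a sketch of a full service-style invocation (image name and volume path as used in this post) might be:

docker run -d --name bitlbee_container --restart=always \
    -p 16667:6667 \
    -v /home/mbologna/docker-bitlbee/var/lib/bitlbee:/var/lib/bitlbee \
    mbologna/docker-bitlbee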

    Dockerfiles

• If you define a VOLUME in the Dockerfile:
  • if the user launches the Docker container without specifying a volume, the VOLUME directory will typically reside under /var/lib/docker/volumes (you can discover it using docker inspect <container>)
  • otherwise, the VOLUME directory will reside in the directory specified with the -v option.

  This exposes a permissions issue on the VOLUME directory. I basically solved it by chowning the volume directory twice, once before and once after the VOLUME instruction, otherwise one of the two cases described above would not have the correct permissions:

  RUN chown -R daemon:daemon /var/lib/bitlbee*
  VOLUME ["/var/lib/bitlbee"]
  RUN chown -R daemon:daemon /var/lib/bitlbee* # dup: otherwise it won't be chown'ed when using volumes

• When a final user pulls a container, it basically downloads your image from DockerHub. That’s why we want to minimize the Docker image size. How can we do that when preparing a Dockerfile?
• Every command you launch in a Dockerfile creates a new (intermediate) Docker container (the final result will be the application of every instruction on top of the instruction above it!) => minimize steps and group commands into a single RUN using &&. E.g.:

        RUN touch /var/run/bitlbee.pid && \
        chown daemon:daemon /var/run/bitlbee.pid && \
        chown -R daemon:daemon /usr/local/etc/* && \
        chown -R daemon:daemon /var/lib/bitlbee*

• After you have compiled/installed your software, be sure to remove anything unnecessary to free up space:

  apt-get clean && \
  apt-get autoremove -y --purge make && \
  rm -fr /var/lib/apt/lists/*

• Every command you launch in the Docker container runs as root: before launching your software, be sure to drop to the least privileges possible (principle of least privilege). For example, I launch the alice and bitlbee daemons as the daemon user:

  USER daemon
  EXPOSE 8080
  CMD ["/home/alice/alice/bin/alice", "-a", "0.0.0.0"]

    Contributing

You can pull my Docker containers from DockerHub, and browse my Dockerfiles on GitHub.

    Future work

Two interesting concepts I came across during my research, which I will investigate in the future:

• CoreOS, a Linux distribution in which every application runs in a separate Docker container
    • Kubernetes, an orchestration layer for Docker containers
  • Hardening services: let’s review our config files

    It’s hardening Sunday here: I reviewed the config files of my main daemons (nginx, openvpn, tinc, sshd) with the help of two resources that I want to share with you, fellow readers.

First of all, a guide dedicated exclusively to hardening ssh: from using public key authentication only (I strongly encourage it!) to the selection of which ciphers ssh should use (there is theory behind it, so read it!).

The second is a guide for hardening all kinds of services, from web servers to VPN concentrators (divided by vendor), and well worth reading. Every option is very well detailed and discussed, for all you nitpickers like me.

So, set aside two hours, read the theory, then adopt the changes you think would benefit your setup. Happy hardening!

Update: if you are using OSX, do not use the default ssh toolset that ships with OSX: it is not updated and it does not have ssh-copy-id to distribute your public key among your ssh servers. More than that, OSX’s default ssh does not support ecdsa, which is the main crypto algorithm the linked guides are using.
Solution: brew install homebrew/dupes/openssh and adjust your PATH accordingly.

  • HP 6730b and fan at full speed after suspend (Fedora, Ubuntu, openSUSE)

It seems that with kernels 3.9 onwards there are some issues with fan speed on the HP 6730b notebook. I tried with Fedora 22 (my main distribution of choice), openSUSE Tumbleweed, and Ubuntu 15.04.

The problem occurs only when the system wakes up after a sleep/suspend: the fans spin at full speed indefinitely, with no remedy (apart from a reboot/shutdown). This is a problem with ACPI. To solve it, if you’re using Fedora like me, create the file /etc/pm/sleep.d/99fancontrol.sh and fill it with the following:

    case "$1" in
    hibernate|suspend)
    # Stopping is not required.
    ;;
    thaw|resume)
    # In background.
    ls /sys/devices/virtual/thermal/cooling_device*/cur_state | while read A; do echo 1 > $A; echo 0 > $A; done
    ;;
    *) exit $NA
    ;;
    esac

and, of course, set it executable with chmod +x.
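
Concretely, using the path created above:

# chmod +x /etc/pm/sleep.d/99fancontrol.sh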

    It basically flips on and off the current state of the cooling devices: with this script you won’t have the noisy fans at full speed on resume.