Say you’re the kind of person who hates to throw away equipment, or just happens to have a few laptops lying around for some reason. Say you also want to experiment with running your own little data centre, for whatever reason. Maybe you want to try the things you’ve learned at work, or read about on the internet. In any case, if you have a spare laptop / NUC / whatever and the will to experiment, this guide is for you.

Setup

There are many ways to set up microservices at home, and the right one depends on your equipment and your use case. I would not recommend Kubernetes for someone who is only using a single machine, unless the plan is to scale and you just want to get the initial overhead out of the way. While working on my setup, I tried k3s, microk8s and docker-compose. For me, docker-compose ended up being the sweet spot in terms of flexibility and simplicity. It only requires a single configuration file, and websites like https://linuxserver.io have thorough documentation for docker-compose, but not for Kubernetes.

So, where to start?

Goal

If you don’t know where you are going, any road will get you there.

  • Some cat

The initial goal of this exercise is to have a typical microservice architecture running on your home network. To do this, you need a gateway: a service that receives all the traffic to your IP address and decides where to route each request. There are plenty of gateways you can use; the biggest players I know of are nginx, Caddy and Traefik. nginx is probably the oldest and most widely used of the three. As such, it is probably the best option if you want something reliable and battle-tested that supports the most obscure use cases. It can be a pain to configure, though, and is pretty verbose. Caddy is a neat alternative that requires very little configuration. It can be configured through the command line or via a Caddyfile, and it supports SSL certificates through Let’s Encrypt out of the box - you just have to provide your domain in the configuration. For a while, I used Caddy as my gateway.

However, I ended up using Traefik. While it may not be as battle-tested as nginx, or as easy to set up as Caddy, it is very nice for microservices. You see, it can automatically detect which services are running, and it can be configured using container labels. This allowed me to put almost all of my configuration in my docker-compose.yml file, and run my gateway as its own container. When I want the gateway to forward requests to a new service, I can just edit that file, add the service definition and some labels, and I’m done. No need to find the reverse-proxy configuration and add the service manually.

If you want a setup like this, you will have to begin by choosing what you want to run. Typically, I would set up the gateway first. Here is an example from my own docker-compose.yml file.

# docker-compose.yml
version: "2.4"

services:

  traefik:
    image: "traefik:v2.4"
    container_name: "traefik"
    networks:
      - default
    command:
      - "--api.insecure=true"
      - "--accesslog=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedByDefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
      - "--entrypoints.web.http.redirections.entryPoint.scheme=https"
      - "--entrypoints.web.http.redirections.entryPoint.permanent=true"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.http.acme.tlschallenge=true"
      - "--certificatesresolvers.http.acme.email=my_email@example.com"
      - "--certificatesresolvers.http.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80/tcp"
      - "443:443/tcp"
      - "8080:8080/tcp"
    volumes:
      - "./logs:/logs"
      - "./letsencrypt:/letsencrypt"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

This configuration defines a single service: our gateway. You will see that it exposes a few ports. Remember, in docker-compose files, we are defining a mapping from our host port to the container port, i.e., HOST:CONTAINER/protocol. Port 80 is the default HTTP port, and port 443 is the default HTTPS port. Port 8080 is where I access the dashboard of my Traefik instance.

The command section configures which flags we pass to Traefik on startup. --api.insecure=true is what allows me to access the dashboard on port 8080. If you enable this option yourself, make sure that your router does not forward this traffic outside your network, or you’re in for a heap of trouble. When using docker, you have to turn on the docker provider. The --providers.docker.exposedByDefault=false option tells Traefik that it should only make services available if I explicitly say it should. This is a small way to increase the security of your local network by not exposing something by accident. Note that I’m also mapping the docker.sock of the host machine to a read-only (:ro) docker.sock in the container. This allows Traefik to monitor the running containers and read their labels, so that it can adjust its routing rules on the fly.

The entrypoints define the ways that Traefik can be accessed externally. You’ll note that I redirect all traffic from port 80 to port 443, and that I set up ACME, the protocol Let’s Encrypt uses to issue certificates. This is only necessary if you want SSL certificates for your services, which I do, because I want to be able to Chromecast from my local Jellyfin instance to my TV.

Logs and Let’s Encrypt data are also mapped to the host machine for persistence; anything stored only inside a container is lost whenever the container needs to be recreated, such as when it is updated.
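One detail worth preparing up front: Traefik refuses to use an acme.json that is readable by anyone but its owner. You can pre-create the host-side directories from the volume mappings above like this:

```shell
# Create the host-side directories mapped into the container
mkdir -p logs letsencrypt

# Pre-create the certificate store with owner-only permissions,
# which Traefik requires before it will write certificates to it
touch letsencrypt/acme.json
chmod 600 letsencrypt/acme.json
```

If you skip this, Traefik will create the file itself, but a stray `chmod` or copy with looser permissions will make it error out on startup.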

That’s it! You now have a gateway that reverse-proxies incoming requests to whatever services you want. Unfortunately, you don’t have any services to forward requests to yet.
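Before wiring up anything real, you could sanity-check the routing with Traefik’s whoami demo container - a tiny web server that echoes the request back at you. A sketch of such a service, where the hostname is a placeholder for your own domain:

```yaml
  whoami:
    image: "traefik/whoami"
    container_name: "whoami"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=http"
```

If a request to that hostname comes back with the request headers echoed, your labels, entrypoints and certificate resolver are all working.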

Jellyfin

I can only share configurations which I know work. Jellyfin is a nice media server that you can use to stream music and videos, and which I use for my personal media at home.

I used the following configuration to set it up:

# docker-compose.yml

networks:
  docker_vlan:
    driver: macvlan
    driver_opts:
      parent: eth0
    ipam:
      driver: default
      config:
        - subnet: 192.168.1.0/24
          ip_range: 192.168.1.0/24
          gateway: 192.168.1.1

volumes:
  jellyfin-config: {}
  jellyfin-cache: {}

services:

  # traefik...

  jellyfin:
    image: ghcr.io/linuxserver/jellyfin
    container_name: jellyfin
    env_file:
        - /etc/homelab/env.default
    volumes:
      - jellyfin-config:/config
      - jellyfin-cache:/cache
      - /srv/media:/media:ro
    restart: unless-stopped
    ports:
      - "8096:8096/tcp"
    networks:
      default:
      docker_vlan:
        ipv4_address: 192.168.1.240
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)"
      - "traefik.http.routers.jellyfin.entrypoints=websecure"
      - "traefik.http.routers.jellyfin.tls.certresolver=http"

First, we add support for macvlan, which allows us to treat the container as if it were its very own host on the local network. We do this because Jellyfin works best with host-level network access, for features such as DLNA and client auto-discovery. Since we don’t want to pollute the ports on our actual host, we use this workaround and treat the container as a separate network entity instead. However, it also needs to share a network with Traefik in order for the forwarding to work, which is why we make it a member of both the default network and the docker_vlan, which we configure at the beginning of the file.

We use the linuxserver Jellyfin image. The maintainers recommend that you explicitly set user, group and time zone information, which I put into my env.default file for sharing across several services. The user and group information helps set appropriate permissions on the filesystem, especially on volumes which are mounted from the host and accessible from within the container. Jellyfin needs at least read access to the media files, which are stored on the host computer.
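For the linuxserver images, that env file might look something like this - the values are examples, not defaults: PUID and PGID should match the user that owns your media files (check with `id`), and TZ should be your own time zone:

```shell
# /etc/homelab/env.default
PUID=1000
PGID=1000
TZ=Europe/Copenhagen
```

Because several linuxserver images read the same three variables, a single shared env_file keeps them consistent across services.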

Docker-compose as a Service

On many distributions, the docker service is lazy, which means that it won’t start until something makes a request to it, such as docker ps. To make sure that your containers start even if you reboot the computer, you can use systemd as described in this stackoverflow answer. This is especially useful if you just want to keep the machine running in your closet.
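Such a unit could look roughly like this - a sketch, where the unit name, the working directory and the location of the docker-compose binary are assumptions you should adjust to your own system:

```shell
# /etc/systemd/system/homelab.service
[Unit]
Description=Homelab docker-compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/etc/homelab
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now homelab.service`; the Requires/After lines make systemd start docker first, so the lazy daemon is no longer a problem.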

Closing the Lid

Similarly, if you are running your server on a laptop, and want to be able to close it whenever you’re not actively using it, you can follow the instructions in this stackoverflow answer.
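The usual approach is to tell systemd-logind to ignore the lid switch. A sketch of the relevant settings, which live under the [Login] section of /etc/systemd/logind.conf:

```shell
# /etc/systemd/logind.conf
[Login]
HandleLidSwitch=ignore
HandleLidSwitchDocked=ignore
```

After editing the file, restart the daemon with `sudo systemctl restart systemd-logind` for the change to take effect.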