Self-hosting made easy with Wireguard and Docker
Self-hosting tends to be the kind of topic that many people find extremely frustrating to deal with, whether due to the misconception that hosting stuff requires a good internet connection with port forwarding and powerful hardware, or the belief that configuring services is difficult. Well, in this article I want to prove those misconceptions wrong by showcasing how a 10-year-old Thinkpad behind a NAT firewall can successfully host a bunch of online services, including a personal website, Nextcloud, Forgejo and an email server.
Let’s start with the elephant in the room: how can you self-host a bunch of web services without exposing ports from your home network? The answer is, you can’t, unless you have a separate machine somewhere that can do port forwarding to the Internet for you. In that case, you simply “tunnel” the network traffic from your home server to that separate, port-forwarded machine, effectively making resources from your home server accessible to the outside world.
This technique is called reverse port forwarding and it makes use of the fact that NAT firewalls usually don’t block outgoing connections: as long as the incoming packets are part of the same outgoing connection, they will get through the NAT firewall.
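As a quick illustration of the concept (not the setup used in the rest of this article), plain SSH can already do reverse port forwarding. Assuming a hypothetical VPS reachable at vps.example.org, the following would expose a web server listening on your home machine’s port 80 through port 8080 on the VPS:

```shell
# Connections to vps.example.org:8080 get tunneled back through
# the SSH connection to localhost:80 on the home machine
ssh -N -R 0.0.0.0:8080:localhost:80 user@vps.example.org
```

For the remote end to bind to all interfaces instead of just loopback, the VPS’s sshd_config needs GatewayPorts yes. This is fine for a quick test, but for a permanent setup a VPN tunnel is far more robust.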
Home server setup
In most use-cases, you don’t even need powerful hardware to be able to self-host. For instance, I have been successfully using a Thinkpad T420 laptop from 2011 to host all my web services just fine. My specific setup even uses WiFi to connect to my home network, which makes this whole thing even more ghetto :)
Now, let’s assume that you have the hardware. The next important part is installing an operating system and required software.
If you want to be extra fancy, you could use something like Proxmox, but for simpler setups, bare Debian
works just fine. For the hostname, I suggest setting it to a subdomain that you own, for instance server.example.org, but
it should work with anything really. When the Debian installer asks which software to install, select SSH server and
standard system utilities to keep things simple, as shown in the picture below.
Once the installation has been successfully completed, you can boot into your fresh Debian system ^^
Must-have configuration for laptops
Closing the lid behaviour
When using a laptop as a server, you would most likely want to keep its lid closed. In order to do that,
open the /etc/systemd/logind.conf file in your favourite text editor and set the following configuration values.
[Login]
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
HandleLidSwitchDocked=ignore
After saving the configuration, run systemctl restart systemd-logind. This ensures that the
laptop ignores lid closure and doesn’t, for instance, suspend your system.
Capped battery charge
In a server setup, you would most likely keep your laptop connected to the AC power 24/7. When the laptop still has a battery connected (e.g. to have a power backup in cases of power outage), you might want to cap battery charge to something like 80% in order to prolong battery life.
One way of achieving this is the tlp utility. This utility is mainly
meant for optimizing Linux laptop battery life, but it also features a way to cap battery charge in order to prolong
the overall battery health. Install the utility with sudo apt install tlp and open /etc/tlp.conf.
To cap battery charge, add the following configuration lines:
START_CHARGE_THRESH_BAT0=75
STOP_CHARGE_THRESH_BAT0=80
Depending on the specific laptop, the primary battery might be considered as BAT1 instead, in which case
use START_CHARGE_THRESH_BAT1 and STOP_CHARGE_THRESH_BAT1 values accordingly.
When the configuration is done, restart tlp with sudo systemctl restart tlp and you should be good to go.
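To verify that the thresholds actually took effect (not every battery controller supports them), tlp ships a status tool:

```shell
# Show battery information, including the active charge thresholds
sudo tlp-stat -b
```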
Wifi connection
Although the recommended way of connecting to the internet for a server setup is via Ethernet, in some cases you may still be stuck using Wifi (especially with some laptop configurations or simply when Ethernet is not an option).
The easiest and most user friendly approach would be to use NetworkManager, which also provides
a nice nmtui utility for managing Wifi connections.
Install it with sudo apt install network-manager, then open nmtui and activate your Wifi connection.
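If you prefer to stay entirely on the command line, the same can be done non-interactively with nmcli (the SSID and password here are placeholders):

```shell
# List visible networks, then connect to one
nmcli device wifi list
nmcli device wifi connect "MyHomeWifi" password "my-wifi-password"
```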
Installing necessary software
In order to host anything on the home server, you should install all the required software, including
Docker, nginx, iptables and wireguard. First, let’s install Docker. By default, Debian repositories ship
a really outdated version of Docker, which is why, for a more modern version, you
should add Docker’s APT repository to your APT sources. This process is pretty well documented
on Docker’s website, but the tl;dr is:
# Add Docker's official GPG key:
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/debian
Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF
sudo apt update
After the repository has been added successfully, you can install all the required software.
$ sudo apt install docker-ce \
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin \
wireguard \
nginx \
apache2-utils \
iptables
Network setup
Once you have successfully done the basic setup for your home server, let’s get started with the fun part — making your home server accessible from the internet, assuming that port forwarding from your home network is not an option.
We will need at least two things: a domain and a VPS that will act as a proxy/VPN server. Some domain registrars you could use are the following:
- GoDaddy
- NameCheap
- porkbun
- spaceship
- Zone (if Estonia or EU based)
- Epik (known for providing services to websites that host far-right and neo-Nazi content; they also got breached in 2021)
It shouldn’t really matter which domain registrar you pick as long as they are reputable enough and don’t have a bad history of data breaches. Do your due diligence and research the specific registrar you want to pick.
Next, you should pick a VPS provider. For a home server setup, go for a VPS that is geographically close to you in order to minimize latency. Again, there are many providers available, each with their own pros and cons, but I advise looking out for the following:
- data collection policy
- IPv6 support
- port forwarding on ports 25, 465, 587 and 993 (required for email servers)
- possibility for setting custom PTR records (required for email servers)
- where the company operates and which laws apply to them
- whether their servers are close enough to you
- pricing ofc
Personally, I can recommend considering the following options:
- Zone
- Based in Estonia
- Server locations in Estonia, the Netherlands and Finland
- VPS plans from 6.5€ + VAT a month
- Ports 25, 465, 587 and 993 are open by default
- Setting reverse DNS can be done by contacting support
- no IPv6 support
- Hostinger
- Based in Lithuania
- Server locations in Lithuania, Germany, France, the UK, the US, Brazil, Indonesia, Malaysia and India.
- VPS plans from 4.99$ a month
- Ports 25, 465, 587 and 993 are open by default
- Setting reverse DNS can be done in their management panel
- IPv6 support
Setting up the VPS
Once you have picked a suitable VPS provider, it is time to create a new VPS instance. For instance, when using Zone, the registration page looks something like this:
Pick the cheapest plan, because for a proxy server that’s all you really need, since this VPS acts as a gateway to
the more powerful home server anyway. For the operating system, pick the latest Debian release, and for the SSH public key section,
paste your SSH public key. If you don’t have one, you can generate an SSH keypair with ssh-keygen -t ed25519.
Once you have created your VPS instance, look at the administration panel for the IP address and default user, as well as credentials if required. In my case, I could SSH in directly with
$ ssh debian@uvn-78-209.tll07.zonevs.eu
Basic configuration and software installation
For security reasons, you should make sure that SSH password authentication is disabled on your VPS. This can be done by editing the
/etc/ssh/sshd_config file and modifying the line containing the PasswordAuthentication option.
PasswordAuthentication no
Some VPS providers (such as Hostinger) use SSH password authentication for web console access and thus have their own separate
configuration file in the /etc/ssh/sshd_config.d directory, which will override your configuration. In that case, you might want
to either delete or modify the file there. After changing the configuration, apply it with sudo systemctl restart ssh.
As a next step, you should install all the required software for further steps.
$ sudo apt install wireguard iptables nginx certbot certbot-nginx
Wireguard configuration
Once wireguard has been installed, you will need to generate a keypair for the server as well as create a configuration file. In this setup, the VPS acts as a VPN server and the home server and other clients as peers. By allowing IP forwarding on the server side, clients can connect to the home server.
First, on the VPS side, use the following commands to create a Wireguard keypair.
sudo mkdir /etc/wireguard/keys
sudo bash -c "wg genkey > /etc/wireguard/keys/server_priv.key"
sudo bash -c "wg pubkey < /etc/wireguard/keys/server_priv.key > /etc/wireguard/keys/server_pub.key"
sudo chmod 600 /etc/wireguard/keys/server_priv.key
Now that a keypair has been generated, create a new VPN configuration at /etc/wireguard/infra.conf. You
can name this configuration file whatever you want, but the file name determines the name of the Wireguard
network interface (here: infra).
[Interface]
PrivateKey = <server-private-key-here>
Address = 10.200.200.1/24
ListenPort = 51820
# Firewall configuration
PostUp = iptables -t nat -A POSTROUTING -s 10.200.200.0/24 -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -s 10.200.200.0/24 -o eth0 -j MASQUERADE
The PostUp and PostDown lines configure the firewall to masquerade VPN traffic leaving through the eth0 interface. This is needed for accessing the Internet when full VPN tunneling is used (e.g. by the home server). If your VPS’s primary network interface is not called eth0, adjust the rules accordingly.
Additionally, to make IP forwarding work on the VPS, you will need to add a kernel configuration value.
$ sudo sysctl -w net.ipv4.ip_forward=1
To make this change permanent, create a new file /etc/sysctl.d/99-ipv4-forward.conf and add following value there:
net.ipv4.ip_forward=1
After IPv4 forwarding has been enabled, enabling the VPN server can be done with sudo systemctl enable --now wg-quick@<interface-name>.
Connecting peers
The first and most important peer to add is the home server itself. Similarly to the previous section, generate a Wireguard key pair for the home server and create a configuration file such as /etc/wireguard/infra.conf, this time configured as:
[Interface]
Address = 10.200.200.2/32
PrivateKey = <home-server-private-key>
DNS = 1.1.1.1,9.9.9.9
[Peer]
PublicKey = <vps-public-key>
Endpoint = <vps-ip>:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
The AllowedIPs property configured here means that the peer tunnels all network traffic through Wireguard. If you want split tunneling instead (only for intranet IP addresses), set this value to 10.200.200.0/24. The PersistentKeepalive property is needed because the home server is presumably behind a NAT; without periodic keepalives, the connection would eventually be dropped.
Enable and start the VPN with sudo systemctl enable --now wg-quick@infra. However, this will not work yet: you still need to add the home server as a peer to the Wireguard server’s configuration. Going back to the VPS, edit /etc/wireguard/infra.conf and add the following lines:
# Previous configuration lines ...
[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 10.200.200.2/32
Restart the VPN with sudo systemctl restart wg-quick@infra and try pinging 10.200.200.2 from your VPS. If the packets reach their destination, then everything is good.
Next, for your client, generate a key pair and write a similar config as for the home server, except:
- set Address to 10.200.200.3/32
- don’t set a DNS value
- I advise using split tunneling, i.e. set AllowedIPs to 10.200.200.0/24
- PersistentKeepalive is not needed
Then on the VPS side, assign IP 10.200.200.3 to your client, restart Wireguard and start the VPN connection on your client device with sudo wg-quick up infra. If you can ping 10.200.200.1 and 10.200.200.2 then everything works as needed.
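To see what the tunnel is doing on either end, you can also inspect the Wireguard state directly; a recent handshake and growing transfer counters indicate a working connection:

```shell
# Show peers, latest handshakes and transfer statistics for the infra interface
sudo wg show infra
```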
Domain configuration
Assuming that you have registered a domain, you should add some DNS records. First, let’s make the domain (e.g. example.org) resolve to the gateway VPS’s IP address. For this, add an A record for your domain, e.g:
example.org. A <vps-ipv4-address>
If your VPS supports IPv6, then also add an AAAA record:
example.org. AAAA <vps-ipv6-address>
Once this has been done, I advise you to allocate subdomains for the intranet (i.e. domain names for services that are not available on the Internet but require a VPN connection to access). One way of doing this is to make a wildcard A record which points to your home server’s Wireguard IP address. Something like this:
*.infra.example.org. A 10.200.200.2
This will allow you to comfortably deploy intranet services without having to modify DNS records and also use the same TLS certificate for all intranet services (see TLS certificate using certbot for more details).
Once the DNS records have been updated, you can, for instance, ssh to your home server over the VPN using ssh user@home.infra.example.org. Similarly, sshing to your VPS can be done with ssh user@example.org.
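To make this even more comfortable, you could add host aliases to your client’s ~/.ssh/config (the host names and users here are assumptions based on the examples above):

```text
Host vps
    HostName example.org
    User debian

Host home
    HostName home.infra.example.org
    User myuser
```

After that, ssh home is all you need to type.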
Firewall
A good idea overall is to set up a proper firewall for your server, especially for the VPS gateway, since you probably don’t want to accidentally expose ports to potentially vulnerable services. On Linux, there are many firewalling solutions available, such as ufw, developed by Canonical, and firewalld, developed by Red Hat. However, despite promising simplicity, these solutions might actually make things more difficult in the long run. A truly simple and straightforward approach to configuring a firewall on Linux is modifying iptables rules directly.
I assume the reader has some familiarity with iptables, so I’m not gonna explain each rule or how iptables works in too much detail. A guide explaining iptables in more detail is planned for the future.
For configuring iptables rules, create a new shell script called iptables.sh. A good starter ruleset would be something like this:
#!/bin/sh
# Ensure that the script is run as root
if [ $(id -u) -ne 0 ]; then
echo "Firewall script must be run as root!"
exit 1
fi
# Flush all previously created rules and start from all over again
iptables -F
iptables -t nat -F
## Don't forget IPv6, if supported :)
ip6tables -F
ip6tables -t nat -F
# Rules applicable for both IPv4 and IPv6 traffic
general_rules() {
IPTABLES_CMD=$1
LOG_PREFIX=$2
# Set the default FORWARD policy to DROP
$IPTABLES_CMD -P FORWARD DROP
# For incoming packets, drop everything except:
# 1. Traffic to lo interface or localhost
# 2. Incoming ICMP packets
# 3. Packets with ESTABLISHED or RELATED states (i.e. part of some outgoing connection)
# 4. SSH connections (tcp port 22)
# 5. HTTP traffic (tcp port 80 for http and 443 for https)
# 6. VPN traffic (udp port 51820)
$IPTABLES_CMD -A INPUT -i lo -j ACCEPT
$IPTABLES_CMD -A INPUT -p icmp -j ACCEPT
$IPTABLES_CMD -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPTABLES_CMD -A INPUT -p tcp --dport 22 -j ACCEPT
$IPTABLES_CMD -A INPUT -p tcp --dport 80 -j ACCEPT
$IPTABLES_CMD -A INPUT -p tcp --dport 443 -j ACCEPT
$IPTABLES_CMD -A INPUT -p udp --dport 51820 -j ACCEPT
$IPTABLES_CMD -A INPUT -j LOG --log-prefix "($LOG_PREFIX) input-dropped: "
$IPTABLES_CMD -A INPUT -j DROP
# For outgoing packets, allow everything
$IPTABLES_CMD -A OUTPUT -j ACCEPT
}
general_rules iptables ipv4
general_rules ip6tables ipv6
# FORWARD rules for Wireguard (IPv4 only, since the VPN network is IPv4)
# No peers other than 10.200.200.1 and 10.200.200.2 should be reachable by other peers
iptables -A FORWARD -i infra -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i infra -d 10.200.200.1/32,10.200.200.2/32 -j ACCEPT
iptables -A FORWARD -i infra -d 10.200.200.0/24 -j LOG --log-prefix "(infra/ipv4) forward-dropped: "
iptables -A FORWARD -i infra -d 10.200.200.0/24 -j DROP
iptables -A FORWARD -i infra -j ACCEPT
iptables -A FORWARD -o infra -j ACCEPT
To temporarily test out the firewall configuration, use something like
$ sudo ./iptables.sh && sleep 120 && sudo iptables -F && sudo ip6tables -F
This will apply your firewall rules for 120 seconds before flushing them again. Please use this when testing firewall rules, otherwise you risk locking yourself out.
For permanently persisting the firewall changes, install the iptables-persistent package (sudo apt install iptables-persistent), which loads rules from /etc/iptables at boot, then apply and save the rules:
sudo ./iptables.sh
sudo bash -c "iptables-save > /etc/iptables/rules.v4"
sudo bash -c "ip6tables-save > /etc/iptables/rules.v6"
TLS certificate using certbot
Let’s assume you want to host your really cool website and for that you have created an nginx configuration for it:
server {
listen 80;
listen [::]:80;
root /var/www/myawesomewebsite;
server_name example.org;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
But sadly, when accessing example.org, your web browser gives a big scary warning. This happens because your website doesn’t have TLS, meaning that all HTTP requests and responses are sent over the internet completely unencrypted. Luckily, Letsencrypt provides free TLS certificates, and with the certbot utility, deploying a TLS certificate is quite trivial.
$ sudo certbot --nginx
This command will then ask you to select the domain for which you would like a certificate. Select your domain and voila, it is done. Just restart nginx with sudo systemctl restart nginx and now the big scary browser warning is gone and website traffic is encrypted.
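Letsencrypt certificates are only valid for 90 days, but the certbot package on Debian installs a systemd timer that renews them automatically. You can verify that automatic renewal will work with a dry run:

```shell
# Simulate certificate renewal against Letsencrypt's staging environment
sudo certbot renew --dry-run
```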
Back to the home server
Now, let’s come back to the home server and configure some things there.
Firewall
Similarly to the VPS, it is a good idea to firewall your home server as well, even though it is not directly exposed to the internet. You can use the same blueprint as for the VPS, except this time more specific and strict.
The overall idea when configuring a firewall and exposing ports is “as much as necessary but as little as possible”.
When applying this idea, you should first consider a couple of things. The first thing to consider is Docker, which in this tutorial is going to be used for service deployment. Docker creates its own iptables chains and rules, which can even bypass rules in the INPUT and OUTPUT chains of the filter table. Another thing is that, for a functional Docker compose environment, containers might need network access to each other, so you must ensure that your rules don’t accidentally block inter-container traffic. The simplest way to ensure this is to assign a specific IP range for Docker containers, which can then be used in custom firewall rules.
Open the /etc/docker/daemon.json file and add the following configuration:
{
"default-address-pools": [
{"base": "172.17.0.0/16", "size": 24}
],
"ipv6": false
}
This configuration ensures that Docker assigns container IP addresses from this range. The size property defines the netmask of the subnets that are carved out of the pool for each bridge network. If this range is already in use (e.g. by your local home network), then use something else. Be creative, there are plenty of IPv4 ranges reserved for private use :) (see Wikipedia). The "ipv6": false property ensures that Docker doesn’t use IPv6 for its networking. This will get explained shortly.
Restart Docker daemon to apply the configuration with sudo systemctl restart docker and you’re good to go.
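You can verify that new networks are really allocated from the configured pool, for instance by creating a throwaway network and printing its subnet:

```shell
# Create a test network, print the subnet Docker allocated for it, then remove it
docker network create testnet
docker network inspect testnet -f '{{ (index .IPAM.Config 0).Subnet }}'
docker network rm testnet
```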
Another thing to consider is that the VPN is currently only configured for IPv4 traffic. Assuming that your home network is not IPv6-only, you can comfortably disable IPv6 altogether, which will make configuring the firewall a bit easier.
$ sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
To make this change permanent, add the following to /etc/sysctl.d/99-disable-ipv6.conf:
net.ipv6.conf.all.disable_ipv6=1
Once Docker has been configured to have a specific IP address pool and IPv6 has been successfully disabled, the firewall script might look something like this:
#!/bin/sh
# Check if the script is run as root
if [ $(id -u) -ne 0 ]; then
echo "Firewall script must be run as root"
exit 1
fi
# Briefly stop Docker and Wireguard services to ensure that the rule order is correct
systemctl stop docker
systemctl stop wg-quick@infra
# Flush all previous iptables rules
iptables -F
# For incoming packets, block everything with following exceptions:
# 1. Allow incoming packets to lo interface or localhost
# 2. Allow ICMP to everywhere
# 3. Allow packets with ESTABLISHED or RELATED state (part of outgoing connection)
# 4. Allow incoming traffic to Docker containers
# 5. Allow incoming SSH traffic from VPN and from your home network (port 22)
# 6. Allow incoming HTTP traffic from VPN interface (tcp ports 80 and 443)
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow incoming traffic to Docker containers
# if a different IP range is used, then replace the subnet in this rule
iptables -A INPUT -d 172.17.0.0/16 -j ACCEPT
# Assuming wlan0 is used as the primary network interface and
# that your home network IP range is 192.168.88.0/24
iptables -A INPUT -i wlan0 -s 192.168.88.0/24 -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -i infra -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -i infra -p tcp -m multiport --dports 80,443 -j ACCEPT
iptables -A INPUT -j LOG --log-prefix "input-dropped: "
iptables -A INPUT -j DROP
# For outgoing packets, block everything with following exceptions:
# 1. Allow outgoing packets from lo interface or localhost
# 2. Allow outgoing packets with state ESTABLISHED or RELATED (part of some incoming connection)
# 3. Allow outgoing traffic to Docker containers
# 4. Allow outgoing connection from VPN network interface
# 5. Allow outgoing connection to VPN server (udp example.org:51820)
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -d 172.17.0.0/16 -j ACCEPT
iptables -A OUTPUT -o infra -j ACCEPT
iptables -A OUTPUT -d example.org -p udp --dport 51820 -j ACCEPT
iptables -A OUTPUT -j LOG --log-prefix "output-dropped: "
iptables -A OUTPUT -j DROP
# Custom rules for the DOCKER-USER chain
# In this chain, block all incoming packets that are not part
# of some outgoing connection and didn't come from the VPN network interface
# (create the chain first in case Docker has never run on this machine yet)
iptables -N DOCKER-USER 2>/dev/null
iptables -A DOCKER-USER -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A DOCKER-USER -i infra -j ACCEPT
iptables -A DOCKER-USER -j LOG --log-prefix "(docker) input-dropped: "
iptables -A DOCKER-USER -j DROP
# Start the services again now that the base rules are in place
systemctl start wg-quick@infra
systemctl start docker
A simple web server reverse proxy
Before explaining Docker services, I want to show how you can use the gateway VPS to make home server services accessible to the Internet. Let’s assume that you want to host some kind of web service on your home server and make it accessible to the world. On the home server, you have an nginx configuration with something like this:
server {
listen 80;
server_name service1.infra.example.org;
# Other configuration stuff
}
When configuring nginx on Debian-based systems, follow the convention of writing your website configurations to /etc/nginx/sites-available and then, to enable a site, creating a symlink to it in /etc/nginx/sites-enabled.
After enabling the VPN on your client device, you can successfully access this website at http://service1.infra.example.org, but it is still not accessible from the Internet. To make this work, you can set up nginx on your gateway VPS to act as a reverse proxy for the home server.
Before you modify the nginx configuration on the gateway, first change the server_name property on your home server to server_name service1.infra.example.org service1.example.org (assuming that the service’s public domain is service1.example.org). Reload nginx with sudo systemctl restart nginx and that’s it on the home server side.
On the VPS side, create a new configuration to /etc/nginx/sites-available/service1.conf and add following contents:
upstream service1 {
server service1.infra.example.org:80 max_fails=0;
}
server {
listen 80;
listen [::]:80;
server_name service1.example.org;
access_log /var/log/nginx/service1-example-org_access.log;
error_log /var/log/nginx/service1-example-org_error.log;
location / {
proxy_pass http://service1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Create a symlink of that file to /etc/nginx/sites-enabled, verify with nginx -t, reload the service and deploy TLS with certbot. After that has been done, you can access your service from the Internet.
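Once everything is in place, you can sanity-check the whole chain from any machine on the Internet; the -I flag fetches only the response headers:

```shell
# Should print an HTTP status line served by the home server via the VPS proxy
curl -I https://service1.example.org
```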
TLS certificates for the intranet
By now you should have successfully created a VPN network, which allows you to communicate with your home server from other networks. You can successfully access services from VPN under *.infra.example.org domain, but one issue still remains. Some services, such as Nextcloud, don’t work properly without TLS, which makes it somewhat difficult for you to create private VPN-only instances of it.
You could, of course, self-sign a certificate for this purpose, but self-signed certificates have their own issues, such as the client having to manually install it for it to be recognized by their web browser or operating system.
An easier approach is to request a wildcard certificate from Letsencrypt for your intranet domains, such as *.infra.example.org. This, of course, is a bit more complicated than just running certbot --nginx, because you can’t solve the web-server-based ACME challenge when the IP address is private. Instead, you can prove ownership of the domain with a DNS-based ACME challenge, and you can use certbot for that as well!
$ sudo certbot -d '*.infra.example.org' --manual --preferred-challenges dns certonly
When running this command, it will give you a token that you have to put into your domain’s TXT record under _acme-challenge subdomain:
_acme-challenge.infra.example.org TXT <token>
Log in to your domain registrar and set this record. Keep in mind that deploying DNS records can take some time, so be patient before hitting Continue; otherwise you will have to do this process all over again. If the challenge verification is successful, you will have a TLS certificate for your intranet domains signed by Letsencrypt 🥳.
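Since DNS propagation is the usual stumbling block here, you can check from another terminal whether the TXT record is actually visible before confirming the challenge:

```shell
# The output should contain the token certbot gave you
dig +short TXT _acme-challenge.infra.example.org
```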
In order to use it, instead of having
listen 80;
in your nginx configuration, use
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/infra.example.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/infra.example.org/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
Optionally, if you want to redirect insecure HTTP traffic to HTTPS, you can add an additional server block to your service’s nginx configuration:
server {
if ($host = service1.infra.example.org) {
return 301 https://$host$request_uri;
}
server_name service1.infra.example.org;
listen 80;
return 404;
}
When using the gateway’s nginx as a reverse proxy to the Internet, change the upstream port from 80 to 443 and use the https scheme in proxy_pass instead of http. For example:
upstream service1 {
server service1.infra.example.org:443 max_fails=0;
}
server {
server_name service1.example.org;
access_log /var/log/nginx/service1-example-org_access.log;
error_log /var/log/nginx/service1-example-org_error.log;
location / {
proxy_pass https://service1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Certbot generated configuration
}
Dockerised services
After the VPN has been successfully set up, you can finally get to the fun part — deploying Docker services. This section will go over some dockerized services you could host on your home server. However, before proceeding, let’s take care of a couple of things.
First, if you are not using the root account on your home server to manage Docker, add your administrative user to the docker group. This allows you to use Docker without having to run it as the root user.
$ sudo usermod -aG docker <myuser>
Second, I advise you to create a separate directory reserved for Docker stuff. This directory will contain all Docker volumes, .env files and, most importantly, your docker-compose.yml file.
$ mkdir ~/docker
Inside that directory, create a new docker-compose.yml file with following contents for getting started:
services:
  # This is where you put all your awesome services that run in Docker

networks:
  internal:
    name: internal
    internal: true
  external:
    name: external
    internal: false
The internal and external networks describe the two kinds of networks containers can use: the internal network is meant for containers that should be isolated from the outside world (such as database containers, LDAP etc.), while containers on the external network have access to the Internet.
Docker registry
For deploying custom Docker images, I suggest setting up a Docker registry for easier image management and to also avoid building images on the server.
Add following to your docker-compose.yml file:
services:
  # Other services that you might have ...
  registry:
    image: docker.io/registry:3.0.0
    restart: always
    container_name: registry
    ports:
      - 127.0.0.1:5000:5000
    volumes:
      - ./persistent/registry:/var/lib/registry
In this configuration, the registry’s port 5000 is bound to localhost. The reason is that the registry, in its current configuration, has neither authentication nor TLS. This can be fixed by setting up nginx to act as a reverse proxy in front of the registry.
Create a new configuration /etc/nginx/sites-available/registry-infra-example-org.conf with following contents:
upstream docker-registry {
server 127.0.0.1:5000;
}
# In case the upstream doesn't provide Docker-Distribution-Api-Version,
# map it as registry/2.0
map $upstream_http_docker_distribution_api_version $docker_distribution_api_version {
'' 'registry/2.0';
}
server {
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/infra.example.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/infra.example.org/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
server_name registry.infra.example.org;
access_log /var/log/nginx/registry-infra-example-org_access.log;
error_log /var/log/nginx/registry-infra-example-org_error.log;
location /v2/ {
# I think limiting image uploading to 4GB is pretty sensible
# it can be modified anyways
client_max_body_size 4000m;
auth_basic "Registry realm";
auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
add_header 'Docker-Distribution-Api-Version' $docker_distribution_api_version always;
proxy_pass http://docker-registry;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 900;
}
location / {
return 404;
}
}
# TLS redirect
server {
if ($host = registry.infra.example.org) {
return 301 https://$host$request_uri;
}
server_name registry.infra.example.org;
listen 80;
return 404;
}
This nginx configuration provides your Docker registry with TLS and HTTP basic authentication. Before you can use that authentication, however, you will need to create the /etc/nginx/conf.d/nginx.htpasswd file describing the users.
You can create this file with htpasswd command.
$ sudo htpasswd -c /etc/nginx/conf.d/nginx.htpasswd docker
This will prompt you for a password for the new docker user, which will be used for the registry's authentication. After this is done, link the nginx configuration to /etc/nginx/sites-enabled and restart the nginx service.
You can now log in to your Docker registry from your client:
$ docker login registry.infra.example.org
PostgreSQL
A lot of services, especially web-related ones, will probably need some kind of relational database to function properly. PostgreSQL is a great choice here because it is widely supported, and PL/pgSQL lets you build some pretty insane stuff (future article idea, maybe?).
In your docker-compose.yml file, add a new service:
services:
# Other stuff ...
postgres:
image: docker.io/postgres:17.0-alpine3.20
restart: always
container_name: postgres
environment:
POSTGRES_PASSWORD: ${PG_PASSWORD}
networks:
- internal
# Only uncomment if you want to, for instance, manage database remotely
# ports:
# - 127.0.0.1:5432:5432
volumes:
- ./persistent/postgres:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
For this container, you will need to manage credentials. One way of doing it with docker-compose is by creating a separate .env file and putting credential values there.
# PostgreSQL environment variables
PG_PASSWORD=<postgres-password>
# For each service create a separate username, password and database name combo
PG_SERVICE1_USERNAME=u_service1
PG_SERVICE1_PASSWORD=<password-for-u_service1>
PG_SERVICE1_DB=db_service1
...
Then, create a new template file for init.sql, which should create users and databases for all services. Call this file something like init.sql.tmpl:
-- Service1 user and database
CREATE USER "${PG_SERVICE1_USERNAME}" WITH ENCRYPTED PASSWORD '${PG_SERVICE1_PASSWORD}';
CREATE DATABASE "${PG_SERVICE1_DB}" WITH OWNER "${PG_SERVICE1_USERNAME}";
...
For creating the actual init.sql file, I suggest writing a helper script that utilizes envsubst.
#!/bin/bash
source .env
export PG_SERVICE1_USERNAME=${PG_SERVICE1_USERNAME}
export PG_SERVICE1_PASSWORD=${PG_SERVICE1_PASSWORD}
export PG_SERVICE1_DB=${PG_SERVICE1_DB}
...
# Other services
...
envsubst < init.sql.tmpl > init.sql
This script will generate an appropriate init.sql for Postgres to use. Note, however, that init.sql is only executed during Postgres initialization. Once PostgreSQL has been initialized (i.e. its data directory already exists), you have to create new users and databases while the container is running.
This can be done by executing the psql command inside the container.
$ source .env
$ docker compose exec postgres psql -U postgres -c "CREATE USER \"${PG_SERVICE_USERNAME}\" WITH ENCRYPTED PASSWORD '${PG_SERVICE_PASSWORD}';"
$ docker compose exec postgres psql -U postgres -c "CREATE DATABASE \"${PG_SERVICE_DB}\" WITH OWNER \"${PG_SERVICE_USERNAME}\";"
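Since this user-plus-database pair has to be created for every new service, the SQL is easy to generate with a small helper. A minimal sketch (the `pg_newservice` name is my own invention; pipe its output into the psql invocation shown above):

```shell
#!/bin/sh
# pg_newservice: print the SQL for a new service user and database.
# Usage: pg_newservice <username> <password> <dbname>
pg_newservice() {
    printf 'CREATE USER "%s" WITH ENCRYPTED PASSWORD '\''%s'\'';\n' "$1" "$2"
    printf 'CREATE DATABASE "%s" WITH OWNER "%s";\n' "$3" "$1"
}

# Prints the two CREATE statements for a hypothetical service2
pg_newservice u_service2 some-password db_service2
```

The output can then be piped straight into the container, e.g. `pg_newservice ... | docker compose exec -T postgres psql -U postgres`.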
OpenLDAP
Having an LDAP server can be useful when setting up multiple services that use authentication in some form or another: a single authentication source makes credential management easier and lets user profile changes take effect everywhere at once. For single sign-on (SSO) on the web, you could use something like OpenID (or OAuth2), but the downside of such a solution is that it is pretty much limited to the web and not everything supports it. The alternative approach is LDAP authentication, which is what this section covers in more detail.
I have created a custom Docker image for OpenLDAP 2.6.12, which you can find on my Forgejo. Follow the build instructions and push the image to your Docker registry. After that is done, add the following to your docker-compose.yml file:
services:
# Other stuff ...
openldap:
image: registry.example.org/openldap:2.6.12 # or whatever is your Docker registry domain/image name and tag
restart: always
container_name: openldap
environment:
LDAP_DOMAIN: ${LDAP_DOMAIN}
ADMIN_COMMON_NAME: ${ADMIN_COMMON_NAME}
ADMIN_PASSWORD: ${ADMIN_PASSWORD}
ORGANIZATION_NAME: ${ORGANIZATION_NAME}
volumes:
- ./persistent/openldap/config:/usr/local/etc/slapd.d
- ./persistent/openldap/data:/usr/local/var/openldap-data
ports:
- "127.0.0.1:389:389"
networks:
- internal
In this configuration, the port mapping is necessary because you will most likely want to do some LDAP administration from your home server using commands such as ldapadd, ldapsearch, ldappasswd etc. The environment variables should be written to a separate .env file; they are defined as follows:
- LDAP_DOMAIN: domain for your organization's users. When set to e.g. example.org, the distinguished name (DN) entry becomes dc=example,dc=org
- ADMIN_COMMON_NAME: common name for the administrator (or root) account. When empty, it defaults to admin (sample full DN: cn=admin,dc=example,dc=org)
- ADMIN_PASSWORD: password for the administrator account
- ORGANIZATION_NAME: organization name for your DN
Adding or modifying users
On your home server's host, make sure that the ldap-utils package is installed (sudo apt install ldap-utils). This package provides all the tools necessary for managing the LDAP server.
By default, the OpenLDAP container creates two organizational units: ou=people,dc=example,dc=org and ou=groups,dc=example,dc=org. For simple user management, you can ignore groups and focus on the people unit instead. Adding new users is done by creating a new ldif file and then using ldapadd to add the entry.
dn: uid=<username>,ou=people,dc=example,dc=org
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
objectClass: PostfixBookMailAccount
cn: <first-name>
sn: <last-name>
loginShell: /bin/sh
uidNumber: <posix-uid>
gidNumber: <posix-gid>
homeDirectory: /home/<username>
uid: <username>
mailAlias: <email-alias>
mail: <primary-email>
This specific configuration defines a new LDAP entry for the user along with some additional metadata. For instance, when using LDAP for Linux authentication, you could utilize the fields provided by the posixAccount and shadowAccount object classes. Similarly, the mail and mailAlias fields come in useful when setting up an email server and assigning email addresses to users.
If this metadata is not needed, omit the related fields; do keep the mail field, though, because many services rely on it.
Once the user’s ldif file has been created, add it to your LDAP directory using ldapadd utility:
$ ldapadd -x -W -D cn=admin,dc=example,dc=org -f <username>.ldif
Then, assign a password for the newly created user:
$ ldappasswd -S -W -D cn=admin,dc=example,dc=org -x uid=<username>,ou=people,dc=example,dc=org
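If you add users regularly, the ldif template above is easy to generate from a script instead of editing it by hand. A rough sketch (the `make_user_ldif` helper is hypothetical; it emits only the posix fields and drops the mail alias metadata, so adjust the object classes and attributes to your needs):

```shell
#!/bin/sh
# make_user_ldif: emit a minimal inetOrgPerson/posixAccount entry.
# Usage: make_user_ldif <username> <first-name> <last-name> <uid> <gid> <email>
make_user_ldif() {
    cat <<EOF
dn: uid=$1,ou=people,dc=example,dc=org
objectClass: inetOrgPerson
objectClass: posixAccount
cn: $2
sn: $3
loginShell: /bin/sh
uidNumber: $4
gidNumber: $5
homeDirectory: /home/$1
uid: $1
mail: $6
EOF
}

# Write the entry for a hypothetical user jdoe to a file
make_user_ldif jdoe John Doe 10001 10001 jdoe@example.org > jdoe.ldif
```

The resulting jdoe.ldif can then be fed to ldapadd exactly as shown above.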
To modify the user's LDAP entry, you can use the ldapmodify utility.
$ ldapmodify -H ldap://127.0.0.1:389 -D cn=admin,dc=example,dc=org -W
For instance, to change a user's email address, enter the following:
dn: uid=<username>,ou=people,dc=example,dc=org
changetype: modify
replace: mail
mail: <new-email-address>
Web interface for changing your LDAP password
In a multi-user environment, not everyone will have access to your home server to change their account password there. For this reason, I have developed a really stupidly simple web utility for changing your LDAP user's password. You can find it on my Forgejo.
Follow the README instructions for building the image, push the image to your registry and add a new service to your docker-compose.yml file.
services:
# Other stuff ...
ldap-passwd:
image: registry.example.org/ldap-passwd:latest
restart: always
container_name: ldap-passwd
environment:
LDAP_HOST: openldap
LDAP_PORT: 389
USER_BIND_TMPL: uid=%s,ou=people,dc=example,dc=org
ports:
- "127.0.0.1:8080:8000"
networks:
- internal
depends_on:
- openldap
Next, set up nginx as a reverse proxy by creating a new configuration for it, then restart the container(s) and the nginx daemon.
Nextcloud
One useful web service you could host is Nextcloud, an open-source solution for all your cloud storage needs. Instead of using something like Google Drive or OneDrive, you can have your own “cloud” on hardware that you control.
Nextcloud developers provide a nice AIO container to use that you could spin up relatively easily. However, at least in my experience, the Apache based Nextcloud image has its issues. For instance, it is quite difficult to set upload limits in the AIO container (and in general I find that Apache web server sucks in many ways), which is why I’ve decided to make my own, nginx-based image instead (Forgejo repository).
Follow the build instructions, tag and push the image to your registry. Then add following configuration to your docker-compose.yml file:
services:
# Other stuff ...
nextcloud:
image: registry.example.org/nextcloud:latest
restart: always
container_name: nextcloud
volumes:
- ./persistent/nextcloud/data:/var/www/nextcloud
- ./persistent/nextcloud/logs/nginx:/var/log/nginx
- ./persistent/nextcloud/logs/php84:/var/log/php84
ports:
- "127.0.0.1:8081:80"
networks:
- internal
- external # Required for update checks
depends_on:
- openldap
- postgres
Create the nginx reverse-proxy configuration for Nextcloud.
upstream nextcloud {
server 127.0.0.1:8081;
}
server {
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/infra.example.org/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/infra.example.org/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# By default, this is the max body size limit anyways for the container
client_max_body_size 10G;
# If public instance is not required, omit `cloud.example.org`
# from the server_name directive
server_name cloud.infra.example.org cloud.example.org;
access_log /var/log/nginx/cloud-infra-example-org_access.log;
error_log /var/log/nginx/cloud-infra-example-org_error.log;
location / {
proxy_pass http://nextcloud;
proxy_redirect off;
proxy_set_header Host $host;
# For public instances, remove it, since the gateway already gives you X-Real-IP header
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# TLS redirect config (only for intranet domain)
server {
if ($host = cloud.infra.example.org) {
return 301 https://$host$request_uri;
}
server_name cloud.infra.example.org;
listen 80;
return 404;
}
If you haven't already created a PostgreSQL user and database for Nextcloud, do it now. Assuming the database credentials are stored in the .env file under the PG_NEXTCLOUD_* variables:
$ source .env
$ docker compose exec postgres psql -U postgres -c "CREATE USER \"${PG_NEXTCLOUD_USERNAME}\" WITH ENCRYPTED PASSWORD '${PG_NEXTCLOUD_PASSWORD}';"
$ docker compose exec postgres psql -U postgres -c "CREATE DATABASE \"${PG_NEXTCLOUD_DB}\" WITH OWNER \"${PG_NEXTCLOUD_USERNAME}\";"
Restart containers and nginx daemon.
The first time you access your Nextcloud instance, you will be greeted by the setup page. For this reason, it is highly advised not to make your Nextcloud instance public right away.
On the setup page, you have to create a (local) administrator account, but don't worry, you can promote an LDAP user to administrator later on. You also have to configure the database connection by specifying the PostgreSQL host, username, password and database.
Once you have successfully configured everything on the setup page and logged in as the new admin user, go to the Apps section and make sure that the LDAP user and group backend app is installed.
Next, go to Administration settings -> LDAP/AD integration and add a new server. It should look something like this:
For User DN specify the DN of your LDAP server’s admin account and for base DN, set this to ou=people,dc=example,dc=org. Next, in Users section set the LDAP filter query to (|(objectclass=inetOrgPerson)(objectClass=posixAccount)). Click Verify settings and count users should now report some users to be found.
In the Login Attributes section, check LDAP/AD Username attribute for finding users. Groups section can be ignored and thus, set the LDAP query to something like (|).
Once LDAP authentication has been successfully set up, log out and try to log in as one of the LDAP users. This should work by now; however, your LDAP user is not an admin yet. To change that, log out once again and log in as Nextcloud's admin user. Go to the Accounts section, select the new LDAP user and, under Add account to group, mark it as admin. Your LDAP user is now a Nextcloud administrator.
Making the instance public
By now, you should have your Nextcloud instance successfully set up on the intranet (i.e. on the VPN network). To make the instance public, you need to do two things: configure Nextcloud to trust the public Internet domain, and set up an nginx reverse proxy on the gateway VPS.
Open ./persistent/nextcloud/data/config/config.php file in your favourite text editor and modify the trusted_domains property:
<?php
$CONFIG = array (
# Other configuration values ...
'trusted_domains' =>
array (
0 => 'cloud.infra.example.org',
1 => 'cloud.example.org'
),
# Other configuration values ...
);
Save the configuration and restart Nextcloud container with docker compose restart nextcloud. Next, on the gateway server add a new nginx configuration.
upstream nextcloud {
server cloud.infra.example.org:443 max_fails=0;
}
server {
server_name cloud.example.org;
access_log /var/log/nginx/cloud-example-org_access.log;
error_log /var/log/nginx/cloud-example-org_error.log;
location / {
proxy_pass https://nextcloud;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
listen [::]:80;
listen 80;
}
Deploy a TLS certificate for the public instance with certbot --nginx, select the correct domain and done :). Restart the nginx daemon and your Nextcloud instance is now publicly available on the Internet.
Forgejo
Forgejo is effectively a community fork of Gitea, which makes it ideal for self-hosting git repositories on your own hardware. Unlike more basic frontends, such as cgit, Forgejo gives you the ability to manage issues, create pull requests and add collaborators. To get started, add the following configuration to your docker-compose.yml:
services:
# Other stuff ...
forgejo:
image: codeberg.org/forgejo/forgejo:13
container_name: forgejo
environment:
- USER_UID=1001
- USER_GID=1001
- FORGEJO__database__DB_TYPE=postgres
- FORGEJO__database__HOST=postgres
- FORGEJO__database__NAME=${PG_FORGEJO_DB}
- FORGEJO__database__USER=${PG_FORGEJO_USERNAME}
- FORGEJO__database__PASSWD=${PG_FORGEJO_PASSWORD}
restart: always
volumes:
- ./persistent/forgejo:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
networks:
- internal
ports:
- 127.0.0.1:8082:3000
depends_on:
- postgres
- openldap
Here, the database attributes are stored in PG_FORGEJO_* environment variables. Ensure that these values are present in .env file and, if necessary, create the PostgreSQL user along with a database for Forgejo.
$ source .env
$ docker compose exec postgres psql -U postgres -c "CREATE USER \"${PG_FORGEJO_USERNAME}\" WITH ENCRYPTED PASSWORD '${PG_FORGEJO_PASSWORD}';"
$ docker compose exec postgres psql -U postgres -c "CREATE DATABASE \"${PG_FORGEJO_DB}\" WITH OWNER \"${PG_FORGEJO_USERNAME}\";"
Next, create the home server’s nginx configuration, restart containers and the nginx daemon.
When you first access your Forgejo instance, you will be taken through the setup screen.
In the database section, you don't need to modify anything, since all the necessary values are filled in from the environment variables set in docker-compose.yml.
The general section is pre-filled with some default values. You can modify the Instance title and slogan to whatever you want, but keep the rest of the values as default. Especially make sure that Disable self-registration is checked.
In the Server and third-party service settings make sure that you disable OpenID sign-in and self-registration.
In the Administrator account settings, create a new (local) administrator account.
Once everything has been filled out, click Install Forgejo. It will take some time, but once completed, your instance is successfully set up.
To set up LDAP authentication, click on your profile icon and go to Site Administration -> Identity & access -> Authentication sources. Then click the Add authentication source button, which takes you to a new page for configuring your authentication source.
Once the form is filled out, click Add authentication source, log out and try to log in as your LDAP user. If that succeeds, authentication is working. Log out and reauthenticate as the admin user once again. You can now go to Site administration -> Identity & access -> User accounts and make your new LDAP user an administrator. Once you can log in with your LDAP account, you can remove the local admin account.
SSH passthrough
If you want to use Git over SSH, you will need to map the SSH port from the container to the host, for example like this:
ports:
- 127.0.0.1:222:22
You can't expose port 22 directly, because the home server's own OpenSSH daemon is already listening on it. Thus, to make SSH work, you need to pass the host's OpenSSH connections through to the container's OpenSSH.
To make this work, first create a new user on the host with the username git.
$ sudo useradd -m git
Then, as the new git user, generate a host key for Forgejo and append the public key to /home/git/.ssh/authorized_keys.
$ sudo su - git
$ ssh-keygen -t rsa -b 4096 -C "Gitea Host Key"
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
Next, create a wrapper script, which will redirect SSH connections to the container. For this create a new file called /usr/local/bin/gitea.
#!/bin/sh
ssh -p 222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
Make this file an executable with chmod +x /usr/local/bin/gitea.
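To see what this wrapper is doing: when a client runs e.g. `git clone git@example.org:user/repo.git`, sshd on the host sets SSH_ORIGINAL_COMMAND to the git command the client requested, and the wrapper forwards that same command into the container over the port-222 SSH connection. A local illustration (no SSH involved; the command string is made up):

```shell
# Simulate the environment sshd would provide for a clone request
SSH_ORIGINAL_COMMAND="git-upload-pack 'user/repo.git'"

# This is the remote command line the wrapper builds and runs
# inside the container ($0 expands to /usr/local/bin/gitea there)
echo "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" /usr/local/bin/gitea"
```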
In docker-compose.yml add following volume to your Forgejo config:
volumes:
# Other volumes ...
- /home/git/.ssh:/data/git/.ssh
Restart the container and SSH passthrough should work now.
Email server
Self-hosting an email server using postfix and dovecot has been known to be notoriously painful to set up and get working properly. Luckily, there exists a really nice project called docker-mailserver, which makes the process of deploying and configuring your own self-hosted mailserver much easier.
Prerequisites
To make your mailserver work, you will need to make sure that your VPS provider doesn't block ports 25, 465, 587 and 993. Outgoing connections on port 25 in particular tend to be blocked by some providers over concerns about spam being sent from their servers. You should also make sure that you can set a custom PTR (reverse DNS) record for your VPS, because otherwise some stricter email providers (in practice, big ones like Gmail and Outlook) might reject mail coming from your server.
To explain briefly why all those ports are needed, let's first look at the basic anatomy of an email delivery chain. As explained by docker-mailserver's documentation, there are three main components to consider:
- MUA (Mail User Agent): the client program capable of sending emails to a mail server while also being capable of fetching emails from a mail server and presenting them to the user. This could be, for instance, Mozilla Thunderbird, Microsoft Outlook or Roundcube.
- MTA (Mail Transfer Agent): piece of software dedicated to accepting submitted emails and forwarding them to its destination (the so-called “mail-server” from MUAs perspective). In docker-mailserver’s context, this is Postfix.
- MDA (Mail Delivery Agent): is responsible for accepting emails from an MTA and dropping them into their recipients’ mailboxes. In docker-mailserver’s context, this is Dovecot.
With these components in mind, it becomes clear why all of those ports are necessary.
- MUA submits an email through a secure channel to an MTA either to port 465 (implicit TLS) or to port 587 (STARTTLS).
- The MTA receives the email submission from an MUA and relays it to the recipient's MTA, which listens on port 25. The traffic between two MTAs may be secured with STARTTLS, but the specification does not mandate it, so there is always a possibility that your email travels unencrypted during some part of the transfer. This is something to keep in mind when sending confidential information.
- The receiving MTA listens on port 25 and receives mail from the sending MTA, filters it against its spam and virus filters and forwards the message to an MDA.
- The recipient's MUA connects to the MDA on port 993 (secured through TLS) to fetch any newly received mail and present it to the user.
DNS records
First, go to your domain registrar and set the following DNS record:
mail.example.org. A <vps-ipv4-address>
If your gateway VPS supports IPv6, then set an AAAA record as well:
mail.example.org. AAAA <vps-ipv6-address>
Next, you will also need an MX record so that your email domain resolves to the mail server's domain (the number is the record's priority; lower values are preferred):
example.org. MX 10 mail.example.org.
Next, access your gateway VPS's management panel (or contact support) and set the VPS's PTR record to mail.example.org.
DMARC, DKIM and SPF
Due to the nature of email protocols, one could very easily impersonate someone else and send emails on behalf of a domain that they do not own. In order to prevent spammers and other unauthorized parties from doing so, three email authentication methods have been developed: DMARC, DKIM and SPF.
DKIM (or DomainKeys Identified Mail) provides domain owners a way to automatically sign legitimate emails coming from their mail server. The DKIM record is used to store the domain’s public key, which can then be used by the receiving MTA to verify if the DKIM signature is valid and thus if the sending MTA is authorized to send emails for that domain.
An SPF (or Sender Policy Framework) record defines the list of all servers that are authorized to send email for that domain. Mail servers that receive an email from your domain can check it against the SPF record and, together with the DKIM verification result, decide whether the sender is authorized to send email for the given domain.
DMARC (or Domain-based Message Authentication, Reporting and Conformance) records tell the receiving email server what to do when SPF and DKIM checks fail. A DMARC policy can, for instance, instruct mail servers to quarantine, reject or still deliver emails that fail these checks. Additionally, it can contain instructions to send reports to the domain administrator about which emails are passing and failing them.
A good starting point would be to create DMARC and SPF records with following configuration:
example.org. TXT v=spf1 mx ~all
_dmarc.example.org. TXT v=DMARC1; p=none; sp=none; fo=0; adkim=r; aspf=r; pct=100; rf=afrf; ri=86400; rua=mailto:dmarc.reports@example.org; ruf=mailto:dmarc.reports@example.org
In the SPF record, ~all tells the receiving server to softfail mail that doesn't match, meaning that such emails shouldn't be rejected but tagged instead when SPF verification fails. The DMARC record in this example tells the receiving mail server to take no action when DMARC checks fail for the domain itself (p=none) and, likewise, to take no action for its subdomains (sp=none). This configuration is good enough for testing out the mail server, but for a real production environment you should use stricter rules:
example.org. TXT v=spf1 mx -all
_dmarc.example.org. TXT v=DMARC1; p=reject; sp=reject; fo=0; adkim=s; aspf=s; pct=100; rf=afrf; ri=86400; rua=mailto:dmarc.reports@example.org; ruf=mailto:dmarc.reports@example.org
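Since these records are just semicolon-separated tag=value pairs, it is easy to sanity-check a policy string before publishing it. A throwaway sketch (plain string handling, not a real validator):

```shell
# Extract a single tag (e.g. p, sp, adkim) from a DMARC TXT value
dmarc_tag() {
    # $1 = record, $2 = tag name
    echo "$1" | tr -d ' ' | tr ';' '\n' | sed -n "s/^$2=//p"
}

record='v=DMARC1; p=reject; sp=reject; adkim=s; aspf=s'
dmarc_tag "$record" p      # prints "reject"
dmarc_tag "$record" adkim  # prints "s"
```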
For additional information about DMARC and SPF configuration values, check out these resources:
- DMARC (RFC7489)
- SPF (RFC7208)
- CloudFlare: What is a DNS DMARC record?
- CloudFlare: What is a DNS SPF record?
Setting up a DKIM record requires a DKIM keypair, which will be generated in the next section, so for now, you can’t add that record yet.
Firewall
Make sure that iptables isn’t blocking incoming connections to ports 25, 465, 587 and 993 on your gateway VPS and home server. For this, modify your iptables.sh and whitelist ports with this configuration:
$IPTABLES_CMD -A INPUT -p tcp --dport 25 -j ACCEPT
$IPTABLES_CMD -A INPUT -p tcp -m multiport --dports 465,587 -j ACCEPT
$IPTABLES_CMD -A INPUT -p tcp --dport 993 -j ACCEPT
Setting up docker-compose and configuring the mail server
In your docker-compose.yml add a new service for mail server:
services:
mailserver:
image: ghcr.io/docker-mailserver/docker-mailserver:15.1
container_name: mailserver
hostname: mail.example.org
env_file: mailserver.env
ports:
- "25:25"
- "465:465"
- "587:587"
- "993:993"
volumes:
- ./persistent/mailserver/mail:/var/mail
- ./persistent/mailserver/mail-state:/var/mail-state/
- ./persistent/mailserver/mail-logs:/var/log/mail/
- ./persistent/mailserver/config:/tmp/docker-mailserver/
- /etc/localtime:/etc/localtime:ro
- /etc/letsencrypt:/etc/letsencrypt:ro
restart: always
stop_grace_period: 1m
healthcheck:
test: "ss --listening --ipv4 --tcp | grep --silent ':smtp' || exit 1"
timeout: 3s
retries: 0
networks:
- internal
- external
depends_on:
- openldap
Fetch the example mailserver.env configuration
$ wget https://raw.githubusercontent.com/docker-mailserver/docker-mailserver/master/mailserver.env
This configuration file provides a lot of options, which you can read about on docker-mailserver's documentation page.
For a simple LDAP authentication based configuration with Postfix, Dovecot, SpamAssassin and Amavis, the configuration could be something like this:
# For getting email notifications about when update to docker-mailserver is available
ENABLE_UPDATE_CHECK=1
UPDATE_CHECK_INTERVAL=1d
# Allow each user to only send with their own or their alias addresses
SPOOF_PROTECTION=1
# Enable DKIM, DMARC and SPF
ENABLE_OPENDKIM=1
ENABLE_OPENDMARC=1
ENABLE_POLICYD_SPF=1
# Enable IMAP and disable POP3 (use POP3 only if you don't want to keep emails on your server)
ENABLE_POP3=0
ENABLE_IMAP=1
# Depending on your server's hardware, you might want to enable ClamAV for virus scanning emails
# but it can become quite resource intensive
# For this configuration, I disable it
ENABLE_CLAMAV=0
# Ensure that rspamd is disabled, since this configuration uses SpamAssassin instead
ENABLE_RSPAMD=0
# Enable SpamAssassin and Amavis
ENABLE_SPAMASSASSIN=1
ENABLE_SPAMASSASSIN_KAM=1
ENABLE_AMAVIS=1
# Move spam messages to Junk folder to avoid missing out on
# legitimate emails that might accidentally get flagged
SPAMASSASSIN_SPAM_TO_INBOX=1
MOVE_SPAM_TO_JUNK=1
MARK_SPAM_AS_READ=0
# Enable DNS block lists in Postscreen
ENABLE_DNSBL=1
POSTSCREEN_ACTION=enforce
# Make the server use Letsencrypt certificates
SSL_TYPE=letsencrypt
# Don't enforce Mailbox size limit (by default it is set to 128MB)
POSTFIX_MAILBOX_SIZE_LIMIT=0
# Enable mailservers to query mailbox for disk-space used
# and capacity limit
ENABLE_QUOTAS=1
# Set the maximum message size limit to 20MB
POSTFIX_MESSAGE_SIZE_LIMIT=20971520
# Make Postfix and Dovecot listen on only IPv4 interfaces
# since most likely the home server has IPv6 disabled
POSTFIX_INET_PROTOCOLS=ipv4
DOVECOT_INET_PROTOCOLS=ipv4
# Enable MTA-STS support for outbound mail to prevent downgrade attacks
ENABLE_MTA_STS=1
# LDAP configuration
ACCOUNT_PROVISIONER=LDAP
LDAP_START_TLS=no
LDAP_SERVER_HOST=ldap://openldap:389
LDAP_SEARCH_BASE=ou=people,dc=example,dc=org
LDAP_BIND_DN=cn=admin,dc=example,dc=org
LDAP_QUERY_FILTER_USER=(mail=%s)
LDAP_QUERY_FILTER_GROUP=(|)
LDAP_QUERY_FILTER_ALIAS=(mailAlias=%s)
LDAP_QUERY_FILTER_DOMAIN=(mail=*@%s)
DOVECOT_TLS=no
DOVECOT_USER_FILTER=(&(objectClass=inetOrgPerson)(mail=%u))
DOVECOT_PASS_FILTER=
DOVECOT_MAILBOX_FORMAT=maildir
DOVECOT_AUTH_BIND=yes
DOVECOT_USER_ATTRS=homeDirectory=home,=uid=5000,=gid=5000,=mail=maildir:/var/mail/%u/Maildir
DOVECOT_PASS_ATTRS=mail=user,userPassword=password
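As a quick sanity check of the POSTFIX_MESSAGE_SIZE_LIMIT value above: the limit is given in plain bytes, so 20 MiB works out to:

```shell
# 20 MiB expressed in bytes, as used for POSTFIX_MESSAGE_SIZE_LIMIT
echo $((20 * 1024 * 1024))  # prints 20971520
```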
Once the environment variable configuration has been successfully created, start the container with docker compose up mailserver -d.
Gateway proxy using nginx
Now the mailserver is up and running, but you still cannot send or receive mail. That's because the MX record resolves to the gateway VPS's IP address, while the mail server itself runs on the home server. One way to reverse-proxy email protocols to the home server is to wrap the packets in the PROXY protocol, which can be done with nginx.
For the mail server to accept the PROXY protocol, you'll need to manually edit the Postfix and Dovecot configuration files. Open ./persistent/mailserver/config/postfix-master.cf and add the following lines:
smtp/inet/postscreen_upstream_proxy_protocol=haproxy
submission/inet/smtpd_upstream_proxy_protocol=haproxy
submissions/inet/smtpd_upstream_proxy_protocol=haproxy
For Dovecot to accept PROXY protocol, open ./persistent/mailserver/config/dovecot.cf and add following configuration:
haproxy_trusted_networks = 10.200.200.1
service imap-login {
inet_listener imaps {
haproxy = yes
}
}
Restart the mailserver with docker compose restart mailserver and move to gateway VPS.
First, on the VPS, install the nginx stream module:
$ sudo apt install libnginx-mod-stream
Then, in the /etc/nginx/modules-enabled directory, create a symlink to the module's configuration:
$ cd /etc/nginx/modules-enabled
$ sudo ln -s /usr/share/nginx/modules-available/mod-stream.conf .
Open /etc/nginx/nginx.conf and add the following block:
stream {
include /etc/nginx/streams-enabled/*;
}
Create directories for stream configurations:
$ sudo mkdir -p /etc/nginx/streams-available /etc/nginx/streams-enabled
In /etc/nginx/streams-available, create a new configuration named email.conf with the following contents:
# Email transfer
server {
listen 25;
proxy_pass mail.infra.example.org:25;
proxy_protocol on;
}
# SMTP with SSL/TLS
server {
listen 465;
proxy_pass mail.infra.example.org:465;
proxy_protocol on;
}
# SMTP with STARTTLS
server {
listen 587;
proxy_pass mail.infra.example.org:587;
proxy_protocol on;
}
# IMAP with SSL/TLS
server {
listen 993;
proxy_pass mail.infra.example.org:993;
proxy_protocol on;
}
Symlink the new file to /etc/nginx/streams-enabled, check the configuration with sudo nginx -t and if successful, restart the daemon.
The email server proxy should now be set up successfully.
Testing the configuration
By now, your email server should be running and accessible from the Internet. To test whether everything works properly, open an email client, such as Thunderbird or Outlook, and try adding a new email account.
IMAP:
- Hostname: mail.example.org
- Port: 993
- Connection security: SSL/TLS
- Username: <username>@example.org
- Password: <ldap-password>
SMTP:
- Hostname: mail.example.org
- Port: 465
- Connection security: SSL/TLS
- Username: <username>@example.org
- Password: <ldap-password>
If the authentication is successful, try sending an email from another provider to your email address. If your client can successfully retrieve the email, then everything is working properly.
DKIM setup
By now, the mail server should be able to receive and send emails. But without DKIM, it is likely that outgoing emails get flagged as spam by the big email providers. To fix that, you should generate a DKIM keypair and set the appropriate DNS records.
To do that, access your home server and run:
$ docker compose exec mailserver setup config dkim domain 'example.org' keysize 2048
Once that has been completed, open ./persistent/mailserver/config/opendkim/keys/example.org/mail.txt. The file should look something like this:
mail._domainkey IN TXT ( "v=DKIM1; h=sha256; k=rsa; "
"p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2scEeUfySGumi4l7uXwN38S8AvYyC6sWTAu99uZi150zTSrQ+8AZWa3gqPLHveLh5YobQgT/5gZp3jFezeoHR3xxwC3XGNeMC+v7EB0FcwryPDB2yanIxPp8JDCgKu42S2GifO8dHfZM76hyF0wSX2wXvXfC3qio7c8zVpt8peTOrlb8sn7WLW61VCsbvPzzW86XZjzwNe3miF"
"fTagCrNxH6M10bJkMwwve/JBYuACr7P7PrxzzyCTld8HU4tc/BZJjyYU/LfuhLJs3NZ3mgbIAl07ktNTXhh4gNmcSk8f0kIe/MoPzLhN8siFcNUgxBihlRdzCYgwIP0MWWL5rvWwIDAQAB" ) ; ----- DKIM key mail for example.org
Copy the TXT value and format it in your favourite text editor so that the key value p=... becomes a single line and the string no longer contains quotation marks ("). Copy the formatted string and open your domain registrar's DNS settings. Create a new TXT entry:
mail._domainkey.example.org. TXT "v=DKIM1; h=sha256; k=rsa; p=..."
The value itself should be the formatted DKIM string without any quotation marks.
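Reformatting by hand is error-prone; the quoted fragments can also be collapsed into a single string with standard tools (a sketch, using the mail.txt path from the step above):

```shell
# Print only the quoted fragment of each line, then join them into one string
$ sed -n 's/.*"\(.*\)".*/\1/p' \
    ./persistent/mailserver/config/opendkim/keys/example.org/mail.txt \
  | tr -d '\n'; echo
```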
To verify that the DKIM record was set up correctly and the DNS records have propagated, use something like the MXToolbox DKIM checker. If everything looks good, you can move on to the next section and test your email's deliverability.
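The published record can also be checked from the command line:

```shell
# Query the DKIM TXT record directly; an empty answer means it has not propagated yet
$ dig +short TXT mail._domainkey.example.org
```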
Testing email deliverability
Once the DKIM record has been published, you can check whether outgoing emails are properly signed by using one of these tools:
- dkimvalidator
- mail-tester (rate-limited, so use it sparingly)
If the DKIM signature checks pass, check your overall message deliverability with mail-tester. This gives you a slightly better overview of how "good" your emails look to spam filters.
When everything looks good, you can try sending emails to large service providers such as Gmail and Outlook. Gmail is notoriously strict with its spam filtering, and if your emails still get flagged as spam, try asking friends to send emails from their Gmail addresses to yours. Assuming that everything else, such as the PTR record of your gateway VPS, DKIM, DMARC and SPF, is properly set up, this will likely make your emails look more legitimate to Google.
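For reference, minimal SPF and DMARC records might look like the following (the policy values here are example choices, not requirements; adjust them to your setup):

```
example.org.         IN TXT "v=spf1 mx -all"
_dmarc.example.org.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
```

"v=spf1 mx -all" allows only your MX hosts to send mail for the domain, and the DMARC policy asks receivers to quarantine failing mail and send aggregate reports to the given address.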
Ideas and best practices
Almost anything can be hosted using Docker, but that doesn't necessarily mean doing so is a good idea. One reason is that, due to the nature of Docker networking, you lose the client's IP address unless you set up some kind of reverse proxy in front, which partly defeats the purpose of dockerizing everything.
Despite that, there are still many readily available Docker images on Docker Hub that you can deploy with minimal effort, such as Nextcloud, Forgejo, Matrix, Roundcube or even a whole email server. And all things considered, for more complex services, using Docker makes a lot of sense.
To make the process of using Docker as smooth as possible, I have compiled a list of best practices you can apply when hosting dockerized services:
- Technically, anyone can push anything to the docker.io registry. Be careful about what you pull and at least check the Dockerfile used to build the image (or, even better, build your own image from software retrieved from official sources).
- Use appropriate tags, not latest for everything. Docker doesn't automatically re-pull your images when updates are available, and proper tags help you keep track of the software versions running on your server.
- Only expose the ports that are needed (this one is pretty obvious).
- Keep your images updated. Don’t just deploy and forget about them.
- Use compose for easier management.
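As an illustration of the last two points, a minimal compose file for a single service might look like this (the image tag and port are example values for a Forgejo deployment, adapt them to whatever you are hosting):

```yaml
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9   # pinned major version, not :latest
    restart: unless-stopped
    ports:
      - "3000:3000"                         # only the one port the service needs
    volumes:
      - ./forgejo-data:/data                # persistent state survives re-pulls
```

With this in place, updating is an explicit, auditable step: bump the tag, then run docker compose pull and docker compose up -d.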
Final words
Self-hosting seems scary and impossible to some, but this article showcased how, with a gateway VPS and WireGuard, you can self-host pretty much anything you want, even if your ISP doesn't allow port forwarding.
I hope that this article gave some people the confidence to start their own self-hosting journey and become more independent from the grip of big online platforms.