Something I run into at coffee shops, hotels, and so on - they often block most TCP ports and restrict your traffic to HTTP and HTTPS.

We can get around that by exposing multiple services over the same port. By the end of this blog post we’ll have HTTPS, SSH, and an IRC bouncer all running on port 443.

Software Stack

We’ll be using HAProxy to act as our multiplexer. The main benefit of HAProxy is that we can use the PROXY protocol, so our backend services still see and log the real client IP address.

For web services, we’ll use nginx, and for our IRC bouncer we’ll use soju, paired with gamja as a client.

Pre-requisites

I’m assuming an Ubuntu system, so there may be some differences for package installation and system management depending on your distro, but things like the actual HAProxy config should remain the same.

You’ll need a server on the public internet and a domain, because we’ll be using Let’s Encrypt for SSL.

In any shell script examples I’ll forgo using sudo and just indicate commands that need root privileges with a # character, and regular user privileges with a $ character.

For example, when I write:

# apt-get update

That particular command should be run as root. Whether it’s in a root shell or via sudo is up to you.

Set up DNS

Today I’ll be setting up a few hostnames:

  • example.com
  • bnc.example.com
  • irc.example.com
  • www.example.com

I’m going to make example.com a set of A and AAAA records, and the rest are CNAME records to example.com. I’ll leave configuring DNS up to you.
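
For illustration, the records might look something like this in zone-file syntax (the addresses are placeholders - use your server’s real ones):

example.com.      IN  A      192.0.2.10
example.com.      IN  AAAA   2001:db8::10
bnc.example.com.  IN  CNAME  example.com.
irc.example.com.  IN  CNAME  example.com.
www.example.com.  IN  CNAME  example.com.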

Install Packages

We’ll need HAProxy, nginx, a go compilation environment for soju, and pre-reqs for dehydrated (my preferred Let’s Encrypt client), git, stow, and iptables-persistent.

I know, I know, we’re supposed to use nftables now or whatever but old habits die hard.

# apt-get install haproxy nginx build-essential openssl \
  curl git stow iptables-persistent libsqlite3-dev scdoc

Note that I’m not installing golang (required to build soju) or nodejs (required to build gamja) - we’ll grab newer versions of both of those later.

Set up iptables rules

These are some basic rules to:

  • allow all traffic on localhost.
  • allow any related/established connections.
  • allow ICMP traffic but limit it to 15 queries/sec.
  • allow ports 22, 80, 443 (SSH, HTTP, HTTPS).
  • reject everything else.
# ip6tables -A INPUT -i lo -j ACCEPT
# ip6tables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# ip6tables -A INPUT -p ipv6-icmp -m limit --limit 15/sec -j ACCEPT
# ip6tables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# ip6tables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# ip6tables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
# ip6tables -A INPUT -j REJECT
# ip6tables-save > /etc/iptables/rules.v6
# iptables -A INPUT -i lo -j ACCEPT
# iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# iptables -A INPUT -p icmp -m limit --limit 15/sec -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
# iptables -A INPUT -j REJECT
# iptables-save > /etc/iptables/rules.v4

Install Dehydrated

This is probably somewhat overkill - placing dehydrated into a stow directory and linking it into /usr/local - but it’s how I like to manage compiled software: all my locally-installed packages end up under /usr/local/stow.

# mkdir -p /usr/local/stow/dehydrated-git/{bin,share}
# git clone https://github.com/dehydrated-io/dehydrated /usr/local/stow/dehydrated-git/share/dehydrated
# ln -s ../share/dehydrated/dehydrated /usr/local/stow/dehydrated-git/bin/dehydrated
# stow -d /usr/local/stow dehydrated-git
# cd /usr/local/stow/dehydrated-git/share/dehydrated
# git checkout $(git tag --sort=committerdate | tail -1)
# mkdir /srv/acme

Now dehydrated should be in your path, and also available at /usr/local/bin/dehydrated. /srv/acme will be the working directory used for refreshing SSL certificates.

Create a temporary SSL cert

We’ll generate a temporary garbage SSL certificate to use with HAProxy so it can bind to port 443 without issue. You’ll be prompted for some info - just smash that enter key until you get through it.

# cd /etc/ssl/private
# openssl req -new -newkey rsa:2048 -days 1 -nodes -x509 -keyout temp.key -out temp.crt
# cat temp.crt temp.key > temp.pem
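
If you’d rather skip the prompts entirely, you can pass a throwaway subject instead, something like:

# openssl req -new -newkey rsa:2048 -days 1 -nodes -x509 \
  -subj "/CN=localhost" -keyout temp.key -out temp.crt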

nginx global config

In our nginx config, we’ll listen on Unix sockets rather than TCP ports. If you want, you can use TCP ports instead. I just find it easier to read a path and immediately know “oh, this is nginx’s HTTP listener” as opposed to remembering “port 8000 is nginx’s plain HTTP listener, 8100 is HTTPS.”

So, assuming your nginx is set up to load files from /etc/nginx/conf.d/*.conf, create /etc/nginx/conf.d/map_connection_upgrade.conf:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

We’ll be able to use the $connection_upgrade variable now, which is useful for proxying WebSocket connections (we’ll do this with soju + gamja later on).

We’ll also want to create a file to use IP info from PROXY protocol. Create /etc/nginx/conf.d/realip.conf:

set_real_ip_from unix:;
real_ip_header proxy_protocol;

This will allow all Unix sockets to set IP info using PROXY Protocol. If you want nginx to listen on TCP instead, update the set_real_ip_from directive to the IP or IP range of your HAProxy host.
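
For example, if HAProxy were connecting to nginx over loopback TCP instead, it would look roughly like:

set_real_ip_from 127.0.0.1;
real_ip_header proxy_protocol;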

By default, nginx will issue redirects using absolute URLs, and since we’re proxying over plain-text, it assumes it should use http for all redirects.

One option is to turn off absolute redirects. The original HTTP/1.1 specs required absolute redirects, but relative URLs have been allowed since 2014.

So create /etc/nginx/conf.d/absolute_redirect.conf:

absolute_redirect off;

Finally we’ll create our default sites. There should be an existing config file named /etc/nginx/sites-enabled/default; I usually delete it and move the default site config into a file under conf.d, since all it really does is define some listeners. That way I know all these default listeners are defined first, since files under conf.d are typically included before anything under sites-enabled.

Create /etc/nginx/conf.d/default_sites.conf:

server {
  listen unix:/var/lib/haproxy/nginx_http proxy_protocol default_server;
  listen unix:/var/lib/haproxy/nginx_https proxy_protocol default_server;
  listen unix:/var/lib/haproxy/nginx_http2 http2 proxy_protocol default_server;
  server_name _;

  return 444;
}

nginx site config

Next we’ll create our first site for example.com - we’ll add more later, I like to get through this initial setup first with just one site.

This is a simple config that will serve files from /srv/http/example.com, issue HTTPS redirects for everything besides cert renewal, and redirect requests for www.example.com to example.com.

So under /etc/nginx/sites-available/example.com:

# our server for www.example.com, plain HTTP
server {
  listen unix:/var/lib/haproxy/nginx_http proxy_protocol;
  server_name www.example.com;

  access_log /var/log/nginx/access.example.com.log;
  error_log /var/log/nginx/error.example.com.log;

  root /srv/http/example.com;
  index index.html;

  location /.well-known/acme-challenge {
    alias /srv/acme;
  }

  return 301 https://example.com$request_uri;
}

# our server for www.example.com, HTTPS and HTTP2
server {
  listen unix:/var/lib/haproxy/nginx_https proxy_protocol;
  listen unix:/var/lib/haproxy/nginx_http2 http2 proxy_protocol;

  server_name www.example.com;

  access_log /var/log/nginx/access.example.com.log;
  error_log /var/log/nginx/error.example.com.log;

  root /srv/http/example.com;
  index index.html;

  location /.well-known/acme-challenge {
    alias /srv/acme;
  }

  return 301 https://example.com$request_uri;
}

# example.com, plain HTTP
server {
  listen unix:/var/lib/haproxy/nginx_http proxy_protocol;
  server_name example.com;

  access_log /var/log/nginx/access.example.com.log;
  error_log /var/log/nginx/error.example.com.log;

  root /srv/http/example.com;
  index index.html;

  location /.well-known/acme-challenge {
    alias /srv/acme;
  }

  return 301 https://$host$request_uri;
}

# example.com, HTTPS and HTTP2
server {
  listen unix:/var/lib/haproxy/nginx_https proxy_protocol;
  listen unix:/var/lib/haproxy/nginx_http2 http2 proxy_protocol;

  server_name example.com;

  access_log /var/log/nginx/access.example.com.log;
  error_log /var/log/nginx/error.example.com.log;

  root /srv/http/example.com;
  index index.html;

  location /.well-known/acme-challenge {
    alias /srv/acme;
  }

  location = /robots.txt {
      allow all;
      log_not_found off;
      access_log off;
  }

}

Now we’ll delete the default site config and link our new site config in, and restart.

# rm /etc/nginx/sites-enabled/default
# ln -s ../sites-available/example.com /etc/nginx/sites-enabled/example.com
# systemctl restart nginx

Configure HAProxy

Now we’ll get a bit of HAProxy config going, using our garbage SSL certificate from earlier.

We’ll stick with the default config. We could change some defaults, like setting the default mode to tcp since that’s the only mode we’ll use anyway, but that really just saves us one line per frontend/backend, so I’m not too worried about it.

At the end of the existing /etc/haproxy/haproxy.cfg you’ll want to add:

# real straightforward - bind to port 80, pass to the nginx backend. If
# we wanted we could also check for SSH and pass that to the SSH backend.

frontend port_80
        mode tcp
        bind :80
        default_backend nginx_http

# this is a little more interesting - we bind to port 443 but do NOT
# set up SSL. This allows us to detect if the incoming traffic is actually
# SSH first and to send to the SSH backend, otherwise we go to port_443_internal,
# which just directs to another "frontend" that adds SSL.

frontend port_443
        mode tcp
        bind :443

        acl client_attempts_ssh payload(0,7) -m bin 5353482d322e30

        tcp-request inspect-delay 2s
        tcp-request content accept if { req.ssl_hello_type 1 }
        tcp-request content accept if client_attempts_ssh
        tcp-request content reject

        use_backend ssh if client_attempts_ssh
        default_backend port_443_internal

# The abns@haproxy-https lets us "listen" without actually having to bind
# to a TCP port, allowing our backend to connect to an "internal frontend."

backend port_443_internal
        mode tcp
        timeout tunnel 1h
        server loopback-for-https abns@haproxy-https send-proxy-v2

# This frontend is where we enable SSL, then route to different backends
# based on ALPN (HTTP/2 vs HTTP/1.x vs IRC), SNI hostname, or whether we see
# SSH again. The port_443 frontend above handled plain SSH on port 443; this
# lets us also support SSH wrapped in TLS.

frontend https
        mode tcp
        bind abns@haproxy-https accept-proxy ssl crt temp.pem
        log-format "%ci:%cp [%t] %ft %b/%s %Tw/%Tc/%Tt %B %ts %ac/%fc/%bc/%sc/%rc %sq/%bq %sslc %sslv"

        acl content_present req.len gt 0
        acl client_attempts_ssh payload(0,7) -m bin 5353482d322e30

        tcp-request inspect-delay 10s
        tcp-request content accept if content_present

        # if it looks like HTTP it should go to nginx
        use_backend nginx_http2 if { ssl_fc_alpn -i h2 }
        use_backend nginx_https if { ssl_fc_alpn -i http/1.1 }
        use_backend nginx_https if { ssl_fc_alpn -i http/1.0 }
        use_backend nginx_https if HTTP

        use_backend bnc if { ssl_fc_alpn -i irc }
        use_backend bnc if { ssl_fc_sni -i bnc.example.com }

        use_backend ssh if client_attempts_ssh
        default_backend nginx_https

backend nginx_http2
        mode tcp
        option tcpka
        timeout tunnel 1h
        server server1 /nginx_http2 send-proxy

backend nginx_https
        mode tcp
        option tcpka
        timeout tunnel 1h
        server server1 /nginx_https send-proxy

backend nginx_http
        mode tcp
        option tcpka
        server server1 /nginx_http send-proxy

backend bnc
        mode tcp
        option tcpka
        timeout tunnel 1h
        server server1 127.0.0.1:6667 send-proxy

backend ssh
        mode tcp
        option tcpka
        timeout tunnel 1h
        server ssh 127.0.0.1:22
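
As an aside, the magic bytes in the client_attempts_ssh ACL are just the ASCII string "SSH-2.0", the start of the identification string an SSH client sends when it connects. You can reproduce the hex yourself:

$ printf 'SSH-2.0' | xxd -p
5353482d322e30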

Restart HAProxy to load the new config:

# systemctl restart haproxy

And do a quick sanity test to make sure we can hit nginx:

$ curl http://example.com

Getting a 301 redirect back is fine - that means we made it all the way through.
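
If you want to see the status code explicitly:

$ curl -s -o /dev/null -w '%{http_code}\n' http://example.com
301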

Set up Dehydrated

Now we’ll do a bit of config for dehydrated.

# mkdir /etc/dehydrated
# echo "WELLKNOWN=/srv/acme" > /etc/dehydrated/config
# echo "HOOK=/etc/dehydrated/hook" >> /etc/dehydrated/config
# echo "example.com www.example.com" > /etc/dehydrated/domains.txt

Then we’ll create a hook script; this will be run whenever dehydrated does anything. We’ll use it to create a combined key/cert file for HAProxy and to reload HAProxy.

So in /etc/dehydrated/hook write:

#!/bin/sh

if [ -z "$1" ] ; then
	exit 0
fi

if [ "$1" != "deploy_cert" ] ; then
	exit 0
fi

DOMAIN="$2"
KEYFILE="$3"
CERTFILE="$4"
FULLCHAINFILE="$5"

COMBINED="$(dirname ${FULLCHAINFILE})/combined.pem"

cat "${FULLCHAINFILE}" "${KEYFILE}" > "${COMBINED}"

systemctl reload haproxy.service

And make it executable:

# chmod +x /etc/dehydrated/hook

Now we’ll register our account:

# dehydrated --register --accept-terms

And now we can get our first set of certificates:

# dehydrated --cron

Reconfigure HAProxy

HAProxy isn’t using the new cert yet, so we’ll update its config a bit.

In /etc/haproxy/haproxy.cfg, look for the crt-base line - it should be set to /etc/ssl/private. Change it to /etc/dehydrated/certs:

crt-base /etc/dehydrated/certs

This will make HAProxy prepend any SSL certificate path with this directory.

Then under the https frontend, change your bind line to use example.com/combined.pem:

bind abns@haproxy-https accept-proxy ssl crt example.com/combined.pem

Finally, restart HAProxy and do one more sanity check. Getting a 404 back is fine - all that matters is that the certificate is valid:

# systemctl restart haproxy
$ curl https://example.com

Try connecting with SSH

Now you should be able to connect with SSH! From some other host try running:

$ ssh -p 443 -l username example.com

Additionally, say you’re behind a firewall that checks that port 443 actually carries TLS traffic. We can handle that too!

$ ssh -o ProxyCommand="openssl s_client -connect example.com:443 -quiet" example.com

This will wrap your SSH connection in TLS.
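
If you use that often, it might be worth an entry in ~/.ssh/config, something along these lines (the host alias and username are placeholders):

Host example-tls
    HostName example.com
    User username
    ProxyCommand openssl s_client -connect %h:443 -quiet

Then you can just run ssh example-tls.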

Compile and install soju

Right now, soju needs at least Go 1.19 to compile. At the time of this writing, Go 1.22 is the latest, so I’ll just grab that.

$ curl -R -L -O https://go.dev/dl/go1.22.0.linux-amd64.tar.gz
$ tar xf go1.22.0.linux-amd64.tar.gz
$ export PATH="$(pwd)/go/bin:$PATH"
$ git clone https://git.sr.ht/~emersion/soju
$ cd soju
$ git checkout $(git tag --sort=committerdate | tail -1)
$ make GOFLAGS="-tags=libsqlite3"
# make install PREFIX=/usr/local/stow/soju-$(git tag --sort=committerdate | tail -1)
# stow -d /usr/local/stow soju-$(git tag --sort=committerdate | tail -1)

Next we’ll create a user for soju:

# useradd -r -s /usr/sbin/nologin -d /var/lib/soju soju

And we’ll create a config for soju that listens on IRC and WebSocket ports. In /etc/soju/config:

db sqlite3 /var/lib/soju/main.db
message-store fs /var/lib/soju/logs/
listen ws+insecure://127.0.0.1:8002
listen irc+insecure://127.0.0.1:6667
listen unix+admin:///run/soju/admin
hostname bnc.example.com
title Soju
accept-proxy-ip localhost

Next we’ll define the soju service - create a systemd unit at /etc/systemd/system/soju.service.

Note - this is just copied from soju’s example service file, no changes.

[Unit]
Description=soju IRC bouncer service
Documentation=https://soju.im/
Documentation=man:soju(1) man:sojuctl(1)
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=soju
Group=soju
DynamicUser=yes
StateDirectory=soju
ConfigurationDirectory=soju
RuntimeDirectory=soju
AmbientCapabilities=CAP_NET_BIND_SERVICE
ExecStart=/usr/local/bin/soju
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then start and enable the service:

# systemctl start soju.service
# systemctl enable soju.service

Now we can create our first soju user:

# sojuctl user create -username (your username) -password (your password)

Add IRC nginx sites

We’ll create two new sites in nginx:

  • bnc.example.com - mostly so we can get a certificate for the bouncer’s hostname.
  • irc.example.com - where we will host gamja.

In /etc/nginx/sites-available/bnc.example.com:

server {
  listen unix:/var/lib/haproxy/nginx_http proxy_protocol;
  server_name bnc.example.com;

  access_log /var/log/nginx/access.bnc.example.com.log;
  error_log /var/log/nginx/error.bnc.example.com.log;

  root /dev/null;
  index index.html;

  location /.well-known/acme-challenge {
    alias /srv/acme;
  }

  return 301 https://bnc.example.com$request_uri;
}

server {
  listen unix:/var/lib/haproxy/nginx_https proxy_protocol;
  listen unix:/var/lib/haproxy/nginx_http2 http2 proxy_protocol;

  server_name bnc.example.com;

  access_log /var/log/nginx/access.bnc.example.com.log;
  error_log /var/log/nginx/error.bnc.example.com.log;

  root /dev/null;
  index index.html;

  location /.well-known/acme-challenge {
    alias /srv/acme;
  }
}

In /etc/nginx/sites-available/irc.example.com:

server {
  listen unix:/var/lib/haproxy/nginx_http proxy_protocol;
  server_name irc.example.com;

  access_log /var/log/nginx/access.irc.example.com.log;
  error_log /var/log/nginx/error.irc.example.com.log;

  root /srv/http/irc.example.com;
  index index.html;

  location /.well-known/acme-challenge {
    alias /srv/acme;
  }

  return 301 https://irc.example.com$request_uri;
}

server {
  listen unix:/var/lib/haproxy/nginx_https proxy_protocol;
  listen unix:/var/lib/haproxy/nginx_http2 http2 proxy_protocol;

  server_name irc.example.com;

  access_log /var/log/nginx/access.irc.example.com.log;
  error_log /var/log/nginx/error.irc.example.com.log;

  root /srv/http/irc.example.com;
  index index.html;

  location /.well-known/acme-challenge {
    alias /srv/acme;
  }

  location /socket {
    proxy_pass http://127.0.0.1:8002;
    proxy_read_timeout 600s;
    proxy_buffering off;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # note: we don't use $scheme, that will always be set to http since
    # HAProxy is terminating SSL for us. Instead we say it's HTTPS.
    proxy_set_header X-Forwarded-Proto https;
  }
}

Link in the new configs and reload nginx:

# ln -s ../sites-available/bnc.example.com /etc/nginx/sites-enabled/bnc.example.com
# ln -s ../sites-available/irc.example.com /etc/nginx/sites-enabled/irc.example.com
# systemctl reload nginx

Now update /etc/dehydrated/domains.txt and add the new hostnames to the end of the line, so the file should now read:

example.com www.example.com bnc.example.com irc.example.com

Run dehydrated:

# dehydrated --cron

You should now be able to configure your IRC client to connect to bnc.example.com, port 443 with TLS.
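
Once you’re connected (log in with the soju username and password you created above), you can add an upstream network by messaging soju’s BouncerServ - for example, something like:

/msg BouncerServ network create -addr ircs://irc.libera.chat -name libera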

Build Gamja

At the time of writing, the latest LTS of Node.js is 20.11.0, so that’s what I’m grabbing.

# mkdir -p /srv/http/irc.example.com
# chown (YOUR-USER):(YOUR-USER) /srv/http/irc.example.com
$ curl -R -L -O https://nodejs.org/dist/v20.11.0/node-v20.11.0-linux-x64.tar.xz
$ tar xf node-v20.11.0-linux-x64.tar.xz
$ export PATH="$(pwd)/node-v20.11.0-linux-x64/bin:$PATH"
$ git clone https://git.sr.ht/~emersion/gamja /srv/http/irc.example.com
$ cd /srv/http/irc.example.com
$ git checkout $(git tag --sort=committerdate | tail -1)
$ npm install --production

Set up automatic certificate renewal

Just add dehydrated --cron to your root crontab somehow. For example:

# crontab -e
( insert "0 0 * * * /usr/local/bin/dehydrated --cron > /dev/null 2>&1", save)

More things you could do

Transparent PROXY protocol for SSH

Right now, if you connect over SSH, your SSH server will only see the connection as coming from localhost.

You could set up mmproxy or go-mmproxy to act as a transparent proxy in front of sshd. That way your SSH logs would show the actual client IP address, and tools like fail2ban would work correctly.
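
On the HAProxy side that’s mostly a matter of pointing the ssh backend at the proxy and sending it the PROXY protocol header. A rough sketch, assuming go-mmproxy is listening on a hypothetical 127.0.0.1:2222 and forwarding to sshd on port 22 (go-mmproxy also needs a couple of local routing rules so the spoofed source addresses work - see its README):

backend ssh
        mode tcp
        option tcpka
        timeout tunnel 1h
        server ssh 127.0.0.1:2222 send-proxy-v2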

More services!

I think you could multiplex even more services this way.

Basically any protocol where the client speaks first should be easy to do. For example, XMPP clients send an opening XML stanza like:

<?xml version='1.0'?>
      <stream:stream
          from='juliet@im.example.com'
          to='im.example.com'
          version='1.0'
          xml:lang='en'
          xmlns='jabber:client'
          xmlns:stream='http://etherx.jabber.org/streams'>

You could possibly have HAProxy just check for jabber within the first so many bytes and route that traffic to an XMPP server.
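
An untested sketch of what that might look like in the port_443 frontend (alongside the existing SSH ACL, with a matching tcp-request content accept rule), assuming an XMPP server on a hypothetical local port 5222:

        acl client_attempts_xmpp req.payload(0,512) -m sub jabber:client

        tcp-request content accept if client_attempts_xmpp
        use_backend xmpp if client_attempts_xmpp

backend xmpp
        mode tcp
        timeout tunnel 1h
        server xmpp 127.0.0.1:5222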

Using SNI works regardless of whether the protocol is client-speaks-first or server-speaks-first. For example, I could set up imap-proxy.example.com and configure HAProxy to route to an IMAP server if the SNI matches (and I’d likely have to configure the IMAP server to accept the connection as secure, etc).
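
That would follow the same pattern as the bnc routing in the https frontend - roughly, with imap-proxy.example.com and a local IMAP server on port 143 standing in as examples:

        use_backend imap if { ssl_fc_sni -i imap-proxy.example.com }

backend imap
        mode tcp
        option tcpka
        timeout tunnel 1h
        server imap 127.0.0.1:143

Since TLS is already terminated in that frontend, the connection to the IMAP server would be plaintext over localhost - which is why the IMAP server would need to be told to treat it as secure.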