What is the best way to route all my outgoing network requests via WireGuard?

The Problem

I am trying to connect to a MongoDB Atlas cluster from a Fly VM that is running a Node app. The issue is that Atlas uses an IP whitelist, and Fly, so far, does not support static/fixed IPs out of the box.

What I’ve done

  1. Created a DigitalOcean Droplet (with a fixed IP) and set up WireGuard so that Fly and the Droplet can communicate easily.
  2. Whitelisted the Droplet’s IP so that it can connect to MongoDB.
  3. Locally, on my development machine, set up WireGuard plus a SOCKS proxy by SSH’ing to my Droplet, e.g. ssh -D 5665 -N -f -q -i ./ssh/cert droplet. The Node driver for MongoDB supports SOCKS proxying out of the box, which is why I chose it, and I can confirm that I am able to connect to MongoDB from my non-whitelisted IP.
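
For reference, the driver takes the proxy settings as client options (a sketch; the option names proxyHost, proxyPort, proxyUsername, proxyPassword are the driver's, but the PROXY_* env-var names are placeholders of my own):

```javascript
// Sketch: build the MongoDB Node driver options for routing through a
// SOCKS proxy. The PROXY_* env-var names are illustrative only.
function socksProxyOptions(env) {
  return {
    proxyHost: env.PROXY_HOST,                  // e.g. 127.0.0.1 for an `ssh -D` tunnel
    proxyPort: Number(env.PROXY_PORT || 1080),  // ssh -D 5665 => 5665
    proxyUsername: env.PROXY_USER,              // optional, if the proxy requires auth
    proxyPassword: env.PROXY_PASS,              // optional
  };
}

// Usage (requires `npm install mongodb socks`):
//   const { MongoClient } = require("mongodb");
//   const client = new MongoClient(uri, socksProxyOptions(process.env));
```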

The Question

What is the simplest way to achieve a similar setup on a Fly VM?

I’ve considered running an SSH SOCKS proxy from within the Docker container, but that seems like a hassle and a security issue, with having to pass around SSH keys, etc.

Is there a way to set up a SOCKS proxy (or equivalent functionality) using WireGuard, or some other method?

Happy to clarify anything :slight_smile:


To answer my own question, the easiest solution was to run a Dante SOCKS proxy server on the DigitalOcean Droplet.

This lets me run a proxy that only accepts connections on the WireGuard interface. Authentication is via username/password, which I pass into my Fly.io container via Fly’s secrets.
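
For anyone following along, the Dante config ends up fairly small. A sketch of /etc/danted.conf, assuming wg0 is the WireGuard interface, eth0 is the Droplet’s public interface, and 10.0.0.0/24 is the tunnel subnet (all of these are placeholders for your own values):

```
logoutput: syslog
internal: wg0 port = 1080      # accept connections only on the WireGuard interface
external: eth0                 # egress via the Droplet's public (whitelisted) IP
socksmethod: username          # require username/password auth
user.privileged: root
user.unprivileged: nobody

client pass {
    from: 10.0.0.0/24 to: 0.0.0.0/0
}
socks pass {
    from: 10.0.0.0/24 to: 0.0.0.0/0
    log: connect disconnect
}
```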

What does your setup look like in the .toml file?

I can’t seem to even get WireGuard installed.


@russellH This should help you install WireGuard on the Droplet.


@griff Thanks, I am following that, and the basic setup has given me an IPv6 connection. The issue is IPv4: my fly.io app won’t connect to MongoDB Atlas, which means I need to route all traffic out of Fly.io through the DigitalOcean Droplet just to make a connection to MongoDB (as per many community posts).

Are you saying that all connections out are over IPv6? Because when I ran a curl command that shows the IP address, it returned an IPv4 address. So something must be happening at the network level.

Personally, after hearing how many people are struggling just to connect remote databases, surely someone has to think "let’s rethink our strategy" and start supporting outbound connections to remote databases like Atlas and Firebase.

I was able to set up WireGuard on my Droplet (DigitalOcean) and connect to it from my Mac; both of these are IPv4.

Additionally, it seems I froze the Fly container, as it is stuck here.

Here is my Dockerfile

FROM node:alpine

# Set working directory
WORKDIR /usr/app

# Install PM2 globally
RUN npm install --global pm2

# Copy "package.json" and "package-lock.json" before other files
# Utilize Docker cache to save re-installing dependencies if unchanged
COPY ./package*.json ./

# Install dependencies
RUN npm install --production

# Install WireGuard tools and required packages
RUN apk update
RUN apk add --no-cache wireguard-tools ffmpeg

# Copy WireGuard configuration file
COPY wg0.conf /etc/wireguard/wg0.conf

# Copy all files
COPY ./ ./

# Build app
RUN npm run build

# Expose the listening port

# Run container as non-root (unprivileged) user
# The "node" user is provided in the Node.js Alpine base image
USER node

# Start WireGuard and launch the app with PM2
CMD wg-quick up wg0 && pm2-runtime start npm -- start

and here is the wg0.conf

[Interface]
PrivateKey = (REMOVED)
Address = fdaa:1:b91b:a7b:9076:0:a:102/120
DNS = 2001:4860:4860::8888

[Peer]
PublicKey = c3/0W8YBNGlekGafBuVeDnNvSQQ4tYLf/QOvHspJRhY=
AllowedIPs = fdaa:1:b91b::/48
Endpoint = syd2.gateway.6pn.dev:51820
PersistentKeepalive = 15

[Peer]
PublicKey = vsNMvNdCmUiVzwp0v6bkeO0pcMhy4pak2E2Z/CF2ywA=
AllowedIPs = fdaa:1:b91b::/48
Endpoint = lax1.gateway.6pn.dev:51820
PersistentKeepalive = 15

[Peer]
PublicKey = eCP0xi9xD62HqHSEEa3skpxpFUTxhjvubgDlLfVZyFk=
AllowedIPs = fdaa:1:b91b::/48
Endpoint = lhr1.gateway.6pn.dev:51820
PersistentKeepalive = 15
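
One thing worth noting: this is the Fly 6PN peering config, and its AllowedIPs (fdaa:1:b91b::/48) only cover Fly’s private IPv6 network, so none of my IPv4 traffic would be routed to the Droplet. To send outbound traffic through the Droplet, it would presumably need to be a peer itself, roughly like this (all addresses and keys are placeholders):

```
[Interface]
PrivateKey = <vm-private-key>
Address = 10.0.0.2/24              # tunnel address on the VM side

[Peer]
PublicKey = <droplet-public-key>
Endpoint = <droplet-public-ip>:51820
AllowedIPs = 10.0.0.0/24           # or 0.0.0.0/0 to route all IPv4 via the Droplet
PersistentKeepalive = 25
```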

We are aware this is an important feature, and it’s definitely on the new-feature radar.
I’m not sure why that stage hangs. No promises, but I’ll see if I can replicate this setup myself.
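
One guess, though: in the Dockerfile above, USER node comes before the CMD, and wg-quick up needs root (it creates the interface and edits the routing table), so it can’t succeed as an unprivileged user. A sketch of an entrypoint that brings the tunnel up as root and then drops privileges (assumes su-exec, installable with apk add su-exec, and that the USER node line is removed):

```
#!/bin/sh
# Bring up WireGuard as root, then start the app as the unprivileged node user.
set -e
wg-quick up wg0
exec su-exec node pm2-runtime start npm -- start
```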

For what it’s worth, I tried using Tailscale exit nodes, but that didn’t work the way I wanted. My Fly machine kept crashing anytime I ran tailscale up --exit-node=<ip-of-droplet>; not entirely sure why.


It’s strange, because as soon as I put my code onto DigitalOcean and did the exact same setup, it worked out of the box. The only bonus is that I know which server has which external IP address.

It would be great to see an improved fly.io that allows external database connections, even if it’s only part of the dedicated plans, with the server’s IP address exposed.

I don’t know exactly how Fly’s system works, but unless you’re changing IP addresses every 6 months, surely each data centre is given a number of /24-based IP ranges.

Developers could then approve those /24 ranges in their MongoDB or other remote DB. Not ideal, as it means any fly.io container in the same region could potentially reach our remote server, but they would still need to know the URLs, usernames, and passwords.

So I’m not too worried.

For now, I’ll manually roll out our servers in AU, EU, and US and put a load balancer in front of them.

But Fly does make things easier, so I would love to see this feature supported.