Fly.io + RDS & Elasticache

My app currently runs in AWS using EC2, RDS and Elasticache (elasticsearch and redis). I’m trying to figure out how to connect a Fly.io app to our AWS RDS, AWS Redis and AWS Elasticsearch. I’m struggling to find documentation on this. We’ve got a production app running in AWS, so I’m trying to test everything out before I consider moving our databases over. We’re in us-east-1 in AWS, if that’s relevant.

Hi @nate-dwell, you would need to allow public access to all of your services (AWS RDS, AWS Redis and AWS Elasticsearch). There are heaps of tutorials and demos on how to do this with AWS resources. Then ensure your Fly.io server references those services via the public URLs.

e.g. mydb.123456789012.us-east-1.rds.amazonaws.com for RDS. See here

For ElastiCache for Redis, I’m not sure you can expose public URLs; I think you need to access it via an AWS VPN… In Fly.io it’s super simple to spin up your own Redis cache servers, see here

@TomWhite1 I’m trying to understand if WireGuard allows me to peer with my AWS VPC so that I don’t have to expose this to the open internet. I’ve been reading IPv6 WireGuard Peering · Fly, but still feel lost.

@nate-dwell We’re in the same boat and moved services over using tailscale on fly. Here’s a post I wrote on our setup: Dockerfile for elixir/phx umbrella app w/ tailscale, overmind, honeymarker


I was able to follow the WireGuard guides along with rds-connector/main.tf at main · fly-apps/rds-connector · GitHub to set up an EC2 instance (a t2.micro, though I’m unsure how big it needs to be) with HAProxy on it.

I’ve got connectivity working, but this is adding about 40ms of latency to all database queries.
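For anyone trying to quantify this per query, here's a rough sketch. The `time_ms` helper is mine, not from the thread; the commented-out `PG.connect` call uses the host from this thread with placeholder credentials and requires the `pg` gem, so the runnable stand-in below just times a sleep:

```ruby
require "benchmark"

# Small helper: time a block and return the elapsed milliseconds.
def time_ms
  (Benchmark.realtime { yield } * 1000).round(2)
end

# Against the real setup it would look something like (placeholders):
#   conn = PG.connect(host: "aws-us-east-1._peer.internal", port: 5432,
#                     user: "app", dbname: "app_production")
#   puts time_ms { conn.exec("SELECT 1") }   # per-round-trip latency
#
# Stand-in so the snippet runs anywhere: a 40ms sleep, matching the
# overhead described above.
puts time_ms { sleep 0.04 }
```

Timing `SELECT 1` isolates network round-trip cost from query execution, which makes it easy to compare the proxy path against a direct connection.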

My WireGuard connection name is aws-us-east-1 and I have Rails using aws-us-east-1._peer.internal:5432 as the database server host. My Fly WireGuard peer and Fly app are in iad and my AWS environment is in us-east-1.
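For reference, the matching Rails `database.yml` entry is just an ordinary Postgres config pointed at the peer hostname (a minimal sketch; the database name and credentials are placeholders, only the host follows the `<peer-name>._peer.internal` convention from this thread):

```yaml
production:
  adapter: postgresql
  host: aws-us-east-1._peer.internal   # WireGuard peer name + ._peer.internal
  port: 5432
  database: app_production             # placeholder
  username: app                        # placeholder
  password: <%= ENV["DATABASE_PASSWORD"] %>
```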

Here’s my HAProxy config (with exact domains anonymized). The top few sections are the defaults; I added the listen sections.

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
        ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
        ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
        ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
        log     global
        mode    tcp
        option  tcplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000


listen redis-noeviction
  mode tcp
  bind :::6379 v4v6
  server r1 my-redis.use1.cache.amazonaws.com:6379

listen redis-lfu
  mode tcp
  bind :::6380 v4v6
  server r1 my-redis-lfu.use1.cache.amazonaws.com:6379

listen elasticsearch
  mode http
  bind :::9200 v4v6
  option httplog
  # AWS Elasticsearch endpoints serve TLS on 443; with no explicit port,
  # HAProxy would reuse the bind port (9200) for the backend connection
  server e1 my-elasticsearch.us-east-1.es.amazonaws.com:443 ssl verify none

listen postgres
  mode tcp
  bind :::5432 v4v6
  server pg1 my-postgres.us-east-1.rds.amazonaws.com:5432

@jsierles helped me debug the HAProxy config, which I had mistakenly not bound to IPv6 before.

This comment from @kurt made me think that I should expect like 1ms of latency, not 40ms.

Is there something I’m doing wrong that’s causing 40ms of latency? That’s a lot for a database connection.

@ryansch what kind of latency do you experience? And what regions are you in (Fly and AWS)?


I also can’t help but wonder whether, instead of using HAProxy, there’s a way for my Fly app to resolve my AWS DNS names (RDS, ElastiCache) through the WireGuard tunnel.

You can use AWS DNS over the tailscale network. That’s how I have things set up.

I’d start by SSHing into one of your Fly VMs and running tailscale status. You should see that one of your gateways is listed as “direct”. Tailscale will fall back to a DERP relay if it can’t connect directly to a gateway, and that will add latency.
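If you want to check this programmatically, `tailscale status --json` emits machine-readable peer state. Here's a hedged sketch that flags relayed peers; the field names (`Peer`, `HostName`, `CurAddr`) are assumptions based on recent Tailscale versions, where an empty `CurAddr` means traffic for that peer is going through a DERP relay rather than a direct connection:

```ruby
require "json"

# Return the hostnames of peers that lack a direct address, i.e. are
# (probably) being relayed through DERP and adding latency.
def relayed_peers(status_json)
  status = JSON.parse(status_json)
  (status["Peer"] || {}).values
    .select { |p| p["CurAddr"].to_s.empty? }
    .map { |p| p["HostName"] }
end

# Sample input standing in for `tailscale status --json` output:
sample = <<~JSON
  {"Peer": {
    "key1": {"HostName": "aws-gateway", "CurAddr": "1.2.3.4:41641", "Relay": "nyc"},
    "key2": {"HostName": "laptop", "CurAddr": "", "Relay": "fra"}
  }}
JSON

puts relayed_peers(sample).inspect  # => ["laptop"]
```

In practice you'd feed it the real output, e.g. `relayed_peers(`tailscale status --json`)`, and alert if your AWS gateway shows up in the list.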

@ryansch how much latency do you experience with this?

If I use the fly IAD region and ping a tailscale gateway in us-east-1, I see about 2ms latency over the tailscale network.

I get about 7ms from EWR with the same setup.

That’s the Tailscale config to set up split DNS for AWS. Make sure you use the correct IP for your VPC resolver.
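For anyone who can't see the config being referenced, split DNS in the Tailscale admin console looks roughly like this (a sketch with example values; the resolver IP is hypothetical — AWS's VPC resolver sits at the VPC CIDR base address plus two, and the restricted domains should match the AWS endpoints you actually use):

```text
Tailscale admin console → DNS → Add nameserver:
  Nameserver: 10.0.0.2                # AmazonProvidedDNS: VPC CIDR base + 2
  Restrict to search domains:
    us-east-1.rds.amazonaws.com
    use1.cache.amazonaws.com
    us-east-1.es.amazonaws.com
```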

To be clear, I’m not using any sort of proxy (except for tailscale itself). I’m connecting from my application on fly directly to RDS, etc.

But your AWS environment is not publicly accessible, right? You’re using Tailscale to connect Fly to AWS privately, right?

That’s correct!

I still get access control via the tailscale ACLs combined with my existing AWS security groups and network ACLs.

As discussed in Slack, this is likely due to the wireguard peer being established in a different region than iad. fly wg create <org> iad will get you one that should offer low latency.


fly wireguard list shows the region as iad and was created using fly wireguard create dwell-498 iad aws-us-east-1.
