I have two machines in the same region, but when a POST request lands on a machine that isn't the primary, it fails with the error "wal error: read only replica". I saw that you need to use fly-replay for a GET request, but I thought a POST request would be automatically redirected to the primary machine. Should I check whether I'm on the primary machine every time I receive a POST?
Here are my configuration files:
app = ''
primary_region = 'cdg'
kill_signal = 'SIGINT'
kill_timeout = 5
swap_size_mb = 512

[experimental]
  auto_rollback = true

[mounts]
  source = 'data'
  destination = '/data'

[[services]]
  internal_port = 8080
  processes = ['app']
  protocol = 'tcp'
  script_checks = []

  [services.concurrency]
    hard_limit = 100
    soft_limit = 80
    type = 'requests'

  [[services.ports]]
    handlers = ['http']
    port = 80
    force_https = true

  [[services.ports]]
    handlers = ['tls', 'http']
    port = 443

  [[services.tcp_checks]]
    grace_period = '1s'
    interval = '15s'
    restart_limit = 0
    timeout = '2s'

  [[services.http_checks]]
    interval = '10s'
    grace_period = '5s'
    method = 'GET'
    path = '/healthcheck'
    protocol = 'http'
    timeout = '2s'
    tls_skip_verify = false
    headers = {}

  [[services.http_checks]]
    grace_period = '10s'
    interval = '30s'
    method = 'GET'
    timeout = '5s'
    path = '/litefs/health'

[[vm]]
  memory = '512mb'
  cpu_kind = 'shared'
  cpus = 1
# Documented example: https://github.com/superfly/litefs/blob/dec5a7353292068b830001bd2df4830e646f6a2f/cmd/litefs/etc/litefs.yml
fuse:
  # Required. This is the mount directory that applications will
  # use to access their SQLite databases.
  dir: '${LITEFS_DIR}'

data:
  # Path to internal data storage.
  dir: '/data/litefs'

# This flag ensures that LiteFS continues to run if there is an issue on startup.
# It makes it easier to SSH in and debug any issues you might be having rather
# than continually restarting on initialization failure.
exit-on-error: false

proxy:
  # Matches the internal_port in fly.toml.
  addr: ':8080'
  target: 'localhost:3000'
  db: '${DATABASE_FILENAME}'

# The lease section specifies how the cluster will be managed. We're using the
# "consul" lease type so that our application can dynamically change the primary.
#
# These environment variables will be available in your Fly.io application.
lease:
  type: 'consul'
  candidate: ${FLY_REGION == PRIMARY_REGION}
  promote: true
  advertise-url: 'http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202'

  consul:
    url: '${FLY_CONSUL_URL}'
    key: 'nicolas-besnard-litefs/${FLY_APP_NAME}'

exec:
  - cmd: npx prisma migrate deploy
    if-candidate: true

  # Set the journal mode for the database to WAL. This reduces concurrency deadlock issues.
  - cmd: sqlite3 $DATABASE_PATH "PRAGMA journal_mode = WAL;"
    if-candidate: true

  - cmd: npm start