How to Change App and DB Region Without Launching New Apps

Hello Community,

I am currently facing a challenge with migrating my Rails app and PostgreSQL database to a different region on Fly.io, and I’m seeking some advice.

My Rails app and the PostgreSQL database are currently deployed in the YYZ (Toronto) region. I’m looking to move them to the LAX (Los Angeles) region, but I want to do this without launching new instances of these apps.

I’ve attempted to update the region by modifying the fly.toml file for each app with the following settings:

primary_region = "lax"


After applying these changes, I used the fly deploy --strategy immediate command, hoping it would move my apps to the LAX region. However, they still seem to be running in the YYZ region.

Also, I’ve tried cloning the DB Machine to the new region, but when I stop the original Machine, the Rails app can’t connect to the new one. I also tried reattaching the DB app to the Rails app, but the connection issue persists.

I’m wondering if anyone here can guide me through the correct process to change the region of my existing applications on Fly.io. Are there specific steps or commands that I’m missing? Any insights or suggestions on how to achieve this without starting new app instances would be greatly appreciated.

Thank you in advance for your help and advice!

hi @vladimir

Since your app and DB are running on Machines that are on servers physically located in yyz, there is no way to move the apps without starting some new Machines in lax.

For the Postgres app

You should probably back up your data.

Then you’ll need to follow the steps to perform a regional failover to move your primary to the new region. There will be a short amount of downtime during the failover. Note that the instructions say that you’ll need 3 replicas in lax before you begin. You can add these one at a time with fly machine clone <machine id of the primary> --region lax.
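As a rough sketch of that first step (the Machine ID and app name below are placeholders taken from later in this thread; substitute your own), adding the three replicas one at a time might look like:

```shell
# Placeholder Machine ID and app name -- replace with your own.
# Add three replicas in lax one at a time; check fly status between
# clones to confirm each new replica is healthy before the next.
for i in 1 2 3; do
  fly machine clone 918577ddf13ee8 --region lax --app appcharger-db
  fly status --app appcharger-db
done
```

Waiting for each replica to report healthy before cloning the next avoids putting the cluster through multiple simultaneous syncs.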

Once you’ve completed the steps linked above, run fly status after a few minutes to make sure all the nodes are healthy and that the primary is now in lax.

Clean up “old” Machines and volumes

You can delete the Machine in yyz (which should have the role replica after the failover):
fly machine destroy <machine id> --force

And then run fly volumes list and carefully delete any volumes that no longer have ATTACHED VMs using the command:
fly volumes destroy <volume id>.
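Putting those cleanup steps together, a sketch (IDs and app name are placeholders) could be:

```shell
# Placeholder IDs and app name -- replace with your own.
fly status --app appcharger-db                         # identify the yyz Machine (now a replica)
fly machine destroy <machine id> --force --app appcharger-db
fly volumes list --app appcharger-db                   # find volumes with no ATTACHED VM
fly volumes destroy <volume id> --app appcharger-db
```

Double-check the `fly volumes list` output before destroying anything: only remove volumes that show no attached VM.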

Other notes: if your DB has only a single node (Machine), be aware that a single-node setup is usually only suitable for development projects, since it can result in downtime during host outages.

For the attached Rails app

You can use fly scale count 2 --region lax to add 2 new Machines in lax. And then fly scale count 0 --region yyz to remove the Machines in yyz.

Also change primary_region to "lax" in the fly.toml file so future deploys default to the new region.
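The two scale commands together, with a placeholder app name, would look something like:

```shell
# Placeholder app name -- replace with your Rails app's name.
fly scale count 2 --region lax --app appcharger   # add 2 new Machines in lax
fly scale count 0 --region yyz --app appcharger   # then remove the Machines in yyz
```

Scaling up in lax before scaling down in yyz keeps the app serving traffic throughout the move.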

Thanks for the detailed instructions, but I’m failing at the first step, when I create the DB replicas: the new Machine can’t pass its three health checks. The Machine logs:

2024-01-05T19:28:01.785  lax [info] [ 2.250414] reboot: Restarting system

2024-01-05T19:30:09.203  lax [info] [ 0.035838] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!

2024-01-05T19:30:09.229  lax [info] [ 0.039208] PCI: Fatal: No config space access function found

2024-01-05T19:30:09.412  lax [info] INFO Starting init (commit: 15238e9)...

2024-01-05T19:30:09.425  lax [info] INFO Mounting /dev/vdb at /data w/ uid: 0, gid: 0 and chmod 0755

2024-01-05T19:30:09.427  lax [info] INFO Resized /data to 1056964608 bytes

2024-01-05T19:30:09.428  lax [info] INFO Preparing to run: ` start` as root

2024-01-05T19:30:09.433  lax [info] INFO [fly api proxy] listening at /.fly/api

2024-01-05T19:30:09.439  lax [info] 2024/01/05 19:30:09 listening on [fdaa:3:ac19:a7b:21a:888b:51fe:2]:22 (DNS: [fdaa::3]:53)

2024-01-05T19:30:09.586  lax [info] Provisioning standby

2024-01-05T19:30:15.470  lax [info] panic: failed to resolve member over dns: unable to resolve cloneable member

2024-01-05T19:30:15.471  lax [info] goroutine 1 [running]:

2024-01-05T19:30:15.471  lax [info] main.panicHandler({0x9a73a0?, 0xc0001c9620})

2024-01-05T19:30:15.471  lax [info] /go/src/ +0x55

2024-01-05T19:30:15.471  lax [info] main.main()

2024-01-05T19:30:15.471  lax [info] /go/src/ +0xe5e

2024-01-05T19:30:16.441  lax [info] INFO Main child exited normally with code: 2

2024-01-05T19:30:16.442  lax [info] INFO Starting clean up.

2024-01-05T19:30:16.442  lax [info] INFO Umounting /dev/vdb from /data

2024-01-05T19:30:16.444  lax [info] WARN hallpass exited, pid: 314, status: signal: 15 (SIGTERM)

2024-01-05T19:30:16.448  lax [info] 2024/01/05 19:30:16 listening on [fdaa:3:ac19:a7b:21a:888b:51fe:2]:22 (DNS: [fdaa::3]:53)

2024-01-05T19:30:17.444  lax [info] [ 8.251610] reboot: Restarting system

Were you able to clone Machines into lax before? Have you tried cloning into the existing region as well?

Could you copy the exact commands you’re running and the output you are seeing?

You can also try looking at the output of fly checks list --app <app name>.

When I clone to the initial region (yyz), I have the same issue.

$ fly machine clone 918577ddf13ee8 --region yyz --app read-db


Provisioning a new Machine with image
  Machine e286513da527d8 has been created...
  Waiting for Machine e286513da527d8 to start...
  Waiting for e286513da527d8 to become healthy (started, 0/3)

Logs: (same as above)

2024-01-05T21:27:55.030 app[e286513da527d8] yyz [info] Provisioning standby

2024-01-05T21:28:35.034 app[e286513da527d8] yyz [info] panic: failed to resolve member over dns: unable to resolve cloneable member

2024-01-05T21:28:35.034 app[e286513da527d8] yyz [info] goroutine 1 [running]:

2024-01-05T21:28:35.035 app[e286513da527d8] yyz [info] main.panicHandler({0x9a73a0?, 0xc0002ff8a0})

2024-01-05T21:28:35.035 app[e286513da527d8] yyz [info] /go/src/ +0x55

2024-01-05T21:28:35.035 app[e286513da527d8] yyz [info] main.main()

$ fly checks list --app appcharger-db

Suggest waiting and trying again, since we did have some slow registry pulls today.

I will also check on other options.

Hi @andie. Do you have any updates?

Are you still unable to clone any of your Postgres app Machines?