v2 Migration gone wrong

I’m trying to migrate the app kcd. It’s a Node.js app running with LiteFS.

Current status: Fly appears to be sending all traffic to a non-migrated version of my app, but I’m also seeing logs for an instance of my app that has been migrated but is experiencing errors.

I tried a straightforward migration and got an error that I should have saved to share with you, but I didn’t. It was something about not having enough room on the volume for the migration :man_shrugging: It gave me three options to try. I let it auto-rollback and decided to scale down to a single instance (from the 8 I was running) and delete all but the attached volume. One of the volumes was stubborn and I kept getting:

~/code/kentcdodds.com (main) 🏎
$ fly vol delete vol_2en7r1pyg0zrk6yx -y
Error: failed destroying volume: You hit a Fly API error with request ID: 01H45W7YT6HYCRZ7DW87H9EDT1-sjc

:man_shrugging: So I just decided to proceed anyway.
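For reference, the scale-down and volume cleanup described above was roughly the following (a sketch using standard flyctl commands; exact volume IDs will differ):

$ fly scale count 1 -a kcd
$ fly volumes list -a kcd
$ fly vol delete <volume-id> -y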

That’s when things got kinda weird. Here are all the logs from the migration attempt:

/app/bin $ fly vol delete vol_2en7r1pyg0zrk6yx -y
Error: failed destroying volume: You hit a Fly API error with request ID: 01H45WACAK6G0Y2A5MNWE44X0N-lax

/app/bin $ fly config save -a kcd
Wrote config file fly.toml
/app/bin $ fly migrate-to-v2
This migration process will do the following, in order:
 * Lock your application, preventing changes during the migration
 * Remove legacy VMs 
   * Remove 1 alloc
   * NOTE: Because your app uses volumes, there will be a short downtime during migration while your machines start up.
 * Create clones of each volume in use, for the new machines
   * These cloned volumes will have the suffix '_machines' appended to their names
   * Please note that your old volumes will not be removed.
     (you can do this manually, after making sure the migration was a success)
 * Create machines, copying the configuration of each existing VM
   * Create 1 "app" machine
 * Set the application platform version to "machines"
 * Unlock your application
 * Overwrite the config file at '/app/bin/fly.toml'
? Would you like to continue? Yes
==> Migrating kcd to the V2 platform
>  Locking app to prevent changes during the migration
>  Making snapshots of volumes for the new machines
>  Scaling down to zero VMs. This will cause temporary downtime until new VMs come up.
>  Enabling machine creation on app
>  Creating an app release to register this migration
>  Starting machines
INFO Using wait timeout: 5m0s lease timeout: 13s delay between lease refreshes: 4s
Updating existing machines in 'kcd' with rolling strategy
  [1/1] Waiting for 5683777f75098e [app] to become healthy: 1/2
failed while migrating: timeout reached waiting for healthchecks to pass for machine 5683777f75098e failed to get VM 5683777f75098e: Get "https://api.machines.dev/v1/apps/kcd/machines/5683777f75098e": net/http: request canceled
note: you can change this timeout with the --wait-timeout flag
? Would you like to enter interactive troubleshooting mode? If not, the migration will be rolled back. 

Here are the relevant logs I had during the migration:

2023-06-30T10:08:20Z app[9449075a] den [info]GET kentcdodds.com/blog/the-state-reducer-pattern 200  - 210.467 ms
2023-06-30T10:08:20Z app[9449075a] den [info]POST kentcdodds.com/__metronome 204  - 33.540 ms
2023-06-30T10:08:20Z app[9449075a] den [info]POST kentcdodds.com/__metronome 204  - 27.760 ms
2023-06-30T10:08:21Z runner[9449075a] den [info]Shutting down virtual machine
2023-06-30T10:08:21Z app[9449075a] den [info] INFO Sending signal SIGINT to main child process w/ PID 246
2023-06-30T10:08:21Z app[9449075a] den [info]sending signal to exec process
2023-06-30T10:08:21Z app[9449075a] den [info]waiting for exec process to close
2023-06-30T10:08:21Z app[9449075a] den [info]signal received, litefs shutting down
2023-06-30T10:08:21Z app[9449075a] den [info]6515368117173208084CEBF0: exiting primary, destroying lease
2023-06-30T10:08:21Z app[9449075a] den [info]ERROR: exit status 1: fusermount3: failed to unmount /litefs: Device or resource busy
2023-06-30T10:08:21Z app[9449075a] den [info] INFO Main child exited normally with code: 1
2023-06-30T10:08:21Z app[9449075a] den [info] INFO Starting clean up.
2023-06-30T10:08:21Z app[9449075a] den [info] INFO Umounting /dev/vdc from /data
2023-06-30T10:08:21Z app[9449075a] den [info] WARN hallpass exited, pid: 247, status: signal: 15 (SIGTERM)
2023-06-30T10:08:24Z proxy[9449075a] den [error]timed out while connecting to your instance. this indicates a problem with your app (hint: look at your logs and metrics)
2023-06-30T10:08:25Z proxy[9449075a] den [error]timed out while connecting to your instance. this indicates a problem with your app (hint: look at your logs and metrics)
2023-06-30T10:08:31Z runner[5683777f75098e] den [info]Pulling container image registry.fly.io/kcd@sha256:23a2cefb72eb42bb9d822b74b1479402daf439cbe906593b398be04301f74882
2023-06-30T10:09:05Z runner[5683777f75098e] den [info]Successfully prepared image registry.fly.io/kcd@sha256:23a2cefb72eb42bb9d822b74b1479402daf439cbe906593b398be04301f74882 (33.979861304s)
2023-06-30T10:09:05Z runner[5683777f75098e] den [info]Setting up volume 'data_machines'
2023-06-30T10:09:05Z runner[5683777f75098e] den [info]Opening encrypted volume
2023-06-30T10:09:14Z runner[5683777f75098e] den [info]Configuring firecracker
2023-06-30T10:09:14Z app[5683777f75098e] den [info] INFO Starting init (commit: db101a53)...
2023-06-30T10:09:14Z app[5683777f75098e] den [info] INFO Mounting /dev/vdb at /data w/ uid: 0, gid: 0 and chmod 0755
2023-06-30T10:09:15Z app[5683777f75098e] den [info] INFO Resized /data to 3217031168 bytes
2023-06-30T10:09:15Z app[5683777f75098e] den [info] INFO Preparing to run: `docker-entrypoint.sh litefs mount -- node ./other/start.js` as root
2023-06-30T10:09:15Z app[5683777f75098e] den [info] INFO [fly api proxy] listening at /.fly/api
2023-06-30T10:09:15Z app[5683777f75098e] den [info]2023/06/30 10:09:15 listening on [fdaa:0:23df:a7b:d828:18fd:6e90:2]:22 (DNS: [fdaa::3]:53)
2023-06-30T10:09:15Z health[5683777f75098e] den [warn]Health check on port 8080 is in a 'warning' state. Your app may not be responding properly. Services exposed on ports [80, 443] may have intermittent failures until the health check passes.
2023-06-30T10:09:15Z health[5683777f75098e] den [warn]Health check on port 8080 is in a 'warning' state. Your app may not be responding properly. Services exposed on ports [80, 443] may have intermittent failures until the health check passes.
2023-06-30T10:09:15Z app[5683777f75098e] den [info]Using Consul to determine primary
2023-06-30T10:09:15Z app[5683777f75098e] den [info]initializing consul: key=litefs/kcd url= hostname=5683777f75098e advertise-url=http://5683777f75098e.vm.kcd.internal:20202
2023-06-30T10:09:15Z app[5683777f75098e] den [info]config file read from /etc/litefs.yml
2023-06-30T10:09:15Z app[5683777f75098e] den [info]LiteFS main, commit=9ff02a303b5fc2e5c8bef5a173ab96dc4ab1c393
2023-06-30T10:09:15Z app[5683777f75098e] den [info]wal-sync: no wal file exists on "cache.db", skipping sync with ltx
2023-06-30T10:09:15Z app[5683777f75098e] den [info]wal-sync: no wal file exists on "sqlite.db", skipping sync with ltx
2023-06-30T10:09:16Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:16Z app[5683777f75098e] den [info]LiteFS mounted to: /litefs
2023-06-30T10:09:16Z app[5683777f75098e] den [info]http server listening on: http://localhost:20202
2023-06-30T10:09:16Z app[5683777f75098e] den [info]waiting to connect to cluster
2023-06-30T10:09:17Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:18Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:19Z runner[5683777f75098e] den [info]Pulling container image registry.fly.io/kcd@sha256:23a2cefb72eb42bb9d822b74b1479402daf439cbe906593b398be04301f74882
2023-06-30T10:09:19Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:19Z runner[5683777f75098e] den [info]Successfully prepared image registry.fly.io/kcd@sha256:23a2cefb72eb42bb9d822b74b1479402daf439cbe906593b398be04301f74882 (637.195691ms)
2023-06-30T10:09:19Z runner[5683777f75098e] den [info]Setting up volume 'data_machines'
2023-06-30T10:09:19Z runner[5683777f75098e] den [info]Opening encrypted volume
2023-06-30T10:09:20Z proxy[5683777f75098e] den [error]machine is in a non-startable state: created
2023-06-30T10:09:20Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:20Z proxy[5683777f75098e] den [error]machine is in a non-startable state: created
2023-06-30T10:09:20Z proxy[5683777f75098e] den [error]machine is in a non-startable state: created
2023-06-30T10:09:20Z proxy[5683777f75098e] den [error]machine is in a non-startable state: created
2023-06-30T10:09:21Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:21Z proxy[5683777f75098e] den [error]machine is in a non-startable state: created
2023-06-30T10:09:22Z proxy[5683777f75098e] den [error]machine is in a non-startable state: created
2023-06-30T10:09:22Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:22Z app[5683777f75098e] den [info] INFO Sending signal SIGINT to main child process w/ PID 240
2023-06-30T10:09:23Z health[5683777f75098e] den [error]Health check on port 8080 has failed. Your app is not responding properly. Services exposed on ports [80, 443] will have intermittent failures until the health check passes.
2023-06-30T10:09:23Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:23Z proxy[5683777f75098e] den [error]machine is in a non-startable state: replacing
2023-06-30T10:09:24Z proxy[5683777f75098e] den [error]machine is in a non-startable state: replacing
2023-06-30T10:09:24Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:25Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:25Z proxy[5683777f75098e] den [error]machine is in a non-startable state: replacing
2023-06-30T10:09:26Z proxy[5683777f75098e] den [error]machine is in a non-startable state: replacing
2023-06-30T10:09:26Z health[5683777f75098e] den [info]Health check on port 8080 is now passing.
2023-06-30T10:09:26Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:27Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:27Z app[5683777f75098e] den [info] INFO Sending signal SIGTERM to main child process w/ PID 240
2023-06-30T10:09:27Z proxy[5683777f75098e] den [error]machine is in a non-startable state: replacing
2023-06-30T10:09:28Z proxy[5683777f75098e] den [error]machine is in a non-startable state: replacing
2023-06-30T10:09:28Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:29Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:30Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:31Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:32Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:32Z app[5683777f75098e] den [warn]Virtual machine exited abruptly
2023-06-30T10:09:33Z app[5683777f75098e] den [info] INFO Starting init (commit: db101a53)...
2023-06-30T10:09:33Z app[5683777f75098e] den [info] INFO Mounting /dev/vdb at /data w/ uid: 0, gid: 0 and chmod 0755
2023-06-30T10:09:33Z app[5683777f75098e] den [info] INFO Resized /data to 3217031168 bytes
2023-06-30T10:09:33Z app[5683777f75098e] den [info] INFO Preparing to run: `docker-entrypoint.sh litefs mount -- node ./other/start.js` as root
2023-06-30T10:09:33Z app[5683777f75098e] den [info] INFO [fly api proxy] listening at /.fly/api
2023-06-30T10:09:33Z app[5683777f75098e] den [info]2023/06/30 10:09:33 listening on [fdaa:0:23df:a7b:d828:18fd:6e90:2]:22 (DNS: [fdaa::3]:53)
2023-06-30T10:09:33Z app[5683777f75098e] den [info]Using Consul to determine primary
2023-06-30T10:09:33Z app[5683777f75098e] den [info]config file read from /etc/litefs.yml
2023-06-30T10:09:33Z app[5683777f75098e] den [info]LiteFS main, commit=9ff02a303b5fc2e5c8bef5a173ab96dc4ab1c393
2023-06-30T10:09:33Z app[5683777f75098e] den [info]initializing consul: key=litefs/kcd url= hostname=5683777f75098e advertise-url=http://5683777f75098e.vm.kcd.internal:20202
2023-06-30T10:09:33Z app[5683777f75098e] den [info]wal-sync: no wal file exists on "cache.db", skipping sync with ltx
2023-06-30T10:09:34Z app[5683777f75098e] den [info]wal-sync: no wal file exists on "sqlite.db", skipping sync with ltx
2023-06-30T10:09:35Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:35Z app[5683777f75098e] den [info]LiteFS mounted to: /litefs
2023-06-30T10:09:35Z app[5683777f75098e] den [info]http server listening on: http://localhost:20202
2023-06-30T10:09:35Z app[5683777f75098e] den [info]waiting to connect to cluster
2023-06-30T10:09:36Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:37Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:37Z proxy[5683777f75098e] fra [error]could not find a good candidate within 90 attempts at load balancing. last error: no known healthy instances found for route tcp/443. (hint: is your app shutdown? is there an ongoing deployment with a volume or using the 'immediate' strategy? has your app's instances all reached their hard limit?)
2023-06-30T10:09:38Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:39Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:40Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:41Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:42Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:43Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:44Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:45Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:46Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:47Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:48Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:49Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:09:49Z proxy[5683777f75098e] lhr [error]could not find a good candidate within 90 attempts at load balancing. last error: no known healthy instances found for route tcp/443. (hint: is your app shutdown? is there an ongoing deployment with a volume or using the 'immediate' strategy? has your app's instances all reached their hard limit?)

And it continued like that through the entire timeout period for the migration. So then I decided to try interactive troubleshooting mode (maybe I should’ve rolled back?). Here’s what I did then:

Oops! We ran into issues migrating your app.
We're constantly working to improve the migration and squash bugs, but for
now please let this troubleshooting wizard guide you down a yellow brick road
of potential solutions...
               ,,,,,
       ,,.,,,,,,,,, .
   .,,,,,,,
  ,,,,,,,,,.,,
     ,,,,,,,,,,,,,,,,,,,
         ,,,,,,,,,,,,,,,,,,,,
            ,,,,,,,,,,,,,,,,,,,,,
           ,,,,,,,,,,,,,,,,,,,,,,,
        ,,,,,,,,,,,,,,,,,,,,,,,,,,,,.
   , ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,

The app's platform version is 'detached'
This means that the app is stuck in a half-migrated state, and wasn't able to
be fully recovered during the migration error rollback process.

Fixing this depends on how far the app got in the migration process.
Please use these tools to troubleshoot and attempt to repair the app.
No legacy Nomad VMs found. Setting platform version to machines/Apps V2.
/app/bin $ fly status
App
  Name     = kcd          
  Owner    = personal     
  Hostname = kcd.fly.dev  
  Image    = kcd:         
  Platform = machines     

Machines
PROCESS ID              VERSION REGION  STATE   CHECKS                          LAST UPDATED         
app     5683777f75098e  379     den     started 2 total, 1 passing, 1 critical  2023-06-30T10:09:33Z

/app/bin $ fly restart
Error: this command has been removed. please use `fly apps restart` instead

/app/bin $ fly apps restart kcd
Restarting machine 5683777f75098e
  Waiting for 5683777f75098e to become healthy (started, 1/2)
Machine 5683777f75098e restarted successfully!
/app/bin $ 

Interestingly, seconds after I ran fly apps restart kcd I started getting a mix of application logs (from what I believe is an old unmigrated version of the app) and error logs from the new app. Discourse won’t let me post many more characters, so I’ll add the logs in a follow-up comment.

Here’s what my monitoring page looks like now:

So yeah, my site is running OK for users and whatnot, but it’d be pretty cool to finish this migration. I think it makes the most sense to figure out what’s wrong with the migration process, fix that, and then start over to avoid losing data written since I started this migration.

Also, as an aside, I’d love to be able to get my volumes renamed from data_machines back to simply data when this is finished if at all possible :sweat_smile:

Any help is appreciated!

Here are the logs I started seeing as soon as I ran the restart command:

2023-06-30T10:15:08Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:09Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:10Z runner[18807713] den [info]Starting instance
2023-06-30T10:15:10Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:10Z runner[18807713] den [info]Configuring virtual machine
2023-06-30T10:15:10Z runner[18807713] den [info]Pulling container image
2023-06-30T10:15:11Z runner[18807713] den [info]Unpacking image
2023-06-30T10:15:11Z runner[18807713] den [info]Preparing kernel init
2023-06-30T10:15:11Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:12Z runner[18807713] den [info]Setting up volume 'data'
2023-06-30T10:15:12Z runner[18807713] den [info]Opening encrypted volume
2023-06-30T10:15:12Z runner[18807713] den [info]Configuring firecracker
2023-06-30T10:15:12Z runner[18807713] den [info]Starting virtual machine
2023-06-30T10:15:12Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:12Z app[18807713] den [info] INFO Starting init (commit: db101a53)...
2023-06-30T10:15:13Z app[18807713] den [info] INFO Mounting /dev/vdc at /data w/ uid: 0, gid: 0 and chmod 0755
2023-06-30T10:15:13Z app[18807713] den [info] INFO Preparing to run: `docker-entrypoint.sh litefs mount -- node ./other/start.js` as root
2023-06-30T10:15:13Z app[18807713] den [info]2023/06/30 10:15:13 listening on [fdaa:0:23df:a7b:d828:4:263a:2]:22 (DNS: [fdaa::3]:53)
2023-06-30T10:15:13Z app[18807713] den [info]config file read from /etc/litefs.yml
2023-06-30T10:15:13Z app[18807713] den [info]LiteFS main, commit=9ff02a303b5fc2e5c8bef5a173ab96dc4ab1c393
2023-06-30T10:15:13Z app[18807713] den [info]Using Consul to determine primary
2023-06-30T10:15:13Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:14Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:15Z app[18807713] den [info]initializing consul: key=litefs/kcd url=https://:b832a863-60c2-48d4-8289-bdc5d80fc444@consul-iad.fly-shared.net/kcd-g3zmqx5x3y49dlp4/ hostname=18807713 advertise-url=http://18807713.vm.kcd.internal:20202
2023-06-30T10:15:15Z app[18807713] den [info]wal-sync: no wal file exists on "cache.db", skipping sync with ltx
2023-06-30T10:15:15Z proxy[18807713] cdg [error]could not find a good candidate within 90 attempts at load balancing. last error: no known healthy instances found for route tcp/443. (hint: is your app shutdown? is there an ongoing deployment with a volume or using the 'immediate' strategy? has your app's instances all reached their hard limit?)
2023-06-30T10:15:15Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:15Z app[18807713] den [info]wal-sync: no wal file exists on "sqlite.db", skipping sync with ltx
2023-06-30T10:15:16Z app[18807713] den [info]LiteFS mounted to: /litefs
2023-06-30T10:15:16Z app[18807713] den [info]http server listening on: http://localhost:20202
2023-06-30T10:15:16Z app[18807713] den [info]waiting to connect to cluster
2023-06-30T10:15:16Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:16Z app[18807713] den [info]6515368117173208084CEBF0: primary lease acquired, advertising as http://18807713.vm.kcd.internal:20202
2023-06-30T10:15:16Z app[18807713] den [info]connected to cluster, ready
2023-06-30T10:15:16Z app[18807713] den [info]starting subprocess: node [./other/start.js]
2023-06-30T10:15:16Z app[18807713] den [info]waiting for signal or subprocess to exit
2023-06-30T10:15:16Z app[18807713] den [info]proxy server listening on: http://localhost:8080
2023-06-30T10:15:17Z app[18807713] den [info]No .primary file found.
2023-06-30T10:15:17Z app[18807713] den [info]Using current instance (18807713) as primary (in den)
2023-06-30T10:15:17Z app[18807713] den [info]Instance (18807713) in den is primary. Deploying migrations.
2023-06-30T10:15:17Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:18Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:19Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:20Z app[18807713] den [info]Prisma schema loaded from prisma/schema.prisma
2023-06-30T10:15:20Z app[18807713] den [info]Datasource "db": SQLite database "sqlite.db" at "file:/litefs/sqlite.db"
2023-06-30T10:15:20Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:20Z app[18807713] den [info]3 migrations found in prisma/migrations
2023-06-30T10:15:20Z app[18807713] den [info]Applying migration `20230520220040_tuned_indexes`
2023-06-30T10:15:21Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:22Z health[18807713] den [error]Health check on port 8080 has failed. Your app is not responding properly. Services exposed on ports [80, 443] will have intermittent failures until the health check passes.
2023-06-30T10:15:22Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:23Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:24Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:25Z proxy[5683777f75098e] lhr [error]could not find a good candidate within 90 attempts at load balancing. last error: no known healthy instances found for route tcp/443. (hint: is your app shutdown? is there an ongoing deployment with a volume or using the 'immediate' strategy? has your app's instances all reached their hard limit?)
2023-06-30T10:15:25Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:26Z proxy[5683777f75098e] bom [error]could not find a good candidate within 90 attempts at load balancing. last error: no known healthy instances found for route tcp/443. (hint: is your app shutdown? is there an ongoing deployment with a volume or using the 'immediate' strategy? has your app's instances all reached their hard limit?)
2023-06-30T10:15:26Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:27Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:28Z health[18807713] den [info]Health check on port 8080 is now passing.
2023-06-30T10:15:28Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:29Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:30Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:31Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:32Z app[18807713] den [info]The following migration have been applied:
2023-06-30T10:15:32Z app[18807713] den [info]migrations/
2023-06-30T10:15:32Z app[18807713] den [info]  └─ 20230520220040_tuned_indexes/
2023-06-30T10:15:32Z app[18807713] den [info]    └─ migration.sql
2023-06-30T10:15:32Z app[18807713] den [info]All migrations have been successfully applied.
2023-06-30T10:15:32Z app[18807713] den [info]npm notice
2023-06-30T10:15:32Z app[18807713] den [info]npm notice New minor version of npm available! 9.5.1 -> 9.7.2
2023-06-30T10:15:32Z app[18807713] den [info]npm notice Changelog: <https://github.com/npm/cli/releases/tag/v9.7.2>
2023-06-30T10:15:32Z app[18807713] den [info]npm notice Run `npm install -g npm@9.7.2` to update!
2023-06-30T10:15:32Z app[18807713] den [info]npm notice
2023-06-30T10:15:32Z app[18807713] den [info]Starting app...
2023-06-30T10:15:32Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:32Z app[18807713] den [info]> kentcdodds.com@1.0.0 start
2023-06-30T10:15:32Z app[18807713] den [info]> cross-env NODE_ENV=production node ./index.js
2023-06-30T10:15:33Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:34Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:34Z app[5683777f75098e] den [info] INFO Sending signal SIGINT to main child process w/ PID 239
2023-06-30T10:15:35Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:35Z proxy[5683777f75098e] lhr [error]could not find a good candidate within 90 attempts at load balancing. last error: no known healthy instances found for route tcp/443. (hint: is your app shutdown? is there an ongoing deployment with a volume or using the 'immediate' strategy? has your app's instances all reached their hard limit?)
2023-06-30T10:15:36Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:37Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:38Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:39Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:39Z app[5683777f75098e] den [info] INFO Sending signal SIGTERM to main child process w/ PID 239
2023-06-30T10:15:40Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:41Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:42Z proxy[5683777f75098e] lhr [error]could not find a good candidate within 90 attempts at load balancing. last error: no known healthy instances found for route tcp/443. (hint: is your app shutdown? is there an ongoing deployment with a volume or using the 'immediate' strategy? has your app's instances all reached their hard limit?)
2023-06-30T10:15:42Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:43Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:44Z app[18807713] den [info][@octokit/plugin-throttling] `onAbuseLimit()` is deprecated and will be removed in a future release of `@octokit/plugin-throttling`, please use the `onSecondaryRateLimit` handler instead
2023-06-30T10:15:44Z app[18807713] den [info]🐨  let's get rolling!
2023-06-30T10:15:44Z app[18807713] den [info]Local:            http://localhost:8081
2023-06-30T10:15:44Z app[18807713] den [info]On Your Network:  http://172.19.64.50:8081
2023-06-30T10:15:44Z app[18807713] den [info]Press Ctrl+C to stop
2023-06-30T10:15:44Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:44Z app[18807713] den [info]prisma:info Starting a sqlite pool with 3 connections.
2023-06-30T10:15:45Z app[5683777f75098e] den [info] INFO Starting init (commit: db101a53)...
2023-06-30T10:15:46Z app[5683777f75098e] den [info] INFO Mounting /dev/vdb at /data w/ uid: 0, gid: 0 and chmod 0755
2023-06-30T10:15:46Z app[5683777f75098e] den [info] INFO Resized /data to 3217031168 bytes
2023-06-30T10:15:46Z app[5683777f75098e] den [info] INFO Preparing to run: `docker-entrypoint.sh litefs mount -- node ./other/start.js` as root
2023-06-30T10:15:46Z app[5683777f75098e] den [info] INFO [fly api proxy] listening at /.fly/api
2023-06-30T10:15:46Z app[5683777f75098e] den [info]2023/06/30 10:15:46 listening on [fdaa:0:23df:a7b:d828:18fd:6e90:2]:22 (DNS: [fdaa::3]:53)
2023-06-30T10:15:46Z app[5683777f75098e] den [info]Using Consul to determine primary
2023-06-30T10:15:46Z app[5683777f75098e] den [info]initializing consul: key=litefs/kcd url= hostname=5683777f75098e advertise-url=http://5683777f75098e.vm.kcd.internal:20202
2023-06-30T10:15:46Z app[5683777f75098e] den [info]config file read from /etc/litefs.yml
2023-06-30T10:15:46Z app[5683777f75098e] den [info]LiteFS main, commit=9ff02a303b5fc2e5c8bef5a173ab96dc4ab1c393
2023-06-30T10:15:46Z app[5683777f75098e] den [info]wal-sync: no wal file exists on "cache.db", skipping sync with ltx
2023-06-30T10:15:46Z app[5683777f75098e] den [info]wal-sync: no wal file exists on "sqlite.db", skipping sync with ltx
2023-06-30T10:15:47Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:47Z app[5683777f75098e] den [info]LiteFS mounted to: /litefs
2023-06-30T10:15:47Z app[5683777f75098e] den [info]http server listening on: http://localhost:20202
2023-06-30T10:15:47Z app[5683777f75098e] den [info]waiting to connect to cluster
2023-06-30T10:15:47Z proxy[5683777f75098e] maa [error]could not find a good candidate within 90 attempts at load balancing. last error: no known healthy instances found for route tcp/443. (hint: is your app shutdown? is there an ongoing deployment with a volume or using the 'immediate' strategy? has your app's instances all reached their hard limit?)
2023-06-30T10:15:48Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:48Z proxy[5683777f75098e] maa [error]could not find a good candidate within 90 attempts at load balancing. last error: no known healthy instances found for route tcp/443. (hint: is your app shutdown? is there an ongoing deployment with a volume or using the 'immediate' strategy? has your app's instances all reached their hard limit?)
2023-06-30T10:15:49Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:50Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:51Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:52Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:52Z app[18807713] den [info]Updated the cache value for total-post-reads:__all-posts__. Getting a fresh value for this took 483ms. Caching for 60000ms + 86400000ms stale in LRU.
2023-06-30T10:15:53Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:53Z app[18807713] den [info]prisma:query - 833ms -
2023-06-30T10:15:53Z app[18807713] den [info]      SELECT
2023-06-30T10:15:53Z app[18807713] den [info]        (SELECT COUNT(DISTINCT "userId") FROM "PostRead" WHERE "userId" IS NOT NULL) +
2023-06-30T10:15:53Z app[18807713] den [info]        (SELECT COUNT(DISTINCT "clientId") FROM "PostRead" WHERE "clientId" IS NOT NULL)
2023-06-30T10:15:53Z app[18807713] den [info]Updated the cache value for total-reader-count. Getting a fresh value for this took 1322ms. Caching for 300000ms + 86400000ms stale in LRU.
2023-06-30T10:15:53Z app[18807713] den [info]Updated the cache value for sorted-most-popular-post-slugs. Getting a fresh value for this took 238ms. Caching for 1800000ms + 86400000ms stale in LRU.
2023-06-30T10:15:53Z app[18807713] den [info]HEAD 172.19.64.50:8080/ 200  - 1908.216 ms
2023-06-30T10:15:53Z app[18807713] den [info]GET 172.19.64.50:8080/healthcheck 200  - 2006.715 ms
2023-06-30T10:15:54Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:54Z health[18807713] den [info]Health check on port 8080 is now passing.
2023-06-30T10:15:55Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:56Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:57Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:58Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:15:59Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:16:00Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:16:01Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:16:01Z app[18807713] den [info]GET kentcdodds.com/blog/javascript-to-know-for-react?_data=root 200  - 197.923 ms
2023-06-30T10:16:01Z app[18807713] den [info]Updated the cache value for total-post-reads:javascript-to-know-for-react. Getting a fresh value for this took 39ms. Caching for 60000ms + 86400000ms stale in LRU.
2023-06-30T10:16:01Z app[18807713] den [info]GET kentcdodds.com/blog/javascript-to-know-for-react?_data=routes/blog_+/$slug 200  - 210.650 ms
2023-06-30T10:16:02Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:16:02Z app[18807713] den [info]GET kentcdodds.com/blog/rss.xml 200  - 39.794 ms
2023-06-30T10:16:03Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:16:03Z app[18807713] den [info]Updated the cache value for total-post-reads:colocation. Getting a fresh value for this took 26ms. Caching for 60000ms + 86400000ms stale in LRU.
2023-06-30T10:16:04Z app[18807713] den [info]HEAD 172.19.64.50:8080/ 200  - 73.044 ms
2023-06-30T10:16:04Z app[18807713] den [info]GET 172.19.64.50:8080/healthcheck 200  - 216.708 ms
2023-06-30T10:16:04Z app[18807713] den [info]GET kentcdodds.com/blog/colocation 200  - 295.163 ms
2023-06-30T10:16:04Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:16:05Z app[18807713] den [info]Updated the cache value for total-post-reads:common-mistakes-with-react-testing-library. Getting a fresh value for this took 47ms. Caching for 60000ms + 86400000ms stale in LRU.
2023-06-30T10:16:05Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused
2023-06-30T10:16:06Z app[5683777f75098e] den [info]6515368117173208084CEBF0: cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/kcd": dial tcp 127.0.0.1:8500: connect: connection refused

And here’s my current fly status:

App
  Name     = kcd          
  Owner    = personal     
  Hostname = kcd.fly.dev  
  Image    = kcd:         
  Platform = machines     

Machines
PROCESS ID              VERSION REGION  STATE   CHECKS                          LAST UPDATED         
app     5683777f75098e  379     den     started 2 total, 1 passing, 1 critical  2023-06-30T10:15:46Z

hey Kent, I think you need to run fly consul attach on the new app. It looks like it’s trying to connect to http://127.0.0.1:8500 rather than the Fly Consul instance.
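Something like this should do it (app name taken from your fly.toml):

$ fly consul attach -a kcd

That should set the FLY_CONSUL_URL secret on the app, which is what litefs.yml typically reads for the lease URL.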

That makes sense. Can that be a part of the migration script? It worked fine when I did this for epic-stack-template.

So I’ve run fly consul attach and now I’m getting these logs:

2023-06-30T13:06:10.708 app[18807713] den [info] http: error: cannot connect to self

2023-06-30T13:06:10.709 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')

Yes, we can add that to the migration script.
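For context, the lease section of litefs.yml on Fly usually points at that secret, which is why the url= field in the earlier logs was empty before the attach. A rough sketch (your actual config may differ):

lease:
  type: "consul"
  advertise-url: "http://${HOSTNAME}.vm.${FLY_APP_NAME}.internal:20202"
  consul:
    url: "${FLY_CONSUL_URL}"
    key: "litefs/${FLY_APP_NAME}"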

The LiteFS nodes generate a unique, persistent node ID and send that with each request just to make sure that a node doesn’t accidentally connect to itself. However, if you have a volume that was copied then it’ll have the same node ID as another node.

I’m guessing that’s the case here since there are two instance IDs (18807713 & 5683777f75098e). Can you check the contents of the $LITEFS_DATA_DIR/id file on both nodes? If it’s the same, you can delete one of them and restart that node. It should generate a new ID.

It might be more useful for LiteFS to simply keep the node ID in memory only and regenerate it on each startup.
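To compare them, something like this on each node should work (adjust the path to whatever data dir your litefs.yml uses):

$ fly ssh console -a kcd -s -C "cat <LITEFS_DATA_DIR>/id"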

I don’t know how to SSH into both nodes. When I run:

fly ssh console -C bash -s

I only get one node to choose from, and I think it’s the new machine:

den: 5683777f75098e fdaa:0:23df:a7b:d828:18fd:6e90:2 young-resonance-3411

Try removing the id file from that machine and restarting it. It should resolve the issue for both since they’ll then have unique IDs.
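Roughly this, with the path adjusted to your litefs.yml data dir:

$ fly ssh console -a kcd -C "rm <LITEFS_DATA_DIR>/id"
$ fly machine restart 5683777f75098e -a kcd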

That seems to have worked, though it doesn’t appear that any traffic is going to the new app now; it’s just getting healthcheck traffic. All the user traffic is still going to the old app:

2023-06-30T14:25:34.114 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:34.115 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:34.115 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:34.268 app[5683777f75098e] den [info] INFO Sending signal SIGINT to main child process w/ PID 240
2023-06-30T14:25:35.309 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:35.310 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:35.310 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:35.415 app[18807713] den [info] GET kentcdodds.com/?_data=routes/index 200 - 34.221 ms
2023-06-30T14:25:36.623 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:36.624 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:36.624 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:36.730 app[18807713] den [info] GET kentcdodds.com/blog/rss.xml 200 - 15.965 ms
2023-06-30T14:25:37.691 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:37.692 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:37.692 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:37.996 app[18807713] den [info] HEAD 172.19.64.50:8080/ 200 - 69.408 ms
2023-06-30T14:25:38.000 app[18807713] den [info] GET 172.19.64.50:8080/healthcheck 200 - 80.806 ms
2023-06-30T14:25:38.035 app[18807713] den [info] Background refresh for total-post-reads:__all-posts__ successful. Getting a fresh value for this took 95ms. Caching for 60000ms + 86400000ms stale in LRU.
2023-06-30T14:25:38.135 app[18807713] den [info] GET kentcdodds.com/ 200 - 46.973 ms
2023-06-30T14:25:38.360 app[18807713] den [info] GET kentcdodds.com/__metronome/metronome-6.0.1.js 200 - 1.807 ms
2023-06-30T14:25:38.743 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:38.743 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:38.743 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:39.300 app[5683777f75098e] den [info] INFO Sending signal SIGTERM to main child process w/ PID 240
2023-06-30T14:25:39.743 app[18807713] den [info] GET kentcdodds.com/calls/01/25/why-is-forward-ref-required-to-limit-re-renders 200 - 84.863 ms
2023-06-30T14:25:39.810 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:39.811 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:39.811 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:40.353 app[18807713] den [info] GET kentcdodds.com/blog/rss.xml 200 - 23.289 ms
2023-06-30T14:25:40.423 app[18807713] den [info] POST kentcdodds.com/__metronome 204 - 10.753 ms
2023-06-30T14:25:40.742 app[18807713] den [info] GET kentcdodds.com/blog/rss.xml 200 - 14.151 ms
2023-06-30T14:25:40.923 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:40.924 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:40.924 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:41.986 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:41.987 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:41.987 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:42.978 app[18807713] den [info] GET kentcdodds.com/blog/rss.xml 200 - 30.204 ms
2023-06-30T14:25:42.981 app[18807713] den [info] GET kentcdodds.com/courses?_data=routes/courses 200 - 19.098 ms
2023-06-30T14:25:43.065 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:43.065 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:43.065 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:43.501 app[18807713] den [info] GET kentcdodds.com/courses 200 - 47.842 ms
2023-06-30T14:25:44.141 app[5683777f75098e] den [info] 6515368117173208084CEBF0: existing primary found (18807713), connecting as replica
2023-06-30T14:25:44.142 app[18807713] den [info] http: error: cannot connect to self
2023-06-30T14:25:44.142 app[5683777f75098e] den [info] 6515368117173208084CEBF0: disconnected from primary with error, retrying: connect to primary: invalid response: code=400 ('http://18807713.vm.kcd.internal:20202')
2023-06-30T14:25:44.199 app[18807713] den [info] POST kentcdodds.com/__metronome 204 - 21.842 ms
2023-06-30T14:25:44.488 app[5683777f75098e] den [warn] Virtual machine exited abruptly
2023-06-30T14:25:44.589 app[18807713] den [info] GET kentcdodds.com/ 200 - 49.396 ms
2023-06-30T14:25:44.885 app[5683777f75098e] den [info] INFO Starting init (commit: db101a53)...
2023-06-30T14:25:44.906 app[5683777f75098e] den [info] INFO Mounting /dev/vdb at /data w/ uid: 0, gid: 0 and chmod 0755
2023-06-30T14:25:44.916 app[5683777f75098e] den [info] INFO Resized /data to 3217031168 bytes
2023-06-30T14:25:44.917 app[5683777f75098e] den [info] INFO Preparing to run: `docker-entrypoint.sh litefs mount -- node ./other/start.js` as root
2023-06-30T14:25:44.931 app[5683777f75098e] den [info] INFO [fly api proxy] listening at /.fly/api
2023-06-30T14:25:44.944 app[5683777f75098e] den [info] 2023/06/30 14:25:44 listening on [fdaa:0:23df:a7b:d828:18fd:6e90:2]:22 (DNS: [fdaa::3]:53)
2023-06-30T14:25:44.987 app[5683777f75098e] den [info] config file read from /etc/litefs.yml
2023-06-30T14:25:44.987 app[5683777f75098e] den [info] LiteFS main, commit=9ff02a303b5fc2e5c8bef5a173ab96dc4ab1c393
2023-06-30T14:25:44.987 app[5683777f75098e] den [info] Using Consul to determine primary
2023-06-30T14:25:45.429 app[5683777f75098e] den [info] initializing consul: key=litefs/kcd url=https://:b832a863-60c2-48d4-8289-bdc5d80fc444@consul-iad.fly-shared.net/kcd-g3zmqx5x3y49dlp4/ hostname=5683777f75098e advertise-url=http://5683777f75098e.vm.kcd.internal:20202
2023-06-30T14:25:45.443 app[5683777f75098e] den [info] wal-sync: no wal file exists on "cache.db", skipping sync with ltx
2023-06-30T14:25:45.452 app[18807713] den [info] POST kentcdodds.com/__metronome 204 - 12.195 ms
2023-06-30T14:25:45.608 app[5683777f75098e] den [info] wal-sync: no wal file exists on "sqlite.db", skipping sync with ltx
2023-06-30T14:25:45.919 app[18807713] den [info] GET kentcdodds.com/blog/rss.xml 200 - 23.836 ms
2023-06-30T14:25:46.081 app[5683777f75098e] den [info] LiteFS mounted to: /litefs
2023-06-30T14:25:46.081 app[5683777f75098e] den [info] http server listening on: http://localhost:20202
2023-06-30T14:25:46.081 app[5683777f75098e] den [info] waiting to connect to cluster
2023-06-30T14:25:46.206 app[5683777f75098e] den [info] BB9321531C60421741511B82: existing primary found (18807713), connecting as replica
2023-06-30T14:25:46.208 app[18807713] den [info] 6515368117173208084CEBF0: stream connected
2023-06-30T14:25:46.208 app[18807713] den [info] transaction file for txid 00000000000919f4 no longer available, writing snapshot
2023-06-30T14:25:46.208 app[18807713] den [info] writing snapshot "cache.db" @ 0000000000091c8a
2023-06-30T14:25:46.706 app[18807713] den [info] transaction file for txid 000000000001f601 no longer available, writing snapshot
2023-06-30T14:25:46.706 app[18807713] den [info] writing snapshot "sqlite.db" @ 000000000001f693
2023-06-30T14:25:46.721 app[5683777f75098e] den [info] snapshot received for "cache.db", removing other ltx files: 0000000000000001-0000000000091c8a.ltx
2023-06-30T14:25:48.503 app[18807713] den [info] HEAD 172.19.64.50:8080/ 200 - 431.550 ms
2023-06-30T14:25:48.510 app[18807713] den [info] GET 172.19.64.50:8080/healthcheck 200 - 493.875 ms
2023-06-30T14:25:49.350 app[5683777f75098e] den [info] snapshot received for "sqlite.db", removing other ltx files: 0000000000000001-000000000001f693.ltx
2023-06-30T14:25:49.359 app[18807713] den [info] GET kentcdodds.com/blog/rss.xml 200 - 45.363 ms
2023-06-30T14:25:49.401 app[18807713] den [info] POST kentcdodds.com/__metronome 204 - 12.269 ms
2023-06-30T14:25:50.106 app[5683777f75098e] den [info] connected to cluster, ready
2023-06-30T14:25:50.106 app[5683777f75098e] den [info] starting subprocess: node [./other/start.js]
2023-06-30T14:25:50.110 app[5683777f75098e] den [info] proxy server listening on: http://localhost:8080
2023-06-30T14:25:50.110 app[5683777f75098e] den [info] waiting for signal or subprocess to exit
2023-06-30T14:25:50.355 app[5683777f75098e] den [info] Found primary instance in .primary file: 18807713
2023-06-30T14:25:50.356 app[5683777f75098e] den [info] Instance (5683777f75098e) in den is not primary (the primary instance is 18807713). Skipping migrations.
2023-06-30T14:25:50.356 app[5683777f75098e] den [info] Starting app...
2023-06-30T14:25:50.894 app[18807713] den [info] GET kentcdodds.com/blog 200 - 55.198 ms
2023-06-30T14:25:51.085 app[5683777f75098e] den [info] > kentcdodds.com@1.0.0 start
2023-06-30T14:25:51.085 app[5683777f75098e] den [info] > cross-env NODE_ENV=production node ./index.js
2023-06-30T14:25:51.144 app[18807713] den [info] Background refresh for total-post-reads:application-state-management-with-react successful. Getting a fresh value for this took 81ms. Caching for 60000ms + 86400000ms stale in LRU.
2023-06-30T14:25:53.136 app[18807713] den [info] GET kentcdodds.com/courses 200 - 46.423 ms
2023-06-30T14:25:53.239 app[18807713] den [info] POST kentcdodds.com/__metronome 204 - 3.303 ms
2023-06-30T14:25:53.483 app[18807713] den [info] POST kentcdodds.com/__metronome 204 - 10.604 ms
2023-06-30T14:25:54.554 app[5683777f75098e] den [info] [@octokit/plugin-throttling] `onAbuseLimit()` is deprecated and will be removed in a future release of `@octokit/plugin-throttling`, please use the `onSecondaryRateLimit` handler instead
2023-06-30T14:25:54.609 app[5683777f75098e] den [info] 🐨 let's get rolling!
2023-06-30T14:25:54.610 app[5683777f75098e] den [info] Local: http://localhost:8081
2023-06-30T14:25:54.610 app[5683777f75098e] den [info] On Your Network: http://172.19.130.210:8081
2023-06-30T14:25:54.610 app[5683777f75098e] den [info] Press Ctrl+C to stop
2023-06-30T14:25:54.671 app[5683777f75098e] den [info] prisma:info Starting a sqlite pool with 3 connections.
2023-06-30T14:25:58.597 app[18807713] den [info] HEAD 172.19.64.50:8080/ 200 - 32.247 ms
2023-06-30T14:25:58.599 app[18807713] den [info] GET 172.19.64.50:8080/healthcheck 200 - 47.151 ms
2023-06-30T14:26:00.233 app[5683777f75098e] den [info] GET 172.19.130.210:8080/healthcheck 200 - 90.599 ms
2023-06-30T14:26:00.757 app[18807713] den [info] GET kentcdodds.com/blog/rss.xml 200 - 22.349 ms
2023-06-30T14:26:00.788 health[5683777f75098e] den [info] Health check on port 8080 is now passing.
2023-06-30T14:26:03.054 app[18807713] den [info] Background refresh for total-post-reads:how-to-test-custom-react-hooks successful. Getting a fresh value for this took 352ms. Caching for 60000ms + 86400000ms stale in LRU.
2023-06-30T14:26:03.351 app[18807713] den [info] GET kentcdodds.com/blog/how-to-test-custom-react-hooks 200 - 436.277 ms
2023-06-30T14:26:03.460 app[18807713] den [info] Updated the cache value for blog:colocation:rankings. Getting a fresh value for this took 793ms. Caching for 604800000ms + 86400000ms stale in SQLite cache.
2023-06-30T14:26:03.561 app[18807713] den [info] Updated the cache value for blog:rankings. Getting a fresh value for this took 894ms. Caching for 3600000ms + 86400000ms stale in SQLite cache.
2023-06-30T14:26:03.632 app[18807713] den [info] POST kentcdodds.com/blog/colocation?_data=routes/blog_+/$slug 200 - 964.775 ms
2023-06-30T14:26:03.779 app[18807713] den [info] GET kentcdodds.com/__metronome/metronome-6.0.1.js 200 - 3.417 ms
2023-06-30T14:26:03.960 app[18807713] den [info] Background refresh for total-post-reads:colocation successful. Getting a fresh value for this took 70ms. Caching for 60000ms + 86400000ms stale in LRU.
2023-06-30T14:26:03.969 app[18807713] den [info] GET kentcdodds.com/blog/colocation?_data=routes/blog_+/$slug 200 - 85.789 ms
2023-06-30T14:26:03.972 app[18807713] den [info] GET kentcdodds.com/blog/colocation?_data=root 200 - 99.171 ms
2023-06-30T14:26:08.653 app[18807713] den [info] HEAD 172.19.64.50:8080/ 200 - 39.600 ms
2023-06-30T14:26:08.656 app[18807713] den [info] GET 172.19.64.50:8080/healthcheck 200 - 52.427 ms
2023-06-30T14:26:10.257 app[5683777f75098e] den [info] GET 172.19.130.210:8080/healthcheck 200 - 11.822 ms
2023-06-30T14:26:11.558 app[18807713] den [info] POST kentcdodds.com/__metronome 204 - 20.730 ms

Our backend shows an old Nomad alloc still hanging around (it’s the shorter of the two app[<some_id>] IDs in those logs).

You should be able to run fly migrate-to-v2 troubleshoot and it’ll get rid of that :slight_smile:
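
For example, something like this from the app’s directory (just a sketch; the fly status at the end is only to confirm the stray alloc is gone and everything is running on Machines):

$ fly migrate-to-v2 troubleshoot
$ fly status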


Awesome, that got it all fixed up. Thank you very much :slight_smile:

