Thanks @senyo
After moving to auto-start/auto-stop, I often see a machine wound down in the logs, immediately followed by that same machine starting back up:
```
# quitting because there's 2 machines?
2023-06-13T15:20:15Z proxy [e148e452addd89] yyz [info]Downscaling app udns in region yyz. Automatically stopping machine e148e452addd89. 2 instances are running, 0 are at soft limit, we only need 1 running
# starting back up after 5s ...
2023-06-13T15:20:20Z app[e148e452addd89] yyz [info]2023-06-13T15:20:20.240Z I NodeJs http-check listening on: [::]:8888
2023-06-13T15:20:20Z app[e148e452addd89] yyz [info]2023-06-13T15:20:20.241Z I NodeJs DoT listening on: [::]:10000
2023-06-13T15:20:20Z app[e148e452addd89] yyz [info]2023-06-13T15:20:20.241Z I NodeJs DoH listening on: [::]:8080
2023-06-13T15:20:20Z proxy[e148e452addd89] yyz [info]machine became reachable in 618.289451ms
```
Curiously, there's only one machine in yyz, even though the proxy log says 2 instances are running. This happens in most other regions too, by the way. Here's the scale configuration for the udns app:
```
➜ fly scale show -a udns
VM Resources for app: udns
Groups
NAME  COUNT  KIND    CPUS  MEMORY  REGIONS
app   39     shared  1     256 MB  ams(2),arn,atl,bog,bom(2),bos,cdg,den,dfw,ewr,eze,fra(2),gdl,gig,gru,hkg,iad,jnb,lax,lhr(2),mad,mia,nrt,ord,otp,phx,qro,scl,sea,sin(2),sjc,syd,yul,yyz
```
Only regions ams, bom, fra, lhr, and sin have 2 machines; the rest have 1. Am I hitting a bug here, where Fly expects at least 2 machines per region for all regions?
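For reference, the auto-start/auto-stop behavior above is driven by the service section of fly.toml, which in my case looks roughly like this (a sketch; the internal port and `min_machines_running` value here are illustrative, not my exact config):

```toml
# Sketch of the relevant fly.toml service settings (values illustrative)
[http_service]
  internal_port = 8080          # where the app listens inside the machine
  auto_stop_machines = true     # proxy stops machines with no traffic
  auto_start_machines = true    # proxy starts machines on incoming requests
  min_machines_running = 0      # floor for stopped machines in the primary region
```

My expectation was that with these settings the proxy would stop an idle machine and leave it stopped until a request arrives, rather than restarting it within seconds.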