Hey there!
I have an app that is connected to a Fly Postgres app. While my app is being actively used, it queries Postgres once every hour to refresh the expired tokens stored there.
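For context, the hourly refresh is basically a single long-lived connection from the app that fires a query on a timer. A simplified sketch of what it does (the `tokens` table and the query are made-up placeholders, not my real schema):

```ts
// Simplified sketch of the hourly refresh job (illustrative, not my actual code).
// The app keeps one long-lived connection to the Postgres app open
// and fires a refresh query on a timer while the app is being used.
import { Client } from "pg";

const db = new Client({ connectionString: process.env.DATABASE_URL });
await db.connect();

setInterval(() => {
  // "tokens" is a made-up table name; the real query refreshes expired tokens.
  db.query("SELECT id FROM tokens WHERE expires_at < now()")
    .then((res) => console.log(`refreshing ${res.rowCount} expired tokens`))
    .catch((err) => console.error("refresh failed:", err));
}, 60 * 60 * 1000); // once an hour
```

So there is always at least one open connection from the app, which is why I expected the database not to go to sleep.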
While creating the Postgres app I selected the option to enable “automatic scale to zero”, and I think that’s probably why it shuts down after an hour. The part of the docs that I don’t understand is this: “If you have an app that connects to your database, make sure that it also scales to zero (otherwise the connection remaining open will prevent it from “going to sleep”).”
The app that is connected to the database doesn’t scale to zero, so I don’t understand why my Postgres still “goes to sleep” in this case.
How do I either prevent the Postgres app from going to sleep while there is an active connection, or at least wake it up when a query comes in from the app?
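For what it’s worth, one workaround I’ve been considering is having the app call the Machines API to start the Postgres machine right before the hourly query, though I’m not sure that’s the intended approach. A rough sketch (the app name and token handling are placeholders, and I’m assuming the `/machines/{id}/start` endpoint from the Machines API docs):

```ts
// Rough sketch of a possible workaround (an assumption on my part, not a known fix):
// ask the Machines API to start the Postgres machine before querying it.
// APP_NAME and FLY_API_TOKEN are placeholders; the machine ID is taken from my logs.
const APP_NAME = "my-postgres-app";
const MACHINE_ID = "148ed544c26658";
const FLY_API_TOKEN = process.env.FLY_API_TOKEN;

async function wakePostgres(): Promise<void> {
  const res = await fetch(
    `https://api.machines.dev/v1/apps/${APP_NAME}/machines/${MACHINE_ID}/start`,
    { method: "POST", headers: { Authorization: `Bearer ${FLY_API_TOKEN}` } }
  );
  if (!res.ok) throw new Error(`machine start failed: ${res.status}`);
}

// e.g. await wakePostgres(), wait a few seconds for health checks, then run the query
```

But that feels like working around the feature, so I’d prefer a built-in way if there is one.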
Thanks!
Here are the logs from the Postgres app, from the moment the health checks are passing again (“awoken” with the “fly machines start…” command) to the moment it goes to sleep:

```
2023-04-29T20:45:30.148 health[148ed544c26658] ams [info] Health check for your postgres vm is now passing.
2023-04-29T20:45:31.147 health[148ed544c26658] ams [info] Health check for your postgres role is now passing.
2023-04-29T20:45:32.147 health[148ed544c26658] ams [info] Health check for your postgres database is now passing.
2023-04-29T20:50:24.982 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T20:50:25.263 app[148ed544c26658] ams [info] postgres | 2023-04-29 20:50:25.263 UTC [570] LOG: checkpoint starting: time
2023-04-29T20:50:26.367 app[148ed544c26658] ams [info] postgres | 2023-04-29 20:50:26.367 UTC [570] LOG: checkpoint complete: wrote 12 buffers (0.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=1.103 s, sync=0.001 s, total=1.105 s; sync files=9, longest=0.001 s, average=0.001 s; distance=11 kB, estimate=11 kB
2023-04-29T20:50:30.611 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 20:50:30] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T20:55:25.076 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T20:55:25.466 app[148ed544c26658] ams [info] postgres | 2023-04-29 20:55:25.466 UTC [570] LOG: checkpoint starting: time
2023-04-29T20:55:25.568 app[148ed544c26658] ams [info] postgres | 2023-04-29 20:55:25.568 UTC [570] LOG: checkpoint complete: wrote 2 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.101 s, sync=0.001 s, total=0.103 s; sync files=2, longest=0.001 s, average=0.001 s; distance=1 kB, estimate=10 kB
2023-04-29T20:55:31.050 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 20:55:31] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:00:25.024 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T21:00:31.496 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 21:00:31] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:05:25.075 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T21:05:31.939 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 21:05:31] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:10:24.979 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T21:10:32.378 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 21:10:32] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:15:25.060 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T21:15:32.829 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 21:15:32] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:20:24.989 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T21:20:33.270 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 21:20:33] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:25:25.029 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T21:25:33.715 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 21:25:33] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:30:25.069 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T21:30:34.158 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 21:30:34] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:35:25.038 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T21:35:34.601 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 21:35:34] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:40:25.077 app[148ed544c26658] ams [info] monitor | Voting member(s): 1, Active: 1, Inactive: 0, Conflicts: 0
2023-04-29T21:40:35.040 app[148ed544c26658] ams [info] repmgrd | [2023-04-29 21:40:35] [INFO] monitoring primary node "fdaa:1:3614:a7b:10d:f0db:ac6a:2" (ID: 406103190) in normal state
2023-04-29T21:45:24.897 app[148ed544c26658] ams [info] Current connection count is 1
2023-04-29T21:45:24.904 app[148ed544c26658] ams [info] Starting clean up.
2023-04-29T21:45:24.905 app[148ed544c26658] ams [info] Umounting /dev/vdb from /data
2023-04-29T21:45:24.905 app[148ed544c26658] ams [info] error umounting /data: EBUSY: Device or resource busy, retrying in a bit
2023-04-29T21:45:25.658 app[148ed544c26658] ams [info] error umounting /data: EBUSY: Device or resource busy, retrying in a bit
2023-04-29T21:45:26.410 app[148ed544c26658] ams [info] error umounting /data: EBUSY: Device or resource busy, retrying in a bit
2023-04-29T21:45:27.161 app[148ed544c26658] ams [info] error umounting /data: EBUSY: Device or resource busy, retrying in a bit
2023-04-29T21:45:28.914 app[148ed544c26658] ams [info] [ 3604.281746] reboot: Restarting system
2023-04-29T21:45:29.655 runner[148ed544c26658] ams [info] machine exited with exit code 0, not restarting
```