Database machine fails to restart

Since yesterday, my database machine has been crashing at random. I managed to restart it once, but now it won't come back up at all. I see these errors in the console:

```
2023-02-21T02:33:02.652 app[9080291c6d9948] cdg [info] Starting clean up.
2023-02-21T02:37:40.593 app[9080291c6d9948] cdg [info] Starting init (commit: b8364bb)...
2023-02-21T02:37:40.606 app[9080291c6d9948] cdg [info] Mounting /dev/vdb at /data w/ uid: 0, gid: 0 and chmod 0755
2023-02-21T02:37:40.607 app[9080291c6d9948] cdg [info] could not resize filesystem to fill device size: Operation not permitted (os error 1)
2023-02-21T02:37:40.608 app[9080291c6d9948] cdg [info] Preparing to run: `docker-entrypoint.sh start` as root
2023-02-21T02:37:40.623 app[9080291c6d9948] cdg [info] 2023/02/21 02:37:40 listening on [fdaa:0:8835:a7b:5adc:1ed:8f9b:2]:22 (DNS: [fdaa::3]:53)
2023-02-21T02:37:40.668 app[9080291c6d9948] cdg [info] [ 0.165941] EXT4-fs error (device vdb): ext4_validate_block_bitmap:390: comm start: bg 3: bad block bitmap checksum
2023-02-21T02:37:40.680 app[9080291c6d9948] cdg [info] [ 0.178122] EXT4-fs error (device vdb): ext4_lookup:1709: inode #34: comm chown: deleted inode referenced: 1021
2023-02-21T02:37:40.685 app[9080291c6d9948] cdg [info] panic: exit status 1
2023-02-21T02:37:40.685 app[9080291c6d9948] cdg [info] goroutine 1 [running]:
2023-02-21T02:37:41.617 app[9080291c6d9948] cdg [info] Starting clean up.
2023-02-21T02:37:41.617 app[9080291c6d9948] cdg [info] Umounting /dev/vdb from /data
2023-02-21T02:37:42.622 app[9080291c6d9948] cdg [info] [ 2.119921] reboot: Restarting system
2023-02-21T02:33:02.652 app[9080291c6d9948] cdg [info] Umounting /dev/vdb from /data
2023-02-21T02:33:03.657 app[9080291c6d9948] cdg [info] [ 2.122897] reboot: Restarting system
2023-02-21T02:33:31.380 app[2d8b1d1b] cdg [info] keeper | 2023-02-21 02:33:31.380 UTC [7262] ERROR: invalid page in block 13 of relation base/13757/2840
2023-02-21T02:33:31.380 app[2d8b1d1b] cdg [info] keeper | 2023-02-21 02:33:31.380 UTC [7262] CONTEXT: automatic analyze of table "postgres.public.wiki"
2023-02-21T02:33:31.391 app[2d8b1d1b] cdg [info] keeper | 2023-02-21 02:33:31.391 UTC [7262] ERROR: invalid page in block 13 of relation base/13757/2840
2023-02-21T02:33:31.391 app[2d8b1d1b] cdg [info] keeper | 2023-02-21 02:33:31.391 UTC [7262] CONTEXT: while scanning block 13 of relation "pg_toast.pg_toast_2619"
2023-02-21T02:33:31.391 app[2d8b1d1b] cdg [info] keeper | automatic vacuum of table "postgres.pg_toast.pg_toast_2619"
```

And this loops forever. What can I do?

Here is more of the constant stream of errors, in case it makes sense to anyone:

```
2023-02-21T11:41:23.284 app[f6adb1eb] cdg [info] keeper | 2023-02-21T11:41:23.283Z ERROR cmd/keeper.go:719 cannot get configured pg parameters {"error": "pq: the database system is shutting down"}
2023-02-21T11:41:23.446 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:23.445 UTC [2659] LOG: request to flush past end of generated WAL; request 0/50C8B530, current position 0/50C8A608
2023-02-21T11:41:23.446 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:23.445 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:23.446 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:23.445 UTC [2659] ERROR: xlog flush request 0/50C8B530 is not satisfied --- flushed only to 0/50C8A608
2023-02-21T11:41:23.446 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:23.445 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:23.446 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:23.445 UTC [2659] WARNING: could not write block 19 of base/13757/2619
2023-02-21T11:41:23.446 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:23.445 UTC [2659] DETAIL: Multiple failures --- write error might be permanent.
2023-02-21T11:41:23.912 app[f6adb1eb] cdg [info] keeper | .2023-02-21 11:41:23.911 UTC [2425] FATAL: the database system is shutting down
2023-02-21T11:41:24.277 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:24.276 UTC [2427] FATAL: the database system is shutting down
2023-02-21T11:41:24.448 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:24.447 UTC [2659] LOG: request to flush past end of generated WAL; request 0/50C8B530, current position 0/50C8A608
2023-02-21T11:41:24.448 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:24.447 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:24.448 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:24.447 UTC [2659] ERROR: xlog flush request 0/50C8B530 is not satisfied --- flushed only to 0/50C8A608
2023-02-21T11:41:24.448 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:24.447 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:24.448 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:24.447 UTC [2659] WARNING: could not write block 19 of base/13757/2619
2023-02-21T11:41:24.448 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:24.447 UTC [2659] DETAIL: Multiple failures --- write error might be permanent.
2023-02-21T11:41:25.450 app[f6adb1eb] cdg [info] keeper | .2023-02-21 11:41:25.449 UTC [2659] LOG: request to flush past end of generated WAL; request 0/50C8B530, current position 0/50C8A608
2023-02-21T11:41:25.450 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:25.449 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:25.450 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:25.449 UTC [2659] ERROR: xlog flush request 0/50C8B530 is not satisfied --- flushed only to 0/50C8A608
2023-02-21T11:41:25.450 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:25.449 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:25.450 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:25.449 UTC [2659] WARNING: could not write block 19 of base/13757/2619
2023-02-21T11:41:25.450 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:25.449 UTC [2659] DETAIL: Multiple failures --- write error might be permanent.
2023-02-21T11:41:25.786 app[f6adb1eb] cdg [info] keeper | .2023-02-21 11:41:25.785 UTC [2429] FATAL: the database system is shutting down
2023-02-21T11:41:25.786 app[f6adb1eb] cdg [info] keeper | 2023-02-21T11:41:25.786Z ERROR cmd/keeper.go:719 cannot get configured pg parameters {"error": "pq: the database system is shutting down"}
2023-02-21T11:41:26.185 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:26.184 UTC [2431] FATAL: the database system is shutting down
2023-02-21T11:41:26.452 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:26.451 UTC [2659] LOG: request to flush past end of generated WAL; request 0/50C8B530, current position 0/50C8A608
2023-02-21T11:41:26.452 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:26.451 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:26.452 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:26.451 UTC [2659] ERROR: xlog flush request 0/50C8B530 is not satisfied --- flushed only to 0/50C8A608
2023-02-21T11:41:26.452 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:26.451 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:26.452 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:26.451 UTC [2659] WARNING: could not write block 19 of base/13757/2619
2023-02-21T11:41:26.452 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:26.451 UTC [2659] DETAIL: Multiple failures --- write error might be permanent.
2023-02-21T11:41:27.454 app[f6adb1eb] cdg [info] keeper | .2023-02-21 11:41:27.454 UTC [2659] LOG: request to flush past end of generated WAL; request 0/50C8B530, current position 0/50C8A608
2023-02-21T11:41:27.454 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:27.454 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:27.454 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:27.454 UTC [2659] ERROR: xlog flush request 0/50C8B530 is not satisfied --- flushed only to 0/50C8A608
2023-02-21T11:41:27.454 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:27.454 UTC [2659] CONTEXT: writing block 19 of relation base/13757/2619
2023-02-21T11:41:27.454 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:27.454 UTC [2659] WARNING: could not write block 19 of base/13757/2619
2023-02-21T11:41:27.454 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:27.454 UTC [2659] DETAIL: Multiple failures --- write error might be permanent.
2023-02-21T11:41:27.766 app[f6adb1eb] cdg [info] keeper | .2023-02-21 11:41:27.764 UTC [2433] FATAL: the database system is shutting down
2023-02-21T11:41:28.289 app[f6adb1eb] cdg [info] keeper | 2023-02-21 11:41:28.288 UTC [2434] FATAL: the database system is shutting down
2023-02-21T11:41:28.289 app[f6adb1eb] cdg [info] keeper | 2023-02-21T11:41:28.288Z ERROR cmd/keeper.go:719 cannot get configured pg parameters {"error": "pq: the database system is shutting down"}
```

To answer my own question: the problem appears to have been caused by database corruption (more precisely, in the pg_statistic system catalog, which is not critical since it only holds planner statistics that ANALYZE rebuilds). What's still unclear to me is why Postgres or Fly kept shutting the database down and restarting it every few seconds, which left it reachable for only a few seconds at a time at regular intervals.
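
For anyone else who hits this, the `base/13757/2619` path in the logs can be mapped back to a database and table name to see what is actually damaged. This is only a rough sketch, assuming flyctl and psql are installed locally; `my-db-app` is a placeholder app name, and you need to catch one of the brief windows when the instance accepts connections:

```sh
# In one terminal: forward the Postgres port of the broken instance to localhost.
# (my-db-app is a placeholder; the instance is only up for a few seconds at a time,
# so this may take several attempts.)
fly proxy 15432:5432 -a my-db-app

# In another terminal: resolve the "base/13757/2619" path from the logs.
# 13757 is the database OID, 2619 the relfilenode of the damaged relation.
psql -h localhost -p 15432 -U postgres \
    -c "SELECT datname FROM pg_database WHERE oid = 13757;" \
    -c "SELECT oid::regclass AS relation FROM pg_class WHERE relfilenode = 2619;"
# The second query names pg_statistic here (pg_toast_2619 is its TOAST table);
# it stores only planner statistics, so no user data lives in it.
```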
My solution was to back up my data, create a new machine/cluster, restore the data there, and delete the old one.
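
In rough strokes, that migration looked like the sketch below. This is not an official procedure: `old-db-app` and `new-db-app` are placeholder names, the exact flyctl flags may differ between versions, and it assumes the broken instance stays up long enough for the dump to finish (given how briefly it stays up, it may need several attempts, or a restore from an earlier volume snapshot instead).

```sh
# 1. Forward the old instance's Postgres port and dump the application database
#    (named "postgres" here, per the logs) during a window when it is up.
fly proxy 15432:5432 -a old-db-app
pg_dump -h localhost -p 15432 -U postgres -d postgres -f backup.sql

# 2. Create a fresh Postgres cluster.
fly postgres create --name new-db-app

# 3. Forward the new cluster's port and restore the dump into it.
fly proxy 15433:5432 -a new-db-app
psql -h localhost -p 15433 -U postgres -d postgres -f backup.sql

# 4. Re-point the application at new-db-app, verify everything works,
#    then delete the old app and its volume.
fly apps destroy old-db-app
```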