Hi all,
I'm looking for help with my Django-powered app, as I'm experiencing performance issues that I can't find a solution for. Here are the logs:
$ fly logs -a bl-app
Waiting for logs…
2025-06-10T11:46:21.539 app[148e276f172968] ams [info] [2025-06-10 11:46:21 +0000] [632] [CRITICAL] WORKER TIMEOUT (pid:639)
2025-06-10T11:46:21.542 app[148e276f172968] ams [info] [2025-06-10 11:46:21 +0000] [632] [CRITICAL] WORKER TIMEOUT (pid:640)
2025-06-10T11:46:21.619 app[148e276f172968] ams [info] [2025-06-10 13:46:21 +0200] [639] [INFO] Worker exiting (pid: 639)
2025-06-10T11:46:21.860 app[148e276f172968] ams [info] [2025-06-10 13:46:21 +0200] [640] [INFO] Worker exiting (pid: 640)
2025-06-10T11:46:23.219 app[148e276f172968] ams [info] [2025-06-10 11:46:23 +0000] [632] [ERROR] Worker (pid:640) was sent SIGKILL! Perhaps out of memory?
2025-06-10T11:46:23.459 app[148e276f172968] ams [info] [2025-06-10 11:46:23 +0000] [632] [ERROR] Worker (pid:639) was sent SIGKILL! Perhaps out of memory?
2025-06-10T11:46:23.460 app[148e276f172968] ams [info] [2025-06-10 11:46:23 +0000] [644] [INFO] Booting worker with pid: 644
2025-06-10T11:46:24.020 app[148e276f172968] ams [info] [2025-06-10 11:46:23 +0000] [645] [INFO] Booting worker with pid: 645
2025-06-10T11:46:57.655 app[148e276f172968] ams [info] [WSGI] Memory on startup: 39.02 MB
2025-06-10T11:46:57.655 app[148e276f172968] ams [info] [WSGI] Setting default DJANGO_SETTINGS_MODULE
2025-06-10T11:46:57.659 app[148e276f172968] ams [info] [WSGI] Calling get_wsgi_application()
2025-06-10T11:46:57.667 app[148e276f172968] ams [info] [WSGI] Memory on startup: 39.02 MB
2025-06-10T11:46:57.667 app[148e276f172968] ams [info] [WSGI] Setting default DJANGO_SETTINGS_MODULE
2025-06-10T11:46:57.667 app[148e276f172968] ams [info] [WSGI] Calling get_wsgi_application()
2025-06-10T11:47:54.423 app[148e276f172968] ams [info] [2025-06-10 11:47:53 +0000] [632] [CRITICAL] WORKER TIMEOUT (pid:644)
2025-06-10T11:47:54.424 app[148e276f172968] ams [info] [2025-06-10 11:47:54 +0000] [632] [CRITICAL] WORKER TIMEOUT (pid:645)
2025-06-10T11:47:54.579 app[148e276f172968] ams [info] [2025-06-10 13:47:54 +0200] [645] [INFO] Worker exiting (pid: 645)
2025-06-10T11:47:54.580 app[148e276f172968] ams [info] [2025-06-10 13:47:54 +0200] [644] [INFO] Worker exiting (pid: 644)
2025-06-10T11:47:56.098 app[148e276f172968] ams [info] [2025-06-10 11:47:56 +0000] [632] [ERROR] Worker (pid:644) was sent SIGKILL! Perhaps out of memory?
2025-06-10T11:47:56.099 app[148e276f172968] ams [info] [2025-06-10 11:47:56 +0000] [632] [ERROR] Worker (pid:645) was sent SIGKILL! Perhaps out of memory?
2025-06-10T11:47:56.183 app[148e276f172968] ams [info] [2025-06-10 11:47:56 +0000] [648] [INFO] Booting worker with pid: 648
2025-06-10T11:47:56.499 app[148e276f172968] ams [info] [2025-06-10 11:47:56 +0000] [649] [INFO] Booting worker with pid: 649
The application logs suggest a potential memory issue, but I'm skeptical, as even static pages occasionally fail to load. This intermittent behavior is key: the app works perfectly at times, then suddenly produces the errors shown in the logs. I haven't identified a pattern yet, though a common scenario is sudden unresponsiveness, even for simple pages.
To investigate memory, I added a print statement to `wsgi.py`, which shows usage at a modest ~40 MB. I've already attempted various fixes, including disabling Sentry, increasing the worker timeout to 90 seconds, and optimizing database queries with `prefetch_related` and `select_related`, but the problem still shows up at random.
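For reference, the instrumentation in `wsgi.py` is essentially just this (a rough sketch; I'm reading the process RSS via psutil, and the settings-module path below is a placeholder, not my real one):

```python
# wsgi.py (simplified sketch)
import os

import psutil
from django.core.wsgi import get_wsgi_application


def log_memory(label: str) -> None:
    # Resident set size of the current worker process, in MB
    rss_mb = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
    # flush=True so the line shows up in `fly logs` right away
    print(f"[WSGI] {label}: {rss_mb:.2f} MB", flush=True)


log_memory("Memory on startup")

print("[WSGI] Setting default DJANGO_SETTINGS_MODULE", flush=True)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")  # placeholder

print("[WSGI] Calling get_wsgi_application()", flush=True)
application = get_wsgi_application()
```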
Do you have any further ideas? Thanks in advance for your input.
Note: I had a similar issue about half a year ago, which I was able to resolve by optimizing the database queries. See here.
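For context, the kind of query optimization I mean, both back then and in the `select_related`/`prefetch_related` attempt above, looks roughly like this (the model and field names are made up, not my actual ones):

```python
# Hypothetical example; Order, customer and items are placeholder names.
from myapp.models import Order


def order_list():
    # select_related pulls the FK in via a JOIN (same query);
    # prefetch_related batches the reverse FK / M2M into one extra query,
    # instead of issuing one query per row in the template loop.
    return (
        Order.objects
        .select_related("customer")
        .prefetch_related("items")
    )
```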