There is a problem with my deployed app, and it seems to be related to the proxy. Here are the error messages; can anyone point me in the right direction?
I have tried the following (roughly the commands shown below):
- destroying the machines
- redeploying
- restarting the app
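Roughly, via flyctl (the machine ID and app name are placeholders):

```sh
# Destroy the existing machines, then deploy fresh ones
fly machine destroy <machine-id> --force
fly deploy

# Restart the app's machines in place
fly apps restart <app-name>
```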
2024-01-11T07:26:36.987 proxy[xx] lhr [error] could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: Connection reset by peer (os error 104)
2024-01-11T07:26:37.154 proxy[xx] sin [error] could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: end of file before message length reached
2024-01-11T07:26:37.154 app[xx] sin [info] {"level":"error","error":"http: invalid Read on closed Body","time":"2024-01-11T07:26:37Z","message":"Failed to read request body"}
For the last few days I have also suddenly been dealing with thousands of “request aborted” errors on my Node.js server.
I am only starting to investigate this issue, but since I haven’t made any relevant changes on my side, I suspect the Fly proxy could be at fault. I have also already notified support.
I have received these errors in the past, but only a few. I also never saw them when I was still on Heroku; they first appeared when I moved to Fly last autumn.
From the Express docs on “request aborted”:
This error will occur when the request is aborted by the client before reading the body has finished. The received property will be set to the number of bytes received before the request was aborted and the expected property is set to the number of expected bytes. The status property is set to 400 and type property is set to 'request.aborted'.
BadRequestError: request aborted
at IncomingMessage.onAborted (/usr/app/server/node_modules/raw-body/index.js:238:10)
at IncomingMessage.emit (node:events:514:28)
at IncomingMessage.emit (node:domain:489:12)
at IncomingMessage._destroy (node:_http_incoming:224:10)
at _destroy (node:internal/streams/destroy:109:10)
at IncomingMessage.destroy (node:internal/streams/destroy:71:5)
at abortIncoming (node:_http_server:766:9)
at socketOnClose (node:_http_server:760:3)
at Socket.emit (node:events:526:35)
at Socket.emit (node:domain:489:12) {
code: 'ECONNABORTED',
expected: 2878,
length: 2878,
received: 0,
type: 'request.aborted'
}
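In case it’s useful to others reading this: the error surfaces in Express’s error-handling middleware, so it can at least be downgraded from a log flood to a warning while investigating. A minimal sketch, assuming the body is parsed with express.json() (which uses raw-body, the source of this error); the route is a placeholder:

```js
const express = require('express');
const app = express();

app.use(express.json());

// Placeholder route for illustration
app.post('/data', (req, res) => res.sendStatus(204));

// Error handler: raw-body tags client aborts with type 'request.aborted'
app.use((err, req, res, next) => {
  if (err.type === 'request.aborted') {
    // The client (or a proxy in front of us) dropped the connection before
    // the body finished arriving; there is nobody left to respond to.
    console.warn(`request aborted: received ${err.received} of ${err.expected} bytes`);
    return;
  }
  next(err);
});

app.listen(3000);
```

This doesn’t explain why the aborts happen, but it keeps them out of the error logs so the real failures stand out.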
And in my React app, users are hitting quite a few 520 status errors. Those are only logged after retried requests fail, so this seems to actually be affecting my users.
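For context, the client-side logging works roughly like this (a sketch; fetchWithRetry and the retry count are placeholders for what the real app does). A 520 only gets reported after every attempt has failed:

```js
// Retry a fetch a few times with linear backoff; only log once all attempts fail.
async function fetchWithRetry(url, options = {}, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetch(url, options);
      if (res.ok) return res;
      if (attempt === maxAttempts) {
        console.error(`request failed after ${maxAttempts} attempts: ${res.status}`);
        return res;
      }
    } catch (err) {
      if (attempt === maxAttempts) {
        console.error(`request errored after ${maxAttempts} attempts`, err);
        throw err;
      }
    }
    // Wait a bit longer before each retry
    await new Promise((resolve) => setTimeout(resolve, 500 * attempt));
  }
}
```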
We’re having similar issues with Cloudflare proxying that we didn’t have a few weeks ago.
Ruby application
reference-structuring--v2--production could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: Connection reset by peer (os error 104)
reference-structuring--v2--production could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: end of file before message length reached
Node.js application
pergamon--v2--production could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: stream error received: unexpected internal error encountered
These seem to have started happening quite recently. We haven’t changed any settings in Cloudflare. (EDIT: Cloudflare seems to be a red herring; these issues still happen even with Cloudflare proxying disabled.)
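For anyone trying to reproduce the symptom locally: a client that advertises a request body and then resets the connection before sending it triggers exactly this chain (proxy-side “Connection reset by peer”, app-side “request aborted” with received: 0). A minimal sketch in Node; host, port, and path are placeholders:

```js
const net = require('net');

// Open a raw TCP connection to the app (placeholder host/port), send headers
// that promise a 2878-byte body, then reset the connection before sending it.
const socket = net.connect(3000, '127.0.0.1', () => {
  socket.write(
    'POST /data HTTP/1.1\r\n' +
    'Host: 127.0.0.1\r\n' +
    'Content-Type: application/json\r\n' +
    'Content-Length: 2878\r\n' +
    '\r\n'
  );
  // resetAndDestroy() (Node >= 16.17) sends an RST, matching the
  // "Connection reset by peer" variant; a plain destroy() may send a clean
  // FIN instead, matching "end of file before message length reached".
  // Either way the server-side body read fails with
  // BadRequestError: request aborted (received: 0, expected: 2878).
  setTimeout(() => socket.resetAndDestroy(), 100);
});
```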
### google.com
traceroute to google.com (142.251.133.238), 64 hops max, 52 byte packets
1 192.168.100.1 (192.168.100.1) 10.024 ms 9.935 ms 9.229 ms
2 100.71.48.2 (100.71.48.2) 15.118 ms 41.227 ms 11.907 ms
3 10.0.3.228 (10.0.3.228) 10.304 ms * 13.111 ms
4 be5-2.cf223-br-05.claro.net.ar (170.51.254.176) 21.115 ms 27.233 ms
be5-2.c1900-br-05.claro.net.ar (170.51.254.172) 26.619 ms
5 142.250.165.154 (142.250.165.154) 23.396 ms 20.173 ms 26.488 ms
6 * * *
7 172.253.71.10 (172.253.71.10) 25.935 ms
142.251.79.174 (142.251.79.174) 32.482 ms
142.251.77.0 (142.251.77.0) 37.750 ms
8 142.251.239.157 (142.251.239.157) 23.592 ms
192.178.84.14 (192.178.84.14) 55.107 ms
192.178.84.66 (192.178.84.66) 23.025 ms
9 142.250.60.175 (142.250.60.175) 23.840 ms
192.178.85.145 (192.178.85.145) 45.320 ms
192.178.85.139 (192.178.85.139) 21.971 ms
10 eze10s08-in-f14.1e100.net (142.251.133.238) 23.693 ms 18.425 ms 28.598 ms