Any issues with the proxy?

There's a problem with my deployed app, and it seems to be related to the proxy. Here is the error message; can anyone point me in the right direction?

I have tried:

  • destroying the machines
  • redeploying
  • restarting the app

2024-01-11T07:26:36.987 proxy[xx] lhr [error] could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: Connection reset by peer (os error 104)

2024-01-11T07:26:37.154 proxy[xx] sin [error] could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: end of file before message length reached

2024-01-11T07:26:37.154 app[xx] sin [info] {"level":"error","error":"http: invalid Read on closed Body","time":"2024-01-11T07:26:37Z","message":"Failed to read request body"}

Hi @upjinjie

This error indicates that the client closed the connection before it finished sending the request.

The chain of events seems to be:

  • a proxy in LHR (on the edge) accepted a request from the client and forwarded it to a proxy in SIN
  • the proxy in SIN started forwarding it to your app
  • the client terminated the connection before sending the request in full
  • the proxy in LHR detected that (Connection reset by peer (os error 104)) and closed the request to the proxy in SIN
  • the proxy in SIN closed the request to your app, which is why you got http: invalid Read on closed Body

Are you sure the client is sending the full request body with each request?
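
As a rough illustration (not your code; the hostname and path are placeholders), this is the kind of client behaviour that produces exactly those proxy log lines: the client declares a Content-Length, sends only part of the body, and then drops the socket.

```js
// Hypothetical Node.js client reproducing the failure mode described above:
// it announces a Content-Length, writes only part of the body, then kills the
// connection before the rest is sent.
const http = require('http');

const body = Buffer.alloc(4096, 'x'); // some request body

const req = http.request({
  host: 'example-app.fly.dev', // placeholder hostname, not your app
  path: '/upload',             // placeholder path
  method: 'POST',
  headers: { 'Content-Length': body.length },
});

req.on('error', () => { /* socket errors are expected in this demo */ });

req.write(body.subarray(0, 1024));    // only the first 1 KiB goes out...
setTimeout(() => req.destroy(), 100); // ...then the client resets the connection
```

If a client ever destroys or aborts a request mid-upload (timeouts, page navigation, cancelled fetches), the proxy will report it the same way.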

Over the last few days I have also suddenly been dealing with thousands of "request aborted" errors on my Node.js server.

I am only starting to investigate this, but since I haven't made any relevant changes on my side, I suspect the Fly proxy is at fault. I have also already notified support.

I have received these errors in the past, but only a few. I also never saw them while I was still on Heroku; they first appeared when I moved to Fly last autumn.

From the Express docs:
request aborted
This error will occur when the request is aborted by the client before reading the body has finished. The received property will be set to the number of bytes received before the request was aborted and the expected property is set to the number of expected bytes. The status property is set to 400 and type property is set to 'request.aborted'.

BadRequestError: request aborted
at IncomingMessage.onAborted (/usr/app/server/node_modules/raw-body/index.js:238:10)
at IncomingMessage.emit (node:events:514:28)
at IncomingMessage.emit (node:domain:489:12)
at IncomingMessage._destroy (node:_http_incoming:224:10)
at _destroy (node:internal/streams/destroy:109:10)
at IncomingMessage.destroy (node:internal/streams/destroy:71:5)
at abortIncoming (node:_http_server:766:9)
at socketOnClose (node:_http_server:760:3)
at Socket.emit (node:events:526:35)
at Socket.emit (node:domain:489:12) {
code: 'ECONNABORTED',
expected: 2878,
length: 2878,
received: 0,
type: 'request.aborted'
}
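
For now I'm thinking about catching these in an error-handling middleware so client aborts stop flooding my real error logs. Roughly something like this (a sketch, not my production code):

```js
// Sketch of an Express error handler that separates client aborts from real
// server errors. `app` is the usual Express instance; logging is simplified.
app.use((err, req, res, next) => {
  if (err.type === 'request.aborted' || err.code === 'ECONNABORTED') {
    console.warn(
      `request aborted by client: received ${err.received} of ${err.expected} bytes`
    );
    if (!res.headersSent) res.status(400).end(); // usually nobody is listening anymore
    return;
  }
  next(err); // everything else still goes through the normal error handling
});
```

That only cleans up the logs, of course; it doesn't explain why the aborts suddenly spiked.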

And in my React app, users are hitting quite a few 520 status errors. Those are only logged after retried requests fail, so it does seem to actually affect my users.

I am using Cloudflare; here is some info from their side regarding 520s: https://community.cloudflare.com/t/community-tip-fixing-error-520-web-server-is-returning-an-unknown-error/44205

We're having similar issues with Cloudflare proxying that we didn't have a few weeks ago.

Ruby application

reference-structuring--v2--production could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: Connection reset by peer (os error 104)
reference-structuring--v2--production could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: end of file before message length reached

NodeJS application

pergamon--v2--production could not complete HTTP request to instance: error from user's HttpBody stream: error reading a body from connection: stream error received: unexpected internal error encountered

These seem to have started quite recently, and we haven't changed any settings in Cloudflare. (EDIT: Cloudflare seems to be a red herring; these issues still happen even with Cloudflare proxying disabled.)


The graphs posted by bruno-scholarcy line up very well with the occurrences of those BadRequest errors on my side.

Can you share traceroutes to your apps, please? We’ve had some weird routing issues recently that I suspect might be the reason for this.

@pavel here you go (I’m in South America, our apps are in lhr/cdg):

### reference-structuring--v2--production
traceroute to reference-structuring--v2--production.fly.dev (66.241.125.195), 64 hops max, 52 byte packets
 1  192.168.100.1 (192.168.100.1)  28.666 ms  14.430 ms  5.441 ms
 2  100.71.48.2 (100.71.48.2)  9.255 ms  29.102 ms  21.457 ms
 3  10.0.3.228 (10.0.3.228)  8.166 ms *  146.755 ms
 4  be5-2.c1900-br-05.claro.net.ar (170.51.254.172)  27.550 ms  56.729 ms
    be5-2.cf223-br-05.claro.net.ar (170.51.254.176)  25.697 ms
 5  ae7.0.edge2.eze2.as7195.net (200.25.50.230)  24.053 ms  56.564 ms  25.677 ms
 6  ae814.0.edge1.eze1.as7195.net (200.25.51.69)  22.441 ms  30.571 ms
    195.22.219.21 (195.22.219.21)  57.853 ms
 7  * * *
 8  * * *
 9  66.219.163.148.ptr.anycast.net (148.163.219.66)  63.625 ms * *
10  * * *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *
31  * * *
32  * * *
33  * * *
34  * * *
35  * * *
36  * * *
37  * * *
38  * * *
39  * * *
40  * * *
41  * * *
42  * * *
43  * * *
44  * * *
45  * * *
46  * * *
47  * * *
48  * * *
49  * * *
50  * * *
51  * * *
52  * * *
53  * * *
54  * * *
55  * * *
56  * * *
57  * * *
58  * * *
59  * * *
60  * * *
61  * * *
62  * * *
63  * * *
64  * * *
### pergamon--v2--production
traceroute to pergamon--v2--production.fly.dev (66.241.124.221), 64 hops max, 52 byte packets
 1  192.168.100.1 (192.168.100.1)  6.587 ms  8.183 ms *
 2  100.71.48.2 (100.71.48.2)  25.579 ms  13.222 ms  12.786 ms
 3  10.0.3.228 (10.0.3.228)  11.429 ms *  11.164 ms
 4  be5-2.cf223-br-05.claro.net.ar (170.51.254.176)  23.047 ms  25.402 ms
    be5-2.c1900-br-05.claro.net.ar (170.51.254.172)  28.838 ms
 5  195.22.220.44 (195.22.220.44)  23.094 ms
    ae7.0.edge2.eze2.as7195.net (200.25.50.230)  20.745 ms  20.338 ms
 6  ae814.0.edge1.eze1.as7195.net (200.25.51.69)  31.131 ms  17.381 ms
    195.22.219.21 (195.22.219.21)  55.890 ms
 7  * * *
 8  * * *
 9  66.219.163.148.ptr.anycast.net (148.163.219.66)  125.680 ms * *
10  * * *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *
31  * * *
32  * * *
33  * * *
34  * * *
35  * * *
36  * * *
37  * * *
38  * * *
39  * * *
40  * * *
41  * * *
42  * * *
43  * * *
44  * * *
45  * * *
46  * * *
47  * * *
48  * * *
49  * * *
50  * * *
51  * * *
52  * * *
53  * * *
54  * * *
55  * * *
56  * * *
57  * * *
58  * * *
59  * * *
60  * * *
61  * * *
62  * * *
63  * * *
64  * * *

For comparison:

### google.com
traceroute to google.com (142.251.133.238), 64 hops max, 52 byte packets
 1  192.168.100.1 (192.168.100.1)  10.024 ms  9.935 ms  9.229 ms
 2  100.71.48.2 (100.71.48.2)  15.118 ms  41.227 ms  11.907 ms
 3  10.0.3.228 (10.0.3.228)  10.304 ms *  13.111 ms
 4  be5-2.cf223-br-05.claro.net.ar (170.51.254.176)  21.115 ms  27.233 ms
    be5-2.c1900-br-05.claro.net.ar (170.51.254.172)  26.619 ms
 5  142.250.165.154 (142.250.165.154)  23.396 ms  20.173 ms  26.488 ms
 6  * * *
 7  172.253.71.10 (172.253.71.10)  25.935 ms
    142.251.79.174 (142.251.79.174)  32.482 ms
    142.251.77.0 (142.251.77.0)  37.750 ms
 8  142.251.239.157 (142.251.239.157)  23.592 ms
    192.178.84.14 (192.178.84.14)  55.107 ms
    192.178.84.66 (192.178.84.66)  23.025 ms
 9  142.250.60.175 (142.250.60.175)  23.840 ms
    192.178.85.145 (192.178.85.145)  45.320 ms
    192.178.85.139 (192.178.85.139)  21.971 ms
10  eze10s08-in-f14.1e100.net (142.251.133.238)  23.693 ms  18.425 ms  28.598 ms

@pavel

On a positive side note: no occurrences of this error in the last 3 hours.

Traceroutes:

traceroute to app-production.fly.dev (66.241.124.223), 64 hops max, 52 byte packets
 1  192.168.0.1 (192.168.0.1)  5.333 ms  5.018 ms  4.860 ms
 2  at-vie03c-rt01.as8412.net (217.25.120.4)  14.140 ms  17.597 ms  13.869 ms
 3  at-vie03c-rc01.as8412.net (217.25.122.252)  15.032 ms  15.851 ms  14.015 ms
 4  80.157.204.97 (80.157.204.97)  16.039 ms
    80.157.205.57 (80.157.205.57)  17.471 ms
    80.157.204.97 (80.157.204.97)  15.012 ms
 5  vie-sb5-i.vie.at.net.dtag.de (217.239.41.86)  16.325 ms
    vie-sb5-i.vie.at.net.dtag.de (217.239.55.145)  15.493 ms
    vie-sb5-i.vie.at.net.dtag.de (217.239.41.86)  16.451 ms
 6  ce-0-6-0-3.r00.vienat02.at.bb.gin.ntt.net (129.250.66.65)  15.093 ms  17.335 ms  16.340 ms
 7  * ae-3.r20.vienat02.at.bb.gin.ntt.net (129.250.7.18)  15.619 ms  13.931 ms
 8  ae-12.r20.amstnl07.nl.bb.gin.ntt.net (129.250.7.29)  41.342 ms  43.356 ms  47.491 ms
 9  ae-1.a00.amstnl08.nl.bb.gin.ntt.net (129.250.2.11)  42.987 ms  45.226 ms  45.075 ms
10  * * *
11  * * *
12  * * *
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *

Yeah, I believe the routing issue that affected your apps has been fixed.
