502 Build Error

When building I get: Error server returned a non-200 status code: 502

I keep getting this error. I seriously can't build right now.

Hi @heppcat,

Sorry about that.

Can you help us by providing a traceroute to api.fly.io? (And if you have IPv6, can you get us one for both?)

I’m getting it again too, for a remote build using CI. Deploys fail.

Build complete - in_progress                                                                                                                              
Error server returned a non-200 status code: 502

Oh. Local builds don’t work either.

--> Done Pushing Image
==> Optimizing Image
Error server returned a non-200 status code: 502

The traceroute does have a lot of * entries in it.

5 host213-121-192-58.ukcore.bt.net (213.121.192.58) 21.248 ms
core3-hu0-14-0-1.faraday.ukcore.bt.net (195.99.127.42) 20.719 ms
core3-hu0-1-0-1.faraday.ukcore.bt.net (195.99.127.34) 22.019 ms
6 62.6.201.144 (62.6.201.144) 24.081 ms
166-49-209-132.gia.bt.net (166.49.209.132) 21.126 ms
62.6.201.144 (62.6.201.144) 22.111 ms
7 166-49-209-132.gia.bt.net (166.49.209.132) 21.321 ms 21.074 ms 85.251 ms
8 212.119.4.140 (212.119.4.140) 22.331 ms 22.271 ms 23.674 ms
9 ae-7.r20.londen12.uk.bb.gin.ntt.net (129.250.4.140) 25.314 ms
ae-0.a01.londen12.uk.bb.gin.ntt.net (129.250.2.33) 21.730 ms 22.719 ms
10 ae-0.a01.londen12.uk.bb.gin.ntt.net (129.250.2.33) 21.429 ms * 24.258 ms
11 * * *
12 * * *
13 * * *
14 * * *
15 * * *
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 * * *
23 * * *
24 * * *
25 * * *
26 * * *
27 * * *
28 * * *
29 * * *
30 * * *

I think this may have cleared up now. We've since set up monitoring that hits every layer of our API from a bunch of different regions. Hopefully, if it becomes an issue again, we'll catch it.

I did a deploy, so the 502 error has gone away, at least temporarily.

There was some weirdness, though. The app works fine locally but failed during the deploy. I retried without changing anything and it worked. Hmm …

{"TS":"2020-09-15T15:43:59Z","Source":"deploy","Status":"info","Message":"v29 is being deployed"}

{"TS":"2020-09-15T15:44:01Z","Source":"deploy","Status":"info","Message":"356cf465: lhr pending"}                                                                

{"TS":"2020-09-15T15:44:01Z","Source":"deploy","Status":"detail","Message":"v29 failed - Failed due to unhealthy allocations - rolling back to job version 28\n"}

{"TS":"2020-09-15T15:44:03Z","Source":"deploy","Status":"info","Message":"v30 is being deployed"}                                                                

{"TS":"2020-09-15T15:44:13Z","Source":"deploy","Status":"done","Message":"v30 deployed successfully\n"}                                                          

Reading environment variable exporting file contents.

{"TS":"2020-09-15T15:44:15Z","Source":"deploy","Status":"error","Message":"v29 failed - Failed due to unhealthy allocations - rolling back to job version 28 and deploying as v30 \n"}

Usually when a deploy fails and the next one succeeds, it means either the process is crashing while trying to connect to an external resource, or the health checks are taking too long to start passing.

You can probably see what happened from the logs. If you run flyctl status --all you'll see the failed VMs, then flyctl logs -i <id> will get you the logs for a specific one.

Ok.

I tried that, but no logs were returned for the failed allocation when I used the -i <id> flag.

It was an hour ago now, so maybe the log tail only goes back an hour. Ah well.

This might be an issue on our end, we’re looking into it.

@greg There's supposed to be a log of failed VMs after a deploy, but that might be getting omitted if you're using flyctl's JSON output. Are you running it with the --json flag?

I can’t remember which command I tried …

Next time I’ll check.

No worries. We're working on some fixes that'll show unhealthy VMs in more cases.