Old parts of code still there after deploying

Hi - After deploying my Fly app, old parts of the code that no longer exist in my local project still seem to be running. My console.logs are updated to match whatever is in my local project, yet the app still crashes with an error that should no longer be possible - the error points to a line of code that isn't even there anymore. I also get a “Health check on port 8080 has failed”, though I'm not sure if that's relevant.

Hi!

What does the output of the following show?

fly status

You should be able to see similar information on the “Monitoring” page of your Fly.io dashboard.

Is it possible the deploy failed, and resulted in rolling back to the previous version of the app? This may explain why you’re seeing errors related to “old” code.

The health check may well be related, yes - failing health checks are a likely reason the deploy failed.

Hello - This is the output of fly status

App
  Name     = gptcord
  Owner    = personal
  Version  = 21
  Status   = running
  Hostname = gptcord.fly.dev
  Platform = nomad

Deployment Status
  ID          = f928aec1-e31a-963d-f1f4-c96eabd4d6a9
  Version     = v21
  Status      = failed
  Description = Failed due to unhealthy allocations - not rolling back to stable job version 21 as current job has same specification
  Instances   = 1 desired, 1 placed, 0 healthy, 1 unhealthy

Instances
ID              PROCESS VERSION REGION  DESIRED STATUS  HEALTH CHECKS           RESTARTS        CREATED
c18d7c21        app     16      sjc     run     running 1 total, 1 critical     5               22h37m ago

It might be that the deploy failed, but I find it odd that, instead of not deploying at all, it still seems to deploy parts of the new code.

I suspect it is deploying the new code (and I think you're seeing evidence of this in the logs).
But the health checks are then failing.
Requests then continue to be routed to the old deploy, which is why you're seeing “old” errors.

Can you post your fly.toml config - are the ports set up correctly for that health check to be reachable?

This is my fly.toml:

# fly.toml file generated for gptcord on 2023-03-14T13:14:38+01:00

app = "gptcord"
kill_signal = "SIGINT"
kill_timeout = 5
primary_region = "sjc"
processes = []

[env]
  PORT = "8080"

[experimental]
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []
  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    force_https = true
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 0
    timeout = "2s"

I’ve not touched this file myself.

It would be worth checking whether the app you are deploying is configured to listen on 0.0.0.0:8080.
Check it's not just binding to 127.0.0.1, for example, and that it's listening on the correct port.

That fly.toml routes external traffic for ports 80/443 to internal port 8080 on the app.

Alternatively - if your app doesn't actually expose any ports (so there's nothing to route traffic to and nothing to run health checks against), you can simply remove the [[services]] configuration entirely, including all the nested ports, tcp_checks, etc.

The default fly.toml assumes you are exposing a web application on http/https to get you up and running quickly. But this doesn’t necessarily make sense if you are not deploying a web app.
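For a worker-style app that serves no HTTP at all (a Discord bot, say), the trimmed-down file might look like this - a sketch based on your config above, keeping only the top-level settings:

```toml
# fly.toml for an app with no exposed ports (e.g. a background worker/bot).
# With no [[services]] block, the proxy routes nothing to the app and no
# TCP/HTTP health checks are run against the instance.
app = "gptcord"
kill_signal = "SIGINT"
kill_timeout = 5
primary_region = "sjc"
```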

Yep, that fixed it! Thanks a lot for the good help 🙂

