Deployed Node.js App Not Reachable

I’m having trouble accessing my deployed app. This is what the console shows following deployment:

WARNING The app is not listening on the expected address and will not be reachable by  fly-proxy.
You can fix this by configuring your app to listen on the following addresses:
  - 0.0.0.0:4000
Found these processes inside the machine with open listening sockets:
  PROCESS        | ADDRESSES                             
-----------------*---------------------------------------
  /.fly/hallpass | [fdaa:0:a8c2:a7b:16b:9748:b117:2]:22  
  Finished deploying

This is despite the fact that I’ve already bound the app to 0.0.0.0:4000.

The error message I get in the logs when I hit the deployed app’s URL is:
[error]could not make HTTP request to instance: connection closed before message completed

Not sure what else I could be doing wrong.

I see nothing obviously wrong with your fly.toml or Dockerfile.

You can configure the grace period in fly.toml; I believe the default is one second. What this message, combined with the list of ADDRESSES below it, means is that after the grace period expired your app was not listening on ANY address. In other words, your app hadn’t started yet by that point. But I suspect that’s not the problem.
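For reference, the grace period lives on the health-check entries in fly.toml. A minimal sketch, assuming a `[[services]]` layout; the values are illustrative, so check the current Fly docs for exact key names and defaults:

```toml
[[services]]
  internal_port = 4000
  protocol = "tcp"

  [[services.tcp_checks]]
    grace_period = "30s"   # time the app gets to start before checks count against it
    interval = "15s"
    timeout = "2s"
```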

This generally means that nothing is listening on the expected port, which is another way of saying that your app still hasn’t started.

Based on the available information, it seems likely that your application hit an exception that prevented it from reaching the point where you call listen, while some outstanding event or Promise is keeping the process from exiting.

Is there more information in the logs? If not, my normal approach to debugging situations like these is to insert console.log statements into the application to make sure that it gets to the point where the server is started.

@rubys thanks Sam!

Actually, looking at the logs, the app started fine and is listening okay.

I could curl -k https://localhost:4000/welcome after ssh’ing into the app, and that works fine.

However, curl http://localhost:4000/welcome doesn’t work. Since the https version works, the app seems to be up and listening after it’s deployed to Fly. But it looks like some process on your side could perhaps be altering the connection?

See the attached for the two cases.

If you connect externally to .fly.dev via https, that talks to our proxy. That proxy will attempt to connect via http to your app.

Can you change your app to expect http when run in production?


I found the issue.

When creating the server instance, we use the https module, which lets us create it like so:

const httpsServer = https.createServer(
  {
    cert: fs.readFileSync(path.resolve(__dirname, "../cert.pem")),
    key: fs.readFileSync(path.resolve(__dirname, "../key.pem")),
  },
  app
);

That’s why fly-proxy was failing: it tried to connect via http, but we only serve https by default, as explained above.

To test this, I switched to the http module to create the server instance, and that works.

Could you please assist with a few open questions:

  • How can we use the https module so we can secure the connection, as shown in my example above?
  • In docker-compose, we were able to pass --scale=4, which created 4 instances of the same API with different addresses. Is there any way we can achieve something similar, or do we have to deploy individual apps with the same codebase?
  • Rather than accessing via *.fly.dev, do we have an internal address we can use inside nginx for reverse proxying, so that if we deploy four instances of the same app, we can have something like this:
# ...other nginx config
  upstream nodes
  {
    hash $remote_addr consistent;
    # server api_1:4000; // internal address of fly api_1
    # server api_2:4000; // internal address of fly api_2
    # server api_3:4000;
    server api_4:4000;
  }
server
{
  listen 443;
  server_name domain.com;
  if ($host = www.$server_name)
  {
    rewrite ^(.*) https://$server_name$request_uri? permanent;
  }
  # if ($host = http://www.$server_name) {
  #   rewrite ^(.*) https://$server_name$request_uri? permanent;
  # }

  client_max_body_size 100M; #100mb

  location ~ ^/login/.+
  {
    proxy_pass http://nodes;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_redirect off;
  }

  location /graphql
  {
    proxy_pass http://nodes;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
  }
}
  • The plan is to have only one dedicated IP address mapped to the nginx proxy, which acts as the reverse proxy, with all the other apps routed via their internal addresses.
  • We have Redis (from Upstash directly) and a Postgres cluster; how do you recommend we use these with the 4 app instances for low latency?

I know that’s quite a bit, but your guidance will help greatly.

It’s a big relief considering the small progress 🙂

Assuming you want IPv4 (and pretty much everybody does), you will need a dedicated IPv4 address: Public Network Services · Fly Docs. Use fly ips list to see what you have today.

Next take a look at your fly.toml. What you likely have there is an http_service section. As your needs are different, you will need to replace this with a list of `[[services]]` sections (Fly Launch configuration (fly.toml) · Fly Docs), even though there will be only one service. Make sure to omit the `tls` handler, as you will be handling TLS yourself.

You can do that with fly scale count, with fly machine clone, or with the API. Different machines can be in different regions and have different memory allocations.
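Hedged examples of those commands (the machine ID and regions are placeholders; check fly help for the current flags):

```shell
# Run four machines of the current app
fly scale count 4

# Or clone an existing machine into another region
fly machine clone 3d8d9e3exxx --region ams

# Inspect the result
fly machine list
```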

I use nginx with proxy_pass myself; it works just fine. Take a moment to review how we set up DNS to make finding machines on the internal network easy: Private Networking · Fly Docs

You are getting outside of my area of expertise, but there are plenty of people here who can help you; the answer will depend on where you locate your machines. If, for example, you put them all in one region, then placing Redis and Postgres in that same region will serve you fine.

Nice one, thanks!

…you will need to replace this with a list of `[[services]]` sections (Fly Launch configuration (fly.toml) · Fly Docs), even though there will be only one service. Make sure to omit the `tls` handler as you will be handling this.

I’ve decided to use the http module and let fly.io handle TLS termination, considering they’ll provide more robust support than the self-signed cert/key we’re using with the https module.

  • Postgres-cluster related question:

As regards multi-region database instances, do you know if anything has changed as to determining when to use a read replica versus the write primary? There’s an ongoing thread here (Multi region database guide - #37 by greg) and we need to do something similar.

Thanks man!

At this point, you are outside my area of expertise. I suggest creating a new thread with a subject line that gets noticed by the right people.

I’m glad to have helped you get this far, and good luck!


Understood! @rubys

Thought you might have some thoughts on this; it’s okay otherwise, I’ll email support or create another thread.

Whilst sorting out the nginx reverse proxy, I’ve configured it like so:

# ...other nginx config
upstream nodes
{
  hash $remote_addr consistent;
  server appname.internal:port;  # e.g. sample-app.internal:8080
}

# location block

location /
{
  proxy_pass http://nodes;
  # ...additional config
}

The above doesn’t work. It was returning a 404:

[error] 281#281: *2 "/etc/nginx/html/index.html" is not found (2: No such file or directory), client: 172.x.x.x, server: some_domain.com, request: "GET / HTTP/1.1", host: "nginxproxy.fly.dev"
2023-08-28T20:42:06Z app[3d8d9e3exxx] sjc [

Excerpts from my nginx config for my showcase app (see Around the World With SQLite3 and Rsync · The Ruby Dispatch):

server {
  listen 3000;
  listen [::]:3000;
  server_name smooth.fly.dev;

. . .

  location /showcase/2023/chicago {
    proxy_set_header X-Forwarded-Host $host;
    proxy_pass http://ord.smooth.internal:3000/showcase/2023/chicago;
  }

. . .

}

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.