I’m having trouble accessing my deployed app. This is what the console shows following deployment:
WARNING The app is not listening on the expected address and will not be reachable by fly-proxy.
You can fix this by configuring your app to listen on the following addresses:
- 0.0.0.0:4000
Found these processes inside the machine with open listening sockets:
PROCESS | ADDRESSES
-----------------*---------------------------------------
/.fly/hallpass | [fdaa:0:a8c2:a7b:16b:9748:b117:2]:22
Finished deploying
This is despite the fact that I’ve already bound the app to 0.0.0.0:4000.
The error message I get in the logs when I hit the deployed app at its URL is: [error] could not make HTTP request to instance: connection closed before message completed
I see nothing obviously wrong with your fly.toml or Dockerfile.
You can configure the grace period in fly.toml; I believe the default is one second. What this message, combined with the list of ADDRESSES below it, means is that after the grace period expired your app was not listening on ANY address. So all this tells us is that your app hadn’t started yet at that point. But I suspect that’s not the real problem.
This generally means that nothing is listening on the expected port, which is another way of saying that your app still hasn’t started yet.
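For reference, the grace period lives alongside the health checks in fly.toml. A minimal sketch, assuming an http_service section on internal port 4000 (values are illustrative; see the Fly Launch configuration docs for the exact keys):

```toml
[http_service]
  internal_port = 4000
  force_https = true

  # grace_period: how long to wait after a machine starts before a
  # "not listening yet" check counts as a failure (the default is short)
  [[http_service.checks]]
    grace_period = "10s"
    interval = "15s"
    timeout = "2s"
    method = "GET"
    path = "/"        # illustrative; point this at a real health endpoint
```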
Based on the available information, it seems likely that your application hit an exception that prevented it from reaching the point where you call listen, while some outstanding event or Promise is keeping the process from exiting.
Is there more information in the logs? If not, my normal approach to debugging situations like these is to insert console.log statements into the application to make sure that it gets to the point where the server is started.
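For example, a rough sketch of the kind of logging I mean (not your actual code, just the shape):

```js
// log before and after listen() to see how far startup actually gets
const http = require('http');

console.log('creating server...');
const server = http.createServer((req, res) => res.end('ok'));

console.log('calling listen...');
server.listen(4000, '0.0.0.0', () => {
  console.log('listening on', server.address());
});

// surface anything that would otherwise fail silently before listen()
process.on('unhandledRejection', (err) => {
  console.error('unhandled rejection:', err);
});
```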
Actually, looking at the logs, the app started fine and is listening okay.
I could curl -k https://localhost:4000/welcome after ssh’ing into the app and that works fine.
However, curl http://localhost:4000/welcome doesn’t work. Since the https version works, the app is clearly available and listening after it’s deployed to Fly, but it looks like some process on your side could perhaps be altering the connection?
That’s why fly-proxy was failing: it tried to connect via HTTP, but our app only serves HTTPS by default, as explained above.
To test this, I switched to the http module to create the server instance, and that works.
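Roughly what that looks like (a simplified sketch, not the exact application code; the /welcome route and port 4000 are from my setup above):

```js
// plain HTTP server bound to 0.0.0.0:4000 so fly-proxy can reach it directly
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/welcome') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ message: 'welcome' }));
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(4000, '0.0.0.0', () => {
  console.log('listening on 0.0.0.0:4000');
});
```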
–
Could you please assist with a few outstanding questions:
1. How can we use the https module so we can secure the connection, as shown in my example above?
2. In docker-compose we were able to pass --scale=4, which created 4 instances of the same API with different addresses. Is there any way we can achieve something similar, or do we have to deploy individual apps with the same codebase?
3. Rather than accessing apps via *.fly.dev, do we have an internal address we can use inside nginx for the reverse proxy, so that if we deploy four instances of the same app we can have something like this: only one dedicated IP address mapped to the nginx proxy (the reverse proxy), with all the other apps routed via their internal addresses?
4. We have Redis (from Upstash directly) and a Postgres cluster; how do you recommend we use these with the 4 app instances for low latency?
I know that’s quite a bit, but your guidance will help greatly.
Assuming you want IPv4 (and pretty much everybody does), you will need a dedicated IPv4 address: Public Network Services · Fly Docs. Use fly ips list to see what you have today.
Next, take a look at your fly.toml. What you likely have there is an http_service section. As your needs are different, you will need to replace it with a list of `[[services]]` sections (Fly Launch configuration (fly.toml) · Fly Docs), even though there will be only one service. Make sure to omit the `tls` handler, as you will be handling TLS yourself.
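A sketch of what I mean (names and values are illustrative; check the fly.toml reference for the full set of options):

```toml
# Replaces [http_service]. No "tls" (or "http") handler on port 443, so the
# connection is passed through raw and the app terminates TLS itself.
[[services]]
  protocol = "tcp"
  internal_port = 4000

  [[services.ports]]
    port = 443
    # no handlers listed: raw TCP passthrough to the app's https server

  [[services.tcp_checks]]
    grace_period = "5s"
    interval = "15s"
    timeout = "2s"
```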
You can do that with fly scale count, with fly machine clone, or with the API. Different machines can be in different regions and have different memory allocations.
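For example (the machine ID and region here are placeholders):

```sh
# run 4 machines of this app
fly scale count 4

# or clone a specific machine, optionally into another region
fly machine clone <machine-id> --region iad
```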
I use nginx with proxy_pass myself; it works just fine. Take a moment to review how we set up DNS to make finding machines on the internal network easy: Private Networking · Fly Docs
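A rough sketch of that pattern, assuming the API app is named my-api (hypothetical) and listens on port 4000; fdaa::3 is the internal DNS resolver described in the Private Networking docs:

```nginx
events {}

http {
  # resolve *.internal names via Fly's internal DNS
  resolver [fdaa::3] valid=5s;

  server {
    listen 8080;

    location / {
      # using a variable makes nginx re-resolve the name, picking up new machines
      set $api http://my-api.internal:4000;
      proxy_pass $api;
      proxy_set_header Host $host;
    }
  }
}
```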
You are getting outside my area of expertise, but there are plenty of people here who can help you; the answer will depend on where you locate your machines. If, for example, you put all of them in one region, then placing Redis and Postgres in that same region will keep latency low.
…you will need to replace it with a list of `[[services]]` sections (Fly Launch configuration (fly.toml) · Fly Docs), even though there will be only one service. Make sure to omit the `tls` handler, as you will be handling TLS yourself.
I’ve decided to use the http module and let Fly.io handle TLS termination, since that will be more robust than the self-signed cert/key setup we’re currently using with the https module.
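Concretely, something like this in fly.toml, letting Fly’s edge terminate TLS and forward plain HTTP to port 4000 (a sketch; health checks omitted):

```toml
[[services]]
  protocol = "tcp"
  internal_port = 4000

  [[services.ports]]
    port = 443
    handlers = ["tls", "http"]   # Fly terminates TLS at the edge

  [[services.ports]]
    port = 80
    handlers = ["http"]
    force_https = true           # redirect plain HTTP to HTTPS
```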
Postgres-cluster related question:
Regarding multi-region database instances, do you know if anything has changed in how to determine when to use a read replica versus the write (primary) instance? There’s an ongoing thread here, Multi region database guide - #37 by greg, and we need to do something similar.
2023-08-28T20:42:06Z app[3d8d9e3exxx] sjc [error] 281#281: *2 "/etc/nginx/html/index.html" is not found (2: No such file or directory), client: 172.x.x.x, server: some_domain.com, request: "GET / HTTP/1.1", host: "nginxproxy.fly.dev"