Working with directories during deployment on Fly.io

Hello, I have a question. In my root directory I have the main launch file main.py and a separate folder with the bot, where the handlers, etc. are stored.

/
  /tgbot
  /main.py
  /fly.toml
  /Procfile
  /requirements.txt

I just need to deploy and run this thing, but nothing works. What do you advise? I would be very grateful for your answer.

Write a Dockerfile that packages your app for deployment.


Can you try fly launch on the directory? It will write a Dockerfile for you.

Update: Just learned from @rubys that fly launch uses a buildpack instead of a Dockerfile if the project is a non-Django Python app.

So, how did you write the fly.toml you have? Was it generated by fly launch? It would be helpful if you copied and pasted the errors you got.

I would check out this page, near the bottom.

It walks through the steps to get it deployed. This does use a buildpack. If you want a docker image, you can create a Dockerfile next to the fly.toml and it should be picked up. You may need to remove the buildpack from the build step and point to the Dockerfile.
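As a sketch of that last step, pointing fly.toml at a Dockerfile (assuming the default filename Dockerfile) might look something like this:

```toml
# Replace any existing builder/buildpacks entry in [build] with this,
# so the deploy uses your Dockerfile instead of a buildpack.
[build]
  dockerfile = "Dockerfile"
```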

You can also delete the fly.toml and run fly launch again to see if it generates it without the buildpack, if that is the issue.

I had no issues with buildpacks when I used them in the past, but I typically use Docker now.

A sample Dockerfile might look like this (it installs requirements.txt first so that layer is cached between builds):

FROM python:3.11-slim

WORKDIR /usr/src/app

# Install dependencies before copying the app code,
# so dependency layers are cached when only the code changes
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY main.py ./
COPY tgbot ./tgbot

CMD [ "python", "./main.py" ]

I don’t know what is in the /Procfile.

Procfiles are supported in buildpacks, I think, but that may be V1 and not Machines? Someone from Fly would need to answer.

But if they are processes and different commands, you can use the process group setup.

[processes]
  web = "bundle exec rails server -b [::] -p 8080"
  worker = "bundle exec sidekiqswarm"

In your fly.toml, you can add a process entry for each one and define its CMD. That CMD will replace the CMD in the Dockerfile, or be appended to the ENTRYPOINT.
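For the original poster's Python bot, a minimal sketch might be (the process name bot is arbitrary, and this assumes main.py is the entry point as shown in their directory listing):

```toml
[processes]
  bot = "python main.py"
```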

Every process will become a separate VM that has its own life cycle (auto start, scale, shutdown, etc.) and can be scaled horizontally and vertically separately.

The only downside to this approach is that all VMs within the app share the same environment variables, and the Dockerfile may be more complex because it has to support all the processes, even if each VM is only running one.

For example, I launch a Node container to run a compiled Go app, because the backend database GUI is in Node. So the image for the Go app is larger, and the Go app is exposed to all the environment variables for the Node app.

Every process group needs a different port, because the proxy knows about the app but not the process groups. So web might be 8080 and worker might get exposed on 3000. Each needs at least an internal port. You can communicate with process groups using internal networking:
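As a sketch of that, assuming process groups named web and worker as above, the fly.toml service blocks might look like:

```toml
# One service block per process group, each with its own internal port.
[[services]]
  processes = ["web"]
  internal_port = 8080
  protocol = "tcp"

[[services]]
  processes = ["worker"]
  internal_port = 3000
  protocol = "tcp"
```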

<process_group>.process.<appname>.internal
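To illustrate the naming scheme, here is a tiny helper that builds that hostname (the app name my-bot is hypothetical):

```python
def internal_host(process_group: str, app_name: str) -> str:
    """Build the Fly internal-DNS hostname for a process group."""
    return f"{process_group}.process.{app_name}.internal"

# Hypothetical example: reach the "worker" group of an app named "my-bot"
print(internal_host("worker", "my-bot"))  # worker.process.my-bot.internal
```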

See more details
