I would like to run two different programs inside each machine: one program that updates the filesystem (an AI agent), and another that runs the Vite server and does HMR.
What would be the best way to approach this? I am looking at using something like Overmind to run both programs inside one machine, or is there a better solution for this? The scripts need to share a filesystem.
Hi,
Wrapping the processes you need in something like Overmind or supervisord is one approach, and it's probably fine if you're OK with doing that.
I sometimes just create a quick bash wrapper script that starts all the processes in daemon mode. This has the problem that if a process crashes, the wrapper keeps running and the machine won't shut down (as it usually does when the main process exits). There's some bash trickery you can do for this, or you can just use a supervisor like the ones you mentioned.
The wrapper / supervisor is called in your Dockerfile's CMD instead of your usual main process.
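As a sketch of the "bash trickery" mentioned above, here is a minimal wrapper that exits as soon as the first child process dies, so the machine can shut down instead of lingering with a dead server. The `sleep` commands are stand-ins for your real processes, not part of any actual setup:

```shell
#!/usr/bin/env bash
# Minimal wrapper sketch: start both processes in the background,
# then exit as soon as the FIRST one dies. The sleeps are stand-ins
# for real commands such as a dev server and an API server.

sleep 1 &    # stand-in for process A (e.g. a Vite dev server)
sleep 30 &   # stand-in for process B (e.g. a Deno server)

# wait -n (bash 4.3+) returns as soon as one background job exits.
wait -n
status=$?

# Kill any remaining children so nothing is left orphaned.
kill $(jobs -p) 2>/dev/null

echo "a child exited with status $status"
```

Because the script itself exits once a child dies, the machine's main process ends and Fly can stop the machine as usual.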
I did try to run a bash exec command in init as a one-liner, but it ended up only letting one of the two scripts run. I was not able to get it to work with "&" and "&&", though I did try, and I would prefer a way to do it without creating additional bash scripts or Procfiles. One of the scripts is a Deno server (which listens for POST requests on a port), and the other is a Vite script (running a server for a React project).
The command I tried to run on exec was: `git clone x@github.com/x/x.git /workspace/Template && cd /workspace/Template && bun install && bun run dev && deno run --allow-all Backend/server.ts && ls -lah && cat package.json && wait`
It works great, but only if I run either the `deno run` or the `bun run dev`, not both.
&& means "only run the second command if the first one succeeded". So chaining them like this:
git clone blah && cd blah && bun run dev
means: git clone, then cd but only if the git clone succeeded; then bun run dev but only if the cd succeeded.
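A quick way to see that short-circuit behaviour, using generic commands purely for illustration:

```shell
# "&&" only runs the right-hand command when the left-hand one
# exited with status 0 (success).
out=$( { false && echo "after false"; true && echo "after true"; } )
echo "$out"   # only "after true" is printed
```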
& means "run the command as a background process". I don't see you doing that here, but it could be something like `bun run dev & deno run ... & wait`.
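A tiny runnable sketch of that pattern, with short-lived subshells standing in for the two long-running servers:

```shell
# Background both commands with "&", then "wait" blocks until both
# children have exited. The subshells stand in for servers like
# "bun run dev" and "deno run --allow-all ...".
run_both() {
  (sleep 0.2; echo vite-done) &   # stand-in for the Vite server
  (sleep 0.1; echo deno-done) &   # stand-in for the Deno server
  wait                            # returns only after BOTH exit
}
out=$(run_both)
echo "$out"
```

Note that plain `wait` returns only after every child has exited; if you want to react as soon as one of them dies, `wait -n` is the variant to look at.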
If you want to combine && and & you need to be careful with your shell syntax; this guy explains it better than I can.
BTW none of this is Fly-specific, it's just bash magic
Regards,
So there is no limitation on running these two scripts, which each expose one port to the end user, like this? I am new here and still exploring the possibilities of what is possible with machines. If it's just a question of getting the bash script in the right order, I might continue exploring that way.
I mean each app should have something like c1.fly.dev and expose both public port 80 and public port 3000, accessible like this: c1.fly.dev:3000/ping.
You can definitely run 2 things that are exposed on 2 different ports. For each port your actual machine listens on, you'll need a new service defined with internal_port specified.
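As a sketch of what that might look like in fly.toml (port numbers and handlers here are illustrative; check the fly.toml reference for the exact schema):

```toml
# One [[services]] block per port the machine listens on.
[[services]]
  internal_port = 80        # process listening on :80 inside the machine
  protocol = "tcp"

  [[services.ports]]
    port = 80               # public port on c1.fly.dev
    handlers = ["http"]

[[services]]
  internal_port = 3000      # e.g. the Vite dev server
  protocol = "tcp"

  [[services.ports]]
    port = 3000             # public: c1.fly.dev:3000
    handlers = ["http"]
```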
The simplest way to get 2 processes running is to use & to background one or both, as mentioned above. Though if you want the machine to crash in the event that one of the backgrounded processes crashes, you'll need something more advanced.
I noticed your exec command above was a one-liner, but if you need complex logic at startup it might make your life easier to define an ENTRYPOINT in your Dockerfile that invokes a real bash script.
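For example, the tail of the Dockerfile might look like this (file names are illustrative):

```dockerfile
# Copy a startup script into the image and make it the entrypoint,
# instead of cramming everything into a one-liner CMD.
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```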
Thanks for your valuable input. That's what I did. I actually ended up doing the execution with init.exec, sent as an API call, to invoke entrypoint.sh, which starts both scripts in the background (like the example bash script on the website). I am now trying to figure out how to detect when the machine has been inactive and shut it down. The TOML looks like this: