@jerome - so we should trap any errors from the script_check if we don’t want to cause a general restart. That makes sense.
Hey guys, it looks like it has been a few months. Just checking in on this: what is the best way to run cron scripts today?
I have a new thread going here as this was “solved”:
Script checks are going away, so are not a good answer anymore for this. There’s nothing built-in yet, but you could use an external scheduler. Most runtimes have something that can run periodic jobs. Node has Bree and others.
Thanks for that resource, will definitely take a look at this, it looks perfect.
Any plans to have something a bit more native to Fly similar to Heroku?
Let me ask you this: how is Fly.io doing its internal billing with the Stripe API for metered billing?
I assume you run the VM usage calculation on a monthly basis and then create a metered subscription invoice; is this automated? If so, are you using billing thresholds internally? I would love to know more about how you use Fly internally to handle billing, as this pertains to a use case for us as well.
We do all our scheduling in process. Periodically triggering VMs is harder than you might think, because you run into issues where one takes too long, another starts, and things get weird.
The simplest thing to do is just run an app with an interval/timer that fires things off every so often.
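That interval/timer approach can be sketched in a few lines of Node. This is a minimal sketch, not Fly’s actual code; the guard flag addresses the problem mentioned above, where one run takes too long and the next one starts anyway:

```javascript
// Minimal in-process scheduler: fire `job` every `intervalMs`.
// The `running` flag is an overlap guard: if a run is still in
// flight when the next tick arrives, that tick is skipped rather
// than starting a second concurrent run.
function makeScheduler(job, intervalMs) {
  let running = false;
  const timer = setInterval(async () => {
    if (running) return; // previous run still active: skip this tick
    running = true;
    try {
      await job();
    } catch (err) {
      console.error('job failed:', err); // log, but keep scheduling
    } finally {
      running = false;
    }
  }, intervalMs);
  return () => clearInterval(timer); // call to stop scheduling
}
```

Skipping a tick (rather than queueing it) is the simple policy; a job that must never miss a window needs something more elaborate.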
Our billing is pushing Stripe way past its limits. Each organization gets a subscription with multiple metered products on it. Every hour, we compute usage for disk, memory, CPU, and bandwidth. We send those to Stripe as usage records and it handles the rest.
Stripe billing is not great for metered products. We’ll likely end up doing our own accounting and just create invoices from our code.
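The hourly flow described above can be sketched roughly like this. The product names and the `reportUsage` callback are illustrative assumptions, not Fly’s actual code; `reportUsage` stands in for a call like Stripe’s usage-record API:

```javascript
// Sum per-product usage for the hour being reported.
function aggregateHourlyUsage(samples) {
  const totals = {};
  for (const { product, quantity } of samples) {
    totals[product] = (totals[product] || 0) + quantity;
  }
  return totals;
}

// Send one usage record per metered product per hour.
// `reportUsage` would wrap the real billing API call.
async function reportHour(samples, reportUsage) {
  const totals = aggregateHourlyUsage(samples);
  const timestamp = Math.floor(Date.now() / 1000);
  for (const [product, quantity] of Object.entries(totals)) {
    await reportUsage({ product, quantity, timestamp });
  }
}
```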
Looking for the best way to pull off scheduled jobs similar to Heroku (Heroku Scheduler | Heroku Dev Center).
We have a Node runtime; ideally we would like to run a script from package.json every hour, for example.
When are script checks going away?
There is a heisenbug with script checks that causes our VM init to hang. They’re not going away any time soon, but we haven’t been able to track that bug down so we’ve stopped using them. If you have an app that relies on them and they haven’t caused issues, it’s safe to keep using them.
Thanks so much for the explanation, this is exactly what we are looking to achieve as well: we have a subscription for the monthly flat rate, and another subscription that we add for the metered use (transaction fees / revenue shares). We are planning on taking advantage of billing thresholds with this so that AP doesn’t get too high mid-month.
To be honest, spinning up a VM on a schedule, running a task, and then shutting down does seem simple to me as a customer haha. This is probably just because, back in the Rails world, I would just create different rake tasks and then use Heroku Scheduler to run each task on an hourly / daily / monthly basis, etc.
I would love to learn more on this subject to better understand the complexities of these processes.
Something to be said about using something like Bree is that it requires having machines running 24/7, possibly doing nothing 99% of the time. From a billing / usage standpoint this is sort of a waste.
This is true - but if your tasks don’t require many resources, a 24/7 small VM comes in at a negligible cost.
Another approach would be to run something like Bree in the same VM as your app. Our multiple processes guide gives you a few options to do that. I’m a fan of either bash (simple) or overmind which uses the familiar Procfile syntax.
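Following the multiple-processes approach above, a Procfile for overmind might look like this (file names are illustrative):

```procfile
web: node server.js
cron: node scheduler.js
```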
Gotcha, makes sense. I agree that cost is negligible, just thought I would bring this up as a concern.
We just spun up a new app and run the Bree process (on Fly this is $2/month).
I have some use cases where the batch job wants to run with higher privileges than normal. To illustrate, consider a service with a read-only connection to a data store, and a batch job that updates the data store (in reality, it’s less write access vs. more write access).
In another case, they’re even using different software stacks – the always-on VM is a webapp dictated by frameworks to be NodeJS, while the batch jobs are Deno or Go.
It would be great to schedule a “start this VM every X minutes” kind of a thing, where the VM exits when the batch is done.
If you’re worried about old school cron style overruns (the previous instance is still alive when a new one is started), well, you already need to handle that right with block devices! Use that solution. (Ideally in a way that doesn’t waste 1GB for no reason.)
Any updates on this? It seems like it was “a few months away” more than a year ago now.
This is something that would be great, like what Render has.
I won’t judge if someone uses “local” checks to accomplish this.
[[services]]: Healthchecks and private networks - #2 by kurt
[[services]]: Non-service health checks - #3 by jerome
The ambitious amongst us might want to experiment with Cloudflare’s Scheduled Durable Objects with Fly Machines FaaS.
The only reason I’d ask for this is knowing that the underlying orchestrator (Nomad) has built-in cron.
Meanwhile, I’m just using an image with `crond -f -d 0` as the command. Works just fine.
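For reference, a Dockerfile sketch of that crond setup, assuming a BusyBox-based (Alpine) image; the schedule and script path are illustrative:

```dockerfile
FROM node:18-alpine
COPY . /app
# BusyBox crond reads root's crontab from /etc/crontabs/root.
# Run the job at minute 0 of every hour (illustrative schedule).
RUN echo '0 * * * * cd /app && node job.js' > /etc/crontabs/root
# -f: stay in the foreground; -d 0: log everything to stderr.
CMD ["crond", "-f", "-d", "0"]
```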
We just released scheduled machines. Any and all feedback appreciated → New feature: Scheduled machines
Running yacron as a secondary process (with this caveat in mind) has been working well for me.
I wrote a guide for running Supercronic, a drop-in replacement for cron, in a Fly container at Crontab with Supercronic · Fly Docs. If you need specificity beyond Fly’s Scheduled Machines feature, you’ll want to go down this path, since Fly doesn’t have any immediate plans for full-blown cron scheduling of Machines.
I tried to follow the above guide but I get the following error:
`Error not enough volumes named app_name_data (1) to run 2 processes`
I have the following in my fly.toml file:
```toml
[mounts]
  source = "app_name_data"
  destination = "/data"

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["web"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

[processes]
  # The command below is used to launch a Rails server; be sure to
  # replace with the command you're using to launch your server.
  web = "bin/rails fly:server"
  cron = "supercronic /app/crontab"
```
I tried autoscaling, but this didn’t work. Any idea why this might be?
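One likely cause, assuming a `[mounts]` section without a `processes` key applies to every process group: both `web` and `cron` would then each require a volume named `app_name_data`. If your flyctl version supports the `processes` key on mounts, a sketch of scoping the volume to just the web group:

```toml
# Sketch: attach the volume to the web process group only, so the
# cron process doesn't also demand one. Alternatively, create a
# second volume (e.g. `fly volumes create app_name_data`) so each
# process group gets its own.
[mounts]
  source = "app_name_data"
  destination = "/data"
  processes = ["web"]
```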