How to configure deployment of a Node CronJob which updates a MongoDB Atlas DB

Hey Team, I've been running a Node CronJob on the awesome Fly.io for a while now. Everything works like a charm, except… the health checks fail all the time and Fly tries to kill my app with a “SIGINT” (which I’m not listening to :wink: ). Needless to say, I don’t have any HTTP or TCP servers running - it’s just a CronJob which reads, analyses and updates data in my MongoDB hosted database. How would you recommend I get rid of the health check failures? Can I get rid of all the services in the configuration file (or will that mess up my connection to MongoDB)?


Is the MongoDB hosted elsewhere or are you running that as a separate app in Fly?
EDIT: Whoops, didn’t see MongoDB Atlas in the title.

If it’s just cron and bash stuff to work with your Mongo data, you could use GitHub - fly-apps/supercronic: Run periodic jobs on Fly with supercronic instead.

Health checks:
Removing them from your fly.toml will not stop them - they’ll happen anyway.
Ref: Failing Health Checks after Deploy - #4 by kurt

Maybe you can ‘fool it’ with a custom health check script, as shown here?

Thanks @FrequentFlyer ! The MongoDB is hosted at - so elsewhere. I will look into the links you shared too. Much appreciated.

Cheers, that supercronic thing should be all you need if you’re only working with Mongo data and there’s no actual requirement for Node (since you mentioned Node CronJob).
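For what it’s worth, with supercronic the schedule lives in a plain crontab file baked into the image. A hypothetical entry (the path is illustrative, assuming the compiled app lands in /app/build) running daily at 05:00 server time would look like:

```
# minute hour day-of-month month day-of-week  command
0 5 * * * node /app/build/index.js
```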

If you’re instead using the Node app for something else and also doing cron with something like node-cron, then fooling the health check will be required, I think.

Cool. My Node app does a tonne of data wrangling during the cronjob, so it seems fooling the health check is the route to try.
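If you do go the “fool it” route, one common trick is a throwaway HTTP listener whose only job is to keep the TCP/HTTP checks green while the real work happens in the cron job. A minimal sketch, assuming the checker just needs an open port and a 2xx response (the port matches the internal_port from the generated fly.toml):

```typescript
// Tiny dummy server to satisfy Fly's TCP/HTTP health checks.
// It does no real work; the cron job runs elsewhere in the process.
import http from 'node:http';

export function startHealthServer(port = 8080): http.Server {
  const server = http.createServer((_req, res) => {
    // Any 2xx response keeps an http_check green; a TCP check only
    // needs the port to accept connections at all.
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('ok');
  });
  server.listen(port);
  return server;
}
```

Whether this is worth doing (versus removing the checks entirely) is discussed further down the thread.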

Hey Martin,

I would not rely on the periodic script health checks. They don’t behave well in our current system.

Can you post your fly.toml for node-cron? If you remove the entire services block, no health checks will be run besides checking that the VM is running. Also, you can change the kill signal to whatever you need, e.g. kill_signal = "SIGTERM".

Thanks @jsierles! I’ve basically not touched it since flyctl launch created it for me. I’m a noob around here. Cheers.

app = "news-scraper-analyzer"

kill_signal = "SIGINT"
kill_timeout = 5
processes = []

[build]
  builder = "heroku/buildpacks:20"

[env]
  PORT = "8080"
  NODE_ENV = "production"

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 0
    timeout = "2s"

No worries! Removing the entire services block should work.

Wouldn’t TCP connection check still happen per Kurt’s comment there?

In that post, only the health checks were removed, leaving the services intact, which need some kind of health check to function in our system today.

Here, we’d be removing the services themselves, so no health checks.
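For reference, after dropping everything under [[services]], the fly.toml posted above would shrink to roughly this (a sketch; PORT is dropped here on the assumption that nothing listens on it, and kill_signal is switched to SIGTERM as suggested earlier):

```toml
app = "news-scraper-analyzer"

kill_signal = "SIGTERM"
kill_timeout = 5
processes = []

[build]
  builder = "heroku/buildpacks:20"

[env]
  NODE_ENV = "production"

[experimental]
  allowed_public_ports = []
  auto_rollback = true
```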

Ahhh, thank you!

EDIT: So your suggestion worked like a charm. No more health checks… But now the VM tries to kill my app every kill_timeout = X minutes. I could just ignore the SIGINT, of course, but that seems a bit like an antipattern?

Thanks @jsierles , I will try that. Any advice on where I could learn more about the kill_signal and kill_timeout settings and how they affect the deployment?

Check our docs for an explanation: App Configuration (fly.toml)
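In short: kill_signal is the signal Fly sends the process when it wants the VM to stop, and kill_timeout is how many seconds it waits for the process to exit before a hard kill. Handling that cleanly in a cron worker looks something like this sketch (the helper name and the cleanup list are hypothetical; in this thread the cleanups would be stopping the CronJob and closing the mongoose connection):

```typescript
// Hedged sketch: graceful shutdown for a long-running cron worker.
// Fly sends kill_signal (SIGINT by default, SIGTERM if configured),
// then waits kill_timeout seconds before hard-killing the process.
import process from 'node:process';

type Cleanup = () => void | Promise<void>;

export function registerShutdown(
  cleanups: Cleanup[],
  exit: (code: number) => void = process.exit
) {
  const handler = async (signal: string) => {
    console.log(`Received ${signal}, shutting down gracefully`);
    // e.g. [() => job.stop(), () => mongoose.connection.close()]
    for (const fn of cleanups) await fn();
    exit(0);
  };
  process.once('SIGINT', () => void handler('SIGINT'));
  process.once('SIGTERM', () => void handler('SIGTERM'));
  return handler; // returned so the flow can be exercised directly
}
```

Exiting promptly on the signal (rather than ignoring it) is what keeps kill_timeout from ever coming into play.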

kill_timeout should not be killing your VM. If that’s happening, it usually means something else is up. Could you post your package.json?

Cheers, @jsierles ! Strap in, we’re going down the rabbit hole.

The actual app I’m having issues with has the following package.json:

{
  "name": "news-scraper-analyzer",
  "version": "1.0.0",
  "description": "",
  "main": "src/index.ts",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "dev": "NODE_ENV=development nodemon src/index.ts",
    "build": "rm -rf build && tsc",
    "start": "NODE_ENV=production node build/index.js",
    "seed": "ts-node src/testData/seeder.ts"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "@types/cron": "^1.7.3",
    "@types/luxon": "^2.0.9",
    "@types/node": "^17.0.12",
    "ts-node": "^10.4.0",
    "typescript": "^4.5.5"
  },
  "dependencies": {
    "axios": "^0.25.0",
    "cron": "^1.8.2",
    "dotenv": "^14.3.0",
    "luxon": "^2.3.0",
    "mongoose": "^6.1.8"
  }
}

The app itself, in a nutshell, does this:

const start = async () => {
  // This should run every day at 05:00:00am CET.
  console.log(
    'Starting analyzer CronJob, scheduled to run every day at 5:00:00am CET'
  );
  console.log('Node environment is set to ' + ENV);

  const job = new CronJob(
    '0 0 5 * * *',
    async () => {
      await connectToDB();
      const todayCET = DateTime.now().setZone('Europe/Paris');
      const yesterdayStartCET = todayCET.minus({ days: 1 }).startOf('day');
      const headlines = await getHeadlines(yesterdayStartCET);
      const marketData = await getMarketData(yesterdayStartCET);
      const headlineAnalysis = createAnalysis(headlines, marketData);
      await saveAnalysis(headlineAnalysis);
    }
  );

  job.start();
};


What happens to it overnight is this: when I log in to the application using flyctl ssh console, I see that Fly has redeployed an older version of my app, which is causing the issues I ultimately experience in the front-end of my little stack.

So following your earlier advice, I should delete all services entries from the .toml - but before I try that, I’m wondering whether the kill_timeout would still cause the system to redeploy an earlier version of my app.

Sorry for a loooong e-mail. Thank you a bunch for any support.


Hi Martin,

When all health checks are removed (deleting services block), since there won’t be any failing checks, Fly won’t try to fix it by reverting to a previous version.

So then the kill_signal and kill_timeout will only come into the picture when the VM needs to be shut down for new deploys, VM restarts, etc.

Awesome. With that clarification, I should be ready to go. I owe you a beer.