What is an instance and how do I create multiple instances?

Hi all.
I’m new to fly.io, and a few other things. After some discussion on twitter with a staffer, I have some questions.

In the discussion the staffer mentions an “instance”. How is this defined within fly.io?
How do I create and run multiple instances?
Is this different from a process?
Is it related to docker?

I’m also new to docker, so I’m not familiar with any of the terminology there either.

I managed to get set up with an example project, running node.

Hobby project concept:
I’m creating a bot to enable people to play a game via twitter.
Using a cron based on twitter rate limit, fetch new mentions and process.
Processing tweets creates jobs on cloudflare workers.
Cloudflare workers must run fast, and so will mostly make requests to the fly.io server.
Server will run worker task, accessing database, and potentially creating more jobs on the worker system.

I had planned to use PM2 to manage the cron (using a cron-style restart config), but the staffer suggested I should use two “instances”, as the cron and the http server don’t interact.

(I’m aware I could use another worker system, but part of the technical challenge to myself is to see if I can use cloudflare workers to make this happen. I’m also hosting the database elsewhere.)

Background on me:
I have almost a decade of experience in web development and currently work on a large node application. I do not, however, have experience with devops, as our org has its own team to handle such things. (I tried docker many years ago, so I’m familiar with what it does, just not how to use it.)

An application on Fly is, effectively, a process running on its own VM. This is an instance of the application. When the capacity of an instance, defined by the number of concurrent connections, is exceeded, Fly automatically creates a new instance to run alongside it in the same region. Instances are created in regions selected by the user, in response to scaling rules. An instance is different from a process on a typical OS because there’s only one process running in an instance; generally there’s no forking or launching of other processes.
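For reference, those connection limits live per service in the app’s fly.toml; a minimal sketch (the port and limit values here are illustrative, not defaults):

```toml
# fly.toml -- concurrency settings that drive scaling (illustrative values)
[[services]]
  internal_port = 8080
  protocol = "tcp"

  [services.concurrency]
    hard_limit = 25   # above this many concurrent connections, another instance may be started
    soft_limit = 20   # Fly prefers routing to instances under this load
```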

Docker implements a similar model for its own runtime. Docker Compose is a Docker tool that lets you coordinate multiple instances to give the appearance of a multi-process application.

On Fly, we just let you make another app, which you can deploy and coordinate with other components (in their own apps) as needed.

Thanks for your reply.
So if I understand correctly, if I want to run two processes, I should run them as two separate applications?

How does this affect the CPU and memory tier I use? Will multiple applications share CPU and memory, or will I, in effect, be using at least 2x micro-1x?

If I’ve misunderstood, and you’re not suggesting two applications, how do I go about running two processes?

Yes, if they are independent of each other, then run two applications.

Multiple applications do not share CPU and memory. If left untouched, it will be two micro-2x VMs; scale the VM size down to get two micro-1x instances.

Thanks for the info.
How do I go about scaling it down by default?
I assume it’s an option or config I’ve missed?

Run flyctl scale vm micro-1x

The scaling options are under the flyctl scale command - See https://fly.io/docs/scaling/

Hi there,
I couldn’t find a more closely related topic for my question, so I decided to ask here.
I have a Django app with several other Docker images (postgres, redis, selenium) and a data volume.
Currently I use docker-compose to run these images together.
Since a single app can’t have more than one Docker image, can I have multiple apps that communicate with each other on different (internal) ports?

@mahdikhashan1 not yet! We’re rolling out private networking very soon, which will solve part of that for you. We’re hoping to make “launch Docker compose on Fly” reasonably easy soon, but it’s not something you can do very well yet.

@kurt When will docker-compose app support be launched?

We still haven’t gotten to it, sadly. It turns out that Docker Compose includes a _lot_ of stuff that makes it hard to support. We do have everything you’d need to run the containers defined in a docker-compose.yml, so if you want help figuring out how to make a cluster of apps from a compose file, we can at least explain what you need to do!

Hi @kurt, I’d like to try this for a docker compose setup, if possible. What do I need to do?

Would you mind sharing your docker-compose file here?

In Fly world, you’ll need to create one app per compose entry and then make sure they know each others’ internal hostnames (like <app>.internal). You’ll also need to manually create volumes if your compose app relies on disks.
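Concretely, where compose services refer to each other by service name, the separate Fly apps would use those .internal hostnames instead; a sketch with made-up app names:

```toml
# [env] section of the web app's fly.toml (app names here are hypothetical)
[env]
  DATABASE_URL = "postgresql://postgres:password@myproj-postgres.internal:5432/postgres"
  REDIS_URL = "redis://myproj-redis.internal:6379/0"
```

In practice you’d put the password in a secret via flyctl rather than in the toml file.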

Hey, sure thing! Thanks for the quick response, let’s use this as an example.

If I understand correctly it sounds like you’re saying each of the below containers should be their own app?

version: '3.7'
services:
  nginx:
    image: nginx:1.17
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - 8000:80
    depends_on:
      - backend
      - frontend

  redis:
    image: redis
    ports:
      - 6379:6379

  postgres:
    image: postgres:12
    restart: always
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - '5432:5432'
    volumes:
      - db-data:/var/lib/postgresql/data:cached

  worker:
    build:
      context: backend
      dockerfile: Dockerfile
    command: celery --app app.tasks worker --loglevel=DEBUG -Q main-queue -c 1

  flower:
    image: mher/flower
    command: flower --broker=redis://redis:6379/0 --port=5555
    ports:
      - 5555:5555
    depends_on:
      - "redis"

  backend:
    build:
      context: backend
      dockerfile: Dockerfile
    command: python app/main.py
    tty: true
    volumes:
      - ./backend:/app/:cached
      - ./.docker/.ipython:/root/.ipython:cached
    environment:
      PYTHONPATH: .
      DATABASE_URL: 'postgresql://postgres:password@postgres:5432/postgres'
    depends_on:
      - "postgres"

  frontend:
    build:
      context: frontend
      dockerfile: Dockerfile
    stdin_open: true
    volumes:
      - './frontend:/app:cached'
      - './frontend/node_modules:/app/node_modules:cached'
    environment:
      - NODE_ENV=development


volumes:
  db-data:

Each of these containers would need to be a separate app, yes. As mentioned above, you’d have to set up volumes and env vars/secrets manually. There is no way to do depends_on either, but this may not be a problem for a production environment.
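For instance, the redis entry above might become its own app with a fly.toml along these lines (the app name is invented; the internal port matches the compose file):

```toml
# redis.toml -- stands in for the compose `redis:` service
app = "myproj-redis"

[build]
  image = "redis"

[[services]]
  internal_port = 6379
  protocol = "tcp"
```

Other apps would then reach it at myproj-redis.internal:6379 over private networking, rather than through a published port.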

What people seem to be doing now is to create a toml file for each app in a single repo/directory and use some scripting around fly deploy -c target.toml --dockerfile some/Dockerfile.
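A sketch of that scripting, assuming one toml per app in an apps/ directory (the layout and function name are invented, not something from Fly’s docs):

```shell
#!/bin/sh
# Deploy every app in the cluster: one fly.toml per app, e.g.
#   apps/backend.toml, apps/redis.toml, ...
# Pass an alternative command (e.g. `echo`) as $1 to dry-run.
deploy_all() {
  cmd="${1:-flyctl}"
  for cfg in apps/*.toml; do
    # each toml carries its own `app = "..."` name, so -c is all we pass
    "$cmd" deploy -c "$cfg"
  done
}

# deploy_all        # real deploys
# deploy_all echo   # dry run: just print the commands
```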
