Hosting Next.js project on Fly.io vs Vercel

Hi,

I’m new to Next.js and am currently using Vercel as my hosting provider. But after looking into the pricing plans and cost scalability, I’m getting more curious about hosting my project containerized with Docker on Fly.io.

The latest version of Next.js uses React Server Components, and I’m using the newer App Router paradigm.

I have read the Fly.io docs about Next.js.

I want to ask what kind of infrastructure/features are unique to Vercel hosting.

Some of these seem easy to replicate, for example Image Optimization and Cron Jobs, but I think some would be very expensive and time consuming to self-host on Fly.io, for example the serverless functions and edge features.

Are there any limitations in what I can do in a Next.js project when hosting on fly.io?

Are there any examples/templates using Next.js SSR, SSG, and ISR?

The Edge* features are irrelevant on Fly because you can deploy to multiple regions close to your users (just make sure your DB has a replica next to each region).
Fly lets you scale to 0, so that’s equivalent to the “serverless” feature if you’re worried about costs but don’t mind the 1-3 s cold starts.
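
For reference, a minimal fly.toml sketch for that setup, assuming a Next.js server listening on port 3000; the app name and region are placeholders:

```toml
# fly.toml — scale-to-zero setup (app name and region are placeholders)
app = "my-next-app"
primary_region = "ams"          # pick a region close to your users (and your DB)

[http_service]
  internal_port = 3000          # the port the Next.js server listens on
  force_https = true
  auto_stop_machines = true     # stop machines when idle (scale to zero)...
  auto_start_machines = true    # ...and start them again on the next request
  min_machines_running = 0      # accept the 1-3 s cold start to save costs
```

You can then add machines in additional regions with flyctl so requests land close to your users.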

Image Optimization is natively supported in Next.js; you just need to install sharp in your Docker image. This requires a lot of RAM because the Next.js optimization pipeline holds images in memory instead of on disk. You can create another Fly app to handle this exclusively and point the <Image> component at it (see the Next.js <Image> component docs).
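
If you go the separate-app route, one way to point <Image> at it is a custom loader via images.loaderFile in next.config.js. This is only a sketch: the second app’s hostname and its /_next/image-style endpoint are assumptions (any image proxy with a compatible URL scheme would work):

```js
// next.config.js
module.exports = {
  images: {
    loader: 'custom',
    loaderFile: './image-loader.js',
  },
};

// image-loader.js — default-exported loader; builds the URL the <Image>
// component will request. "images.my-app.fly.dev" is a hypothetical second
// Fly app that runs the optimizer (sharp).
export default function flyImageLoader({ src, width, quality }) {
  const params = new URLSearchParams({
    url: src,
    w: String(width),
    q: String(quality || 75),
  });
  return `https://images.my-app.fly.dev/_next/image?${params}`;
}
```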

For cron jobs, you can set up your own service to handle this, e.g. Temporal.
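
If you don’t need something as heavy as Temporal, a small standalone scheduler process works too. Here’s a minimal sketch using node-cron; the schedule, the URL, and the CRON_SECRET env var are made up for illustration:

```js
// cron.js — run this as its own small Fly app (or process) alongside the site.
const cron = require('node-cron');

// Every night at 03:00, call a (hypothetical) API route on the Next.js app
// that does the actual work (cleanup, revalidation, emails, ...).
cron.schedule('0 3 * * *', async () => {
  const res = await fetch('https://my-next-app.fly.dev/api/nightly-job', {
    headers: { authorization: `Bearer ${process.env.CRON_SECRET}` },
  });
  console.log('nightly job:', res.status);
});
```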

ISR and PPR are the only truly proprietary features on Vercel. You can implement ISR yourself with the incrementalCacheHandlerPath option in next.config.js (see the Next.js docs for that option).
PPR is behind closed doors, but it’s probably not very useful when running Next.js in a container since everything loads so quickly anyway.
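
To illustrate, here’s roughly how the wiring looks. Note that in Next.js 14.0 the option sits under experimental, and newer releases renamed it to a top-level cacheHandler option, so check the docs for your exact version:

```js
// next.config.js — point Next.js at your own incremental cache handler.
module.exports = {
  experimental: {
    incrementalCacheHandlerPath: require.resolve('./cache-handler.js'),
    isrMemoryCacheSize: 0, // optional: disable the default in-memory cache
  },
};
```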


Thank you for your reply!

I hadn’t heard of PPR (Partial Prerendering) before; I understand it’s an experimental feature in Next.js 14.

I’m having a hard time understanding which Next.js features aren’t proprietary to Vercel.

Will these features work on VMs hosted at Fly.io?

Everything is available outside Vercel besides ISR and PPR. If you have 2 instances of your server, then you have to override the cache handler I linked above… instead of writing to a local file, you write/read to/from a shared store, e.g. S3.
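
For illustration, a minimal sketch of such a handler backed by S3 (referenced from incrementalCacheHandlerPath as above). The bucket name, key prefix, and the exact get/set/revalidateTag shapes are assumptions; double-check them against the cache handler interface documented for your Next.js version:

```js
// cache-handler.js — shared ISR cache in S3 so multiple machines see the same entries.
const { S3Client, GetObjectCommand, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});                 // region/credentials from the environment
const BUCKET = process.env.CACHE_BUCKET;     // hypothetical env var
const PREFIX = 'next-isr-cache/';

module.exports = class S3CacheHandler {
  constructor(options) {
    this.options = options;
  }

  async get(key) {
    try {
      const res = await s3.send(
        new GetObjectCommand({ Bucket: BUCKET, Key: PREFIX + key })
      );
      return JSON.parse(await res.Body.transformToString());
    } catch {
      return null; // treat missing objects as a cache miss
    }
  }

  async set(key, data) {
    await s3.send(
      new PutObjectCommand({
        Bucket: BUCKET,
        Key: PREFIX + key,
        Body: JSON.stringify({ value: data, lastModified: Date.now() }),
        ContentType: 'application/json',
      })
    );
  }

  async revalidateTag(tag) {
    // Omitted here: you'd need an index mapping tags to keys
    // (e.g. a small manifest object in the same bucket) to invalidate by tag.
  }
};
```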

The Edge runtime is a special runtime that removes a lot of bloat so the lambdas load and execute faster, but that doesn’t matter when you’re running Next.js in a container, since boot-up only happens once. You wouldn’t want to use that runtime.

Server Actions (mutations), Middleware, and everything else work as I mentioned before.

Here’s a good template for your Dockerfile: https://github.com/vercel/turbo/blob/main/examples/with-docker/apps/web/Dockerfile
It keeps the image as small as possible.

You’ll also want to wrap your Next.js server in a cluster to make use of all the CPUs (see the Node.js cluster module docs); e.g. a shared-cpu-8x@2048 (256 MB per core) should be optimal for most production apps.
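
A minimal sketch of that wrapper, assuming a standard Next.js custom server listening on port 3000 (with the standalone output the details differ slightly):

```js
// server.js — fork one worker per CPU; the workers share the listening port.
const cluster = require('node:cluster');
const os = require('node:os');

if (cluster.isPrimary) {
  const workers = os.availableParallelism?.() ?? os.cpus().length;
  for (let i = 0; i < workers; i++) cluster.fork();

  // Replace crashed workers so the app keeps serving on all cores.
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, restarting`);
    cluster.fork();
  });
} else {
  const { createServer } = require('node:http');
  const next = require('next');

  const app = next({ dev: false });
  const handle = app.getRequestHandler();

  app.prepare().then(() => {
    createServer((req, res) => handle(req, res)).listen(3000);
  });
}
```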


Is there any way to reduce the cold start? Any guidelines besides keeping at least one machine running?

There are a few, but the biggest bottleneck is Node.js itself, which takes about a second ± some delta to start. You can try to stay up to date with the latest Node.js version, as startup performance is on their priority list. We might see improvements in Node 22+, or you could try Bun (which probably needs another year in the oven to reach parity with Node’s features).

Firecracker takes about 500 ms ± 200 ms, so that brings the minimum cold start to roughly 1.5 s.

Any time you can shave off your application’s startup might help (like dynamically loading big libraries), but if it’s only saving a dozen ms here and there, it won’t be significant enough to matter.
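
For example (the library name is a placeholder), deferring a heavy dependency into the route handler that actually needs it keeps it out of the boot path:

```js
// app/api/report/route.js — a sketch of lazily loading a heavy library.
export async function POST(req) {
  // Loaded on first use instead of at server startup.
  const { render } = await import('some-heavy-lib');
  return Response.json({ html: render(await req.json()) });
}
```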


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.