Slow Laravel bootstrapping on Fly.io

Hi all,

I’m working on moving an app from an AWS EC2 machine (provisioned with Forge) over to Fly.io.

I’ve got everything working now, but I noticed that all pages load significantly slower (about 50% slower). After turning on performance monitoring in Sentry, I saw that bootstrapping takes ~180ms (49% of request time) on Fly.io versus ~28ms (4% of request time) on the AWS machine.

Any ideas as to why this might be the case? I know I can use Octane (and probably will at some point), but that feels like a band-aid; I must be doing something seriously wrong.

I’m using roughly the default configuration that fly launch generates for a Laravel app.

Greatly appreciate any help 🙂

The first thing I’d check is the location of the server, to compare like with like. I assume your new Fly VM is in a similar region to your EC2 instance, else the latency will differ and cause additional delay.

Next, if the bootstrap time is that much slower, it sounds like your app would benefit from caching. First, make sure all of Laravel’s caches are being built (routes, config, views). That has to happen at run-time rather than build-time, since things like secrets and env vars are only available once the machine is running.
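
For example, running those commands at start-up in the entrypoint (or wherever your image runs its release steps) should cover it:

php artisan config:cache
php artisan route:cache
php artisan view:cache

Depending on your Laravel version, php artisan optimize may bundle several of these into one command.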

Next, Laravel Forge sets up opcache. No idea if Fly’s out-of-the-box Laravel image does … I’d assume not, as it probably only adds the essentials. If not, that could well explain the difference in speed.

So maybe see if it does, and if not, give it a try:
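
Something along these lines, as a rough sketch (this assumes a Dockerfile based on the official PHP image; the one fly launch generates may differ, so adjust paths to suit):

# Dockerfile: install the extension if the base image doesn't ship it
RUN docker-php-ext-install opcache

# /usr/local/etc/php/conf.d/opcache.ini (or your image's conf.d directory)
opcache.enable=1
opcache.enable_cli=0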

Thank you for your reply!

The Fly VM is in a region roughly twice as close to me as the AWS one. In any case, latency doesn’t seem to be the problem: it’s the bootstrapping that’s taking a long time, not the round trip between the VM and my laptop.

First, make sure all of Laravel’s caches are being built (routes, config, views)

This is already in place; I have this in my entrypoint:

/usr/bin/php /var/www/html/artisan config:cache --no-ansi -q
/usr/bin/php /var/www/html/artisan route:cache --no-ansi -q
/usr/bin/php /var/www/html/artisan view:cache --no-ansi -q

Next, Laravel Forge sets up opcache. No idea if Fly’s out-of-the-box Laravel image does … I’d assume not, as it probably only adds the essentials. If not, that could well explain the difference in speed.

Yes, I actually checked this already: opcache is enabled by default. I confirmed it using phpinfo(), which reports it as enabled. I’ll still check whether further optimisation is possible, though; there are some options mentioned in the post you linked.
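
For anyone following along, the options I’m planning to look at are the usual production-leaning ones, roughly:

opcache.memory_consumption=256
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0

(I haven’t tuned these values yet, so treat them as a starting point. With validate_timestamps=0, PHP won’t pick up code changes until it restarts, which should be fine on Fly since a deploy replaces the machine anyway.)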

Thanks!

This is what Sentry is reporting btw:

Yeah, that certainly looks too long. I’d look at the opcache options to see if anything can be adjusted. The cache hits/misses for opcache on the phpinfo screen should give some idea of whether it’s doing its thing.
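
If it’s easier than squinting at phpinfo, you can also pull the same numbers from a throwaway route (sketch only; it has to run through the web server rather than the CLI, since the CLI gets its own opcache instance):

// routes/web.php (temporary, remove after checking)
Route::get('/opcache-status', function () {
    $stats = opcache_get_status(false)['opcache_statistics'];
    return [
        'hits'     => $stats['hits'],
        'misses'   => $stats['misses'],
        'hit_rate' => $stats['opcache_hit_rate'],
    ];
});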

I’d assume the SSDs Fly uses are just as fast as anything AWS uses, so it seems unlikely to be that either.

Octane should be much faster, since it keeps the framework booted in memory between requests. But yes, that shouldn’t be necessary.
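
(For reference if you do try it later, the basic setup is only a couple of commands; Swoole here is just one option, RoadRunner works too:)

composer require laravel/octane
php artisan octane:install --server=swoole
php artisan octane:start --server=swoole --host=0.0.0.0 --port=8000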

Any chance you’re making a network connection on each request to something far-ish away? (cache, database, AWS services like DynamoDB)
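
A quick way to rule that out is to time a trivial round trip to each dependency from a tinker session on the Fly VM. A rough sketch (assumes the default DB and Redis setup; anything consistently above a few milliseconds would be suspicious):

php artisan tinker
>>> $t = microtime(true); DB::select('select 1'); dump((microtime(true) - $t) * 1000);                    // ms to MySQL
>>> $t = microtime(true); Illuminate\Support\Facades\Redis::ping(); dump((microtime(true) - $t) * 1000);  // ms to Redis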

Oh - are you using a VM of similar size on Fly, or the default free-tier size?

As a quick/easy experiment, it might be worth scaling that up to see if it changes anything.
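
Something like this should do it (check fly scale --help for the sizes available on your plan):

fly scale show                  # current VM size and memory
fly scale vm shared-cpu-2x      # or performance-1x, etc.
fly scale memory 4096           # in MB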

The AWS machine is heavier than the Fly.io machine: it has 4 GB of RAM and 2 vCPUs, while the Fly VM currently has 2 GB of RAM on a shared-cpu-1x. I’ve tried VMs with a lot more resources, but it doesn’t make much difference.

Nope, I do none of that.

The only “external” services I am using are:

  • Another Fly app running mysql:8.
  • An Upstash Redis cluster (the $10 one).

On the Fly VM I have this under section “Zend OPcache”:

On AWS I have this:

Is there anything interesting visible? I’m not sure I understand all of it myself.

It looks like on the Forge/AWS machine the cache hit ratio is better, but that makes sense considering the Forge/AWS machine has been running for like a year already.

With Octane/Swoole I get ~30ms response times; without it, ~200ms (for a static landing page).

200ms is acceptable-ish, but I can’t understand why I get ~60ms on AWS.

Seems like I’m experiencing the same thing, though I was using vercel-php on AWS Lambda.