Help getting Horizon running

I’m at a loss; I followed the directions for setting up queue workers but swapped queue:listen with horizon, and I cannot get Horizon to stay up. I even tried opening an SSH console connection to run horizon directly, and something would kill the process.

root@6e82559a7ee2e8:/var/www/html# php artisan horizon:status
Horizon is inactive.
root@6e82559a7ee2e8:/var/www/html# php artisan horizon       
Horizon started successfully.
root@6e82559a7ee2e8:/var/www/html# php -d memory_limit=-1 artisan horizon
Horizon started successfully.

Here is my fly.toml:

# fly.toml app configuration file generated for longhornphp-app on 2023-08-12T16:35:41-05:00
# See for information about how to use this file.

app = "longhornphp-app"
primary_region = "iad"
console_command = "php /var/www/html/artisan tinker"

[build]
  [build.args]
    NODE_VERSION = "18"
    PHP_VERSION = "8.1"

[env]
  APP_ENV = "production"
  LOG_CHANNEL = "stderr"
  LOG_LEVEL = "info"
  LOG_STDERR_FORMATTER = "Monolog\\Formatter\\JsonFormatter"
  SESSION_DRIVER = "cookie"

[http_service]
  internal_port = 8080
  force_https = true
  auto_stop_machines = true
  auto_start_machines = true
  min_machines_running = 1
  processes = ["app"]

[processes]
  app = ""
  cron = "cron -f"
  worker = "php -d memory_limit=-1 artisan horizon"

Could use a nudge in the right direction.


What’s your setup with Redis? Since Horizon needs it, I wonder if that’s the missing piece.

In general, though, I’d be looking for errors - perhaps in the app logs (fly logs, or the log output in the Fly web console).

The managed Redis on Fly is provided by Upstash, but they’re running a Redis-compatible service that isn’t at 100% feature parity with Redis, so it’s possible Horizon does something wonky that it doesn’t like. I’ve never heard that reported so far, however!

But in either case, the thing to do is to find an error message!
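The quickest way to surface an error is usually the log stream - a sketch of the commands, with the app name taken from the fly.toml above:

```shell
# Stream logs for the whole app (all process groups / Machines)
fly logs --app longhornphp-app

# Or check Horizon's own view of things over SSH
fly ssh console --app longhornphp-app -C "php artisan horizon:status"
```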

You can remove the env var that sets LOG_CHANNEL to stderr (reverting to regular file-based logs), redeploy, and then tail the log file if you’d like - that might make more sense if you’re trying this out in an SSH session.

However, be aware that by default, Machines turn off after a period of HTTP request inactivity - unless fly.toml instructs otherwise - and disks are ephemeral, so you can lose those log files when a Machine stops and restarts, unless you add a volume to persist your storage dir.
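(For reference, persisting the storage dir would look something like this in fly.toml - a sketch; the volume name is an assumption, and the volume itself has to be created first with fly volumes create:)

```toml
# Sketch: persist Laravel's storage dir across restarts.
# Assumes a volume named "storage_vol" created via `fly volumes create`.
[mounts]
  source = "storage_vol"
  destination = "/var/www/html/storage"
```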

Here’s the error I have when I tail the logs:

[2023-08-16 22:34:40] production.ERROR: socket error on read socket {"exception":"[object] (RedisException(code: 0): socket error on read socket at /var/www/html/vendor/laravel/framework/src/Illuminate/Redis/Connections/Connection.php:116)
#0 /var/www/html/vendor/laravel/framework/src/Illuminate/Redis/Connections/Connection.php(116): Redis->eval()
#1 /var/www/html/vendor/laravel/framework/src/Illuminate/Redis/Connections/PhpRedisConnection.php(531): Illuminate\\Redis\\Connections\\Connection->command()
#2 /var/www/html/vendor/laravel/framework/src/Illuminate/Redis/Connections/PhpRedisConnection.php(448): Illuminate\\Redis\\Connections\\PhpRedisConnection->command()
#3 /var/www/html/vendor/laravel/framework/src/Illuminate/Queue/RedisQueue.php(264): Illuminate\\Redis\\Connections\\PhpRedisConnection->eval()
#4 /var/www/html/vendor/laravel/horizon/src/RedisQueue.php(149): Illuminate\\Queue\\RedisQueue->migrateExpiredJobs()

I am using the Upstash Redis, and I configured my REDIS_ secret vars with the connection details it generated.

Here are the Redis connections the app loads:

    "default" => [
      "url" => null,
      "host" => "",
      "username" => null,
      "password" => "*****************************************",
      "port" => "6379",
      "database" => "0",
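For reference, here’s roughly what that section of config/database.php looks like - a sketch, with one addition worth double-checking: phpredis’s read_timeout, since a short read timeout can produce exactly this “socket error on read” on Horizon’s long-blocking Redis calls (the 60-second value is an assumption, not a confirmed fix):

```php
<?php
// config/database.php -- 'redis' => 'default' connection (sketch).
// 'read_timeout' is the assumption here: phpredis can time out on
// Horizon's blocking Redis calls if it's shorter than the block duration.
return [
    'default' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'username' => env('REDIS_USERNAME'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', '6379'),
        'database' => env('REDIS_DB', '0'),
        'read_timeout' => 60,
    ],
];
```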

Alright, I’ve confirmed that Redis is working normally, but Horizon is what doesn’t work at all. What would you suggest @fideloper-fly ?


I want to see whether Upstash Redis has some incompatibility with Horizon - since Horizon does a bunch of crazy things with Redis to store its data, that’s totally possible.

The way to test that would be to spin up an app that runs Redis, and then connect to that redis instead and THEN test horizon.

You’ll need to update your app config to use the new Redis you spun up manually. This article shows what that all looks like: Full Stack Laravel · Laravel Bytes

Looks like using a Redis app worked very briefly, and then crashed out. I noticed this in the main application’s logs, though:

2023-08-22T00:05:23Z app[286595eae92438] iad [info][ 1075.187062] Out of memory: Killed process 1324 (php) total-vm:146380kB, anon-rss:2280kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:264kB oom_score_adj:0
2023-08-22T00:14:16Z app[286595eae92438] iad [info][ 1608.128983] Out of memory: Killed process 1954 (php) total-vm:146328kB, anon-rss:2868kB, file-rss:0kB, shmem-rss:0kB, UID:33 pgtables:264kB oom_score_adj:0
2023-08-22T00:14:22Z app[286595eae92438] iad [info][ 1614.121669] Out of memory: Killed process 1903 (php) total-vm:146380kB, anon-rss:2308kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:260kB oom_score_adj:0

Here’s our horizon configuration with memory details:

    /*
    | Memory Limit (MB)
    |
    | This value describes the maximum amount of memory the Horizon master
    | supervisor may consume before it is terminated and restarted. For
    | configuring these limits on your workers, see the next section.
    */

    'memory_limit' => 64,

    /*
    | Queue Worker Configuration
    |
    | Here you may define the queue worker settings used by your application
    | in all environments. These supervisors and settings handle all your
    | queued jobs and will be provisioned by Horizon during deployment.
    */

    'defaults' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default'],
            'balance' => 'auto',
            'maxProcesses' => 1,
            'maxTime' => 0,
            'maxJobs' => 0,
            'memory' => 128,
            'tries' => 1,
            'timeout' => 60,
            'nice' => 0,
        ],
    ],
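Doing the arithmetic on those limits against the machine size (a back-of-envelope sketch; the per-process numbers come from the config above, and the 256 MB machine size is an assumption based on the default Machine size discussed below):

```python
# Back-of-envelope memory math for the Horizon machine (values from the
# horizon.php config above; the 256 MB machine size is an assumption).
machine_ram_mb = 256
master_limit_mb = 64    # horizon.php 'memory_limit' (master supervisor)
worker_limit_mb = 128   # 'memory' per queue worker
workers = 1             # 'maxProcesses' => 1

peak_mb = master_limit_mb + workers * worker_limit_mb
headroom_mb = machine_ram_mb - peak_mb
print(peak_mb, headroom_mb)  # 192 64
```

That leaves only about 64 MB for the OS, the supervisor process itself, and everything else on the machine, so an OOM kill under load is plausible.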


Assuming the machine in IAD with id 286595eae92438 - with the logs you pasted:

2023-08-22T00:05:23Z app[286595eae92438] iad [info][ 1075.187062] Out of memory: Killed process 1324 (php) total-vm:146380kB, anon-rss:2280kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:264kB oom_score_adj:0

Assuming that’s the Redis instance and not the application, it’s likely that 256 MB of RAM isn’t enough for Horizon. Try scaling that up to something larger via fly scale.

Something like:

cd /path/to/project/with/redis/fly.toml

fly scale memory 1024

You may need to re-deploy after that for the change to take effect - I don’t remember exactly whether it auto-updates for you (the command’s output will say if it re-deploys for you).

But assuming that works - great! Except, that also means Upstash isn’t compatible with Horizon, which is less great :confused:

I already did some redeploys, but that wasn’t the Redis app - it was one of the machines in the main app’s process group. I believe it was the machine assigned to our worker process ("worker" = "php artisan horizon").

I bumped that process’s memory up to 1024 MB, and so far a check through SSH shows Horizon staying online.

➜  longhornphp-2020-app git:(feature/flyio-migration) ✗ fly ssh console -C "php artisan horizon:status"
Connecting to ******************************... complete
Horizon is running.

Well, it looks like we got Horizon running; now I’m just running into issues that are probably more on PlanetScale’s side. I’ll share the error message here anyway, in case you’ve seen it before and know a quick fix. Otherwise I’ll wait on their support team (and I appreciate the help).

PDOException: SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute. in /var/www/html/vendor/laravel/framework/src/Illuminate/Database/Connection.php:414
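The exception message itself suggests one workaround - enabling buffered queries on the MySQL connection. In Laravel that would be set through PDO options in config/database.php; a sketch, not a confirmed fix for the PlanetScale case:

```php
<?php
// config/database.php -- 'mysql' connection 'options' (sketch).
// PDO::MYSQL_ATTR_USE_BUFFERED_QUERY makes PDO fetch full result sets
// eagerly, avoiding "Cannot execute queries while other unbuffered
// queries are active".
'options' => extension_loaded('pdo_mysql') ? array_filter([
    PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
]) : [],
```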

@fideloper-fly We are good to go now, looks like it was a memory issue and bumping things up made it work appropriately. Thanks for the help!

Sounds good!

I haven’t seen that query error before myself, so I’m not 100% sure where it’s coming from. You can enable stack traces in your log output, though, to get more information in your logs on error.

Note that earlier I was talking about adding more RAM to the Redis instance, but it sounds like adding RAM to the Horizon process (the VM running Horizon) is what resolved the issue, right? (It doesn’t really matter - I’m just curious - but it might mean the Redis instance is small, so keep an eye on its log output to make sure Redis isn’t evicting data when it’s close to running out of RAM.)

It was a collation issue on PlanetScale’s side. All fixed up now - and yeah, I just increased the worker’s memory scaling, and that seems to have worked with the Upstash instance.

@longhornphp I am having the same issue with PlanetScale, do you remember how you solved it?

My collation is: utf8mb4_unicode_ci