Laravel LiteFS example cannot acquire lease or find primary

I followed the setup in the example project https://github.com/fly-apps/fly-laravel-litefs for LiteFS with Laravel.

I had the following error:

cannot acquire lease or find primary, retrying: fetch primary url: Get "http://127.0.0.1:8500/v1/kv/litefs/my-project": dial tcp 127.0.0.1:8500: connect: connection refused

After searching for a while, I found that I had to run fly consul attach, after which the application started to work.
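For anyone else hitting this, the fix was simply attaching a Consul cluster to the app (run from the project folder so flyctl picks up the app from fly.toml):

# Provision and attach a Consul cluster so LiteFS can acquire its lease
fly consul attach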

But I am now experiencing errors when starting, or even deploying, the application from GitHub.

Example error messages are shown below:

Smoke checks for 148e2d93ad2028 failed: the app appears to be crashing
Check its logs: here's the last lines below, or run 'fly logs -i 148e2d93ad2028':
  Successfully prepared image registry.fly.io/cloud-browser:deployment-01HY9Z5414YJ82N300XXN76XVJ (8.634258489s)
✖ [3/6] Machine 148e2d93ad2028 [app] update failed: smoke checks for 148e2d93ad2028 failed: the app appears to be crashing
✖ [1/6] Machine e2867540c7e308 [worker] update canceled while it was in progress
  Setting up volume 'storage_vol'
✖ [5/6] Machine 148e2d94b37408 [ssr] update canceled while it was in progress
  Opening encrypted volume
  Configuring firecracker
  2024-05-20T03:03:15.944797252 [01HY9Z7T7HYEGHDGSN7P4GK7QM:main] Running Firecracker v1.7.0
  [    0.037262] PCI: Fatal: No config space access function found
   INFO Starting init (commit: d772ddd9)...
   INFO Mounting /dev/vdc at /var/www/html/storage w/ uid: 0, gid: 0 and chmod 0755
   INFO Resized /var/www/html/storage to 1056964608 bytes
   INFO Preparing to run: `/entrypoint` as root
   INFO [fly api proxy] listening at /.fly/api
  2024/05/20 03:03:16 INFO SSH listening listen_address=[fdaa:9:4626:a7b:87:41cc:7701:2]:22 dns_server=[fdaa::3]:53
  Machine created and started in 9.6s
  Health check on port 8080 is in a 'warning' state. Your app may not be responding properly. Services exposed on ports [80, 443] may have intermittent failures until the health check passes.
     INFO  Nothing to migrate.
  Health check on port 8080 has failed. Your app is not responding properly. Services exposed on ports [80, 443] will have intermittent failures until the health check passes.
  config file read from /etc/litefs.yml
  LiteFS 251/merge, commit=01a22eb32202c5e16021f002c5cbba42a7dde1da
  Using Consul to determine primary
  initializing consul: key=litefs/cloud-browser url=https://:95bb01f1-b29f-afab-e6d7-72dc04f3e1f6@consul-fra-11.fly-shared.net/cloud-browser-z4k69dzkp04qp5mx/ hostname=148e2d93ad2028 advertise-url=http://148e2d93ad2028.vm.cloud-browser.internal:20202
  mount helper error: fusermount: mountpoint is not empty
  mount helper error: fusermount: if you are sure this is safe, use the 'nonempty' mount option
  ERROR: cannot init file system: cannot open file system: fusermount: exit status 1
  AF3948547478AC522FB8CB23: existing primary found (e7843ed9b425e8), connecting as replica
  AF3948547478AC522FB8CB23: disconnected from primary with error, retrying: connect to primary: Post "http://e7843ed9b425e8.vm.cloud-browser.internal:20202/stream": context canceled ('http://e7843ed9b425e8.vm.cloud-browser.internal:20202')

2024-05-20T03:29:08.109 app[90801e94c61d48] cdg [info] B2F0D6EB0DB71AD7DEF6DDD7: existing primary found (e7843ed9b425e8), connecting as replica

It just keeps failing like this several times, and I am not sure what the issue could be…
I removed the ssr and worker processes, but I am still getting the same issue, with the previous errors and others as below:


2024-05-20T03:29:08.096 app[90801e94c61d48] cdg [info] initializing consul: key=litefs/cloud-browser url=https://:95bb01f1-b29f-afab-e6d7-72dc04f3e1f6@consul-fra-11.fly-shared.net/cloud-browser-z4k69dzkp04qp5mx/ hostname=90801e94c61d48 advertise-url=http://90801e94c61d48.vm.cloud-browser.internal:20202

2024-05-20T03:29:08.099 app[90801e94c61d48] cdg [info] mount helper error: fusermount: mountpoint is not empty

2024-05-20T03:29:08.099 app[90801e94c61d48] cdg [info] mount helper error: fusermount: if you are sure this is safe, use the 'nonempty' mount option

2024-05-20T03:29:08.099 app[90801e94c61d48] cdg [info] ERROR: cannot init file system: cannot open file system: fusermount: exit status 1

2024-05-20T03:29:08.109 app[90801e94c61d48] cdg [info] B2F0D6EB0DB71AD7DEF6DDD7: existing primary found (e7843ed9b425e8), connecting as replica

2024-05-20T03:29:08.111 app[90801e94c61d48] cdg [info] B2F0D6EB0DB71AD7DEF6DDD7: disconnected from primary with error, retrying: connect to primary: Post "http://e7843ed9b425e8.vm.cloud-browser.internal:20202/stream": context canceled ('http://e7843ed9b425e8.vm.cloud-browser.internal:20202')


Hello again @discoverlance!

The above repository pulls in a previous version of LiteFS. But! Thanks to your report here, I've updated the repository's Dockerfile, etc/litefs.yml, fly.toml, and README.md to reflect the latest integration of a Laravel Fly app with LiteFS. You can check that out and see if the updated setup works for you! For a more complete reference, you can refer to the official docs we have here: Getting Started guide.

Checking the reported issue

As to the specific error you’re getting, can you check if there are contents in the directory specified in your etc/litefs.yml file’s fuse.dir path? The error is complaining about a nonempty location during mounting, so please try to delete the contents of the folder.
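If it helps, here's a rough sketch of how to check and clear it, assuming your fuse.dir is /var/www/html/storage/database and that you can still get a shell on the machine; adjust the path to whatever your etc/litefs.yml actually says, and keep in mind this deletes anything stored there:

# From your project folder, open a shell on the machine
fly ssh console

# Inside the machine: inspect the mount directory, then empty it (including hidden files)
ls -la /var/www/html/storage/database
find /var/www/html/storage/database -mindepth 1 -delete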

Then afterwards, in your etc/litefs.yml file, you can run your migration under exec, before running supervisord to start the Laravel server, like so:

exec:
  # That's right, we can run our migration here as well!
  - cmd: "php /var/www/html/artisan migrate --force"
  # Make sure the last command is the one that runs our server
  - cmd: "supervisord -c /etc/supervisor/supervisord.conf"

Or, if you want to migrate your local database.sqlite: after deleting the contents of your dir as mentioned above and running fly deploy to apply the changes to your app, follow the import steps found here
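Very roughly, the import ends up being a litefs import run from a shell on the primary machine, something like the sketch below; the database name and the /tmp path are placeholders, and exact flags can vary by LiteFS version, so do follow the linked steps:

# Assuming your local database.sqlite has already been copied to /tmp on the machine
litefs import -name database.sqlite /tmp/database.sqlite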

Thanks for the update. In the updated Laravel codebase, there's a mismatch and I'm not sure which one to use: the litefs.yml file specifies one dir, and the README specifies a different one.

# in the file https://github.com/fly-apps/fly-laravel-litefs/blob/main/etc/litefs.yml

fuse:
  # This is the folder where the database.sqlite our app uses will be located on
  # This should match the DB_CONNECTION value in our fly.toml file!
  dir: "/var/www/html/storage/database/database"
  allow-other: true

# In the README 
fuse:
    # This is the folder our database.sqlite is located in, as we've specified in our fly.toml file's env.DB_DATABASE attribute 
    dir: "/var/www/html/storage/database"

So which one is correct: /var/www/html/storage/database/database or /var/www/html/storage/database?

Also, for my application, I used the one in the README. But now I am getting a different error on my machine:

2024-05-31T11:20:23.385 app[185e772b453498] lhr [info] level=INFO msg="cannot find primary, retrying: no primary"

2024-05-31T11:20:23.923 app[90801e9df26148] lhr [info] level=INFO msg="cannot become primary, local node has no cluster ID and \"consul\" lease already initialized with cluster ID LFSC0C41533E99503312"

Oh! Thanks for pointing out this mismatch. I've updated the repository again so that the README and the contents of the files in the repository match properly. Thank you!

Now, to answer your questions:
1. Which one is the correct path for fuse.dir?
Basically, the fuse.dir value is the folder where the database.sqlite used by our app will be found. This depends on the DB_DATABASE value you specify in your fly.toml file's [env] section, so make sure the folder portion of env.DB_DATABASE matches fuse.dir!
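For instance, here's a minimal sketch of a matching pair, assuming the database lives at /var/www/html/storage/database/database.sqlite (the exact path is up to you, as long as the two stay in sync):

# fly.toml
[env]
  DB_CONNECTION = "sqlite"
  DB_DATABASE = "/var/www/html/storage/database/database.sqlite"

# etc/litefs.yml
fuse:
  # The folder portion of DB_DATABASE above
  dir: "/var/www/html/storage/database"
  allow-other: true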

2. How to fix the error: cannot find primary… cannot become primary, local node has no cluster ID

Can you update your litefs.yml's lease.consul.key value to a different value that is unique to your app, then deploy your changes? This should hopefully re-generate a proper cluster ID on your machine. We also have a page with steps you can follow to fix errors you might encounter while setting up LiteFS here
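As a sketch, assuming the rest of your lease section stays the same as in the example repository, only the consul.key line needs to change; the "-v2" suffix below is just an illustration of a value you haven't used before:

lease:
  type: "consul"
  consul:
    # FLY_CONSUL_URL is set for you when you run `fly consul attach`
    url: "${FLY_CONSUL_URL}"
    # Use a key that is unique to this app and hasn't been used before
    key: "litefs/cloud-browser-v2"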
