Rails 8 / Sqlite / Anthropic

Any reason why a call to Anthropic ('https://api.anthropic.com') would fail? Silently - I can't see anything in the logs - only info-level entries. Wish I could see errors. Surely there are some.

I'm trying to run a job. I'm sticking to the non-Redis version - declining to add Redis when fly launch tried to add it.

Rails 8 / Sqlite all the way baby!

My secret key is added with fly secrets. I can see it with fly secrets list.

It does run locally.

Am I doing something stupid again?
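[Editorial aside, not from the thread: one way to make a silently failing API call visible is to wrap it and log any exception explicitly before re-raising. The helper name and label below are hypothetical; only the logging pattern is the point.]

```ruby
require "logger"

# Hypothetical helper (not from the thread): run a block and log any
# exception at ERROR level before re-raising, so a background job can't
# fail without leaving a trace in the logs.
def with_error_logging(logger, label)
  yield
rescue StandardError => e
  logger.error("#{label} failed: #{e.class}: #{e.message}")
  raise # re-raise so the job backend can still record/retry the failure
end
```

Inside a job you might call `with_error_logging(Rails.logger, "anthropic") { ... }`; even if the queue backend otherwise swallows the exception, the ERROR line will at least show up in `fly logs`.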

fly launch only suggests Redis when it detects something in your configuration that requires it.

To run a job you need something like Sidekiq (which requires Redis) or Solid Queue (which does not). Either way, you need to run that tool in a separate process.

Try running the following in an empty directory:

rails new . --main
bin/rails generate job cleanup
fly launch

Note that redis is not prompted for. Then check your fly.toml, specifically for:

[env]
  DATABASE_URL = 'sqlite3:///data/production.sqlite3'
  HTTP_PORT = '8080'

[processes]
  app = 'bundle exec thrust ./bin/rails server'
  solidq = 'bundle exec rake solid_queue:start'

[[mounts]]
  source = 'data'
  destination = '/data'

So I will not need Redis, but I will need another machine?

Adding…
[processes]
app = 'bundle exec thrust ./bin/rails server'
solidq = 'bundle exec rake solid_queue:start'

…to the fly.toml added a couple of machines. That must not be all the config necessary. I get one of these errors on the machines - that is something!

Errno::EACCES: Permission denied @ dir_s_mkdir - /data (Errno::EACCES)
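[Editorial aside, not from the thread: this error usually means the container's non-root user can't create or write the /data mount point. A hedged Dockerfile sketch, assuming the default Rails Dockerfile's `rails` user - not the thread's actual fix:]

```dockerfile
# Sketch only: create the mount point and hand it to the non-root user
# before the volume is first used. If the Fly volume mount resets
# ownership, the chown may instead need to happen at container start
# (e.g. in the entrypoint, while still running as root).
RUN mkdir -p /data && chown rails:rails /data
```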

For now I'm going to kill these machines, work locally for a while, and see what shakes from the trees when Rails 8 is launched.

Thanks for your help.

SQLite is a file-based database. To access the same database from both the web and worker processes, you either need a shared volume or two synchronized SQLite instances. However, Fly cannot share volumes between machines, and synchronizing SQLite requires some advanced knowledge: see LiteFS - Distributed SQLite · Fly Docs.

I think adding a postgres instance is simpler.

Good point! I've started working on a pull request to create a Procfile to run solidq in this scenario:

Current draft of a Dockerfile for this scenario can be found at:

Wish I knew more about all this. I see the github convo with dhh.

If I were to try to get this into my app I’d add it as a gem or will fly do something with launch/deploy?

[Update] Seems launch handled a lot of this…

From the fly.toml, processes = ["app"] was getting flagged as redundant.

[http_service]
processes = ["app"]

Fly still tries to add redis.

Still added 2 machines.

Some possibly interesting things in the logs:

2024-10-21T17:16:18Z app[0801e06b051398] dfw [info] WARN could not unmount /rootfs: EINVAL: Invalid argument
2024-10-21T17:16:18Z app[0801e06b051398] dfw [info][ 3.883422] reboot: Restarting system

This is the best thing I could find…

2024-10-21T17:17:56Z app[e784930b695348] dfw [info]{"time":"2024-10-21T17:17:56.36050397Z","level":"INFO","msg":"Request","path":"/chats","status":500,"dur":14,"method":"POST","req_content_length":338,"req_content_type":"multipart/form-data; b

Both machines are stopped; perhaps they never started. I don't know - I'm just faffing around.

I started the machine from fly.io and got some more interesting messages; maybe all of this is already known.

2024-10-21T17:32:39.779 app[d8dd953b1e3378] dfw [info] INFO [fly api proxy] listening at /.fly/api

2024-10-21T17:32:39.793 runner[d8dd953b1e3378] dfw [info] Machine started in 878ms

2024-10-21T17:32:40.054 app[d8dd953b1e3378] dfw [info] 2024/10/21 17:32:40 INFO SSH listening listen_address=[fdaa:a:6a0f:a7b:67:9d92:f7c8:2]:22 dns_server=[fdaa::3]:53

2024-10-21T17:32:42.204 app[d8dd953b1e3378] dfw [info] SolidQueue-1.0.0 Error registering Supervisor (10.3ms) pid: 322, hostname: "d8dd953b1e3378", name: "supervisor-bd0c3b685aa4090c6572", error: "ActiveRecord::StatementInvalid Could not find table 'solid_queue_processes'"

2024-10-21T17:32:42.204 app[d8dd953b1e3378] dfw [info] SolidQueue-1.0.0 Started Supervisor (215.5ms) pid: 322, hostname: "d8dd953b1e3378", process_id: nil, name: "supervisor-bd0c3b685aa4090c6572"

2024-10-21T17:32:42.204 app[d8dd953b1e3378] dfw [info] rake aborted!

2024-10-21T17:32:42.204 app[d8dd953b1e3378] dfw [info] ActiveRecord::StatementInvalid: Could not find table 'solid_queue_processes' (ActiveRecord::StatementInvalid)

2024-10-21T17:32:42.204 app[d8dd953b1e3378] dfw [info] /usr/local/bundle/ruby/3.3.0/gems/activerecord-8.0.0.rc1/lib/active_record/connection_adapters/sqlite3_adapter.rb:512:in `table_structure'

That suggests that the migration was not run.

I know that solid queue works with sqlite3 on Rails 8 as I’ve tested it, and that it won’t prompt for redis on a clean sqlite3 Rails 8 app; so I’m guessing that you have taken an app that was created prior to Rails 8 and are adding solid queue to it?

And, by the way, if you set SOLID_QUEUE_IN_PUMA = 1 in the [env] section of your fly.toml, you can delete the solidq process.
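[Editorial aside, not quoted in the thread: the generated `config/puma.rb` in a Rails 8 app consumes that variable by loading Solid Queue's Puma plugin, roughly like this - check your own generated file:]

```ruby
# config/puma.rb (excerpt): run Solid Queue's supervisor inside the Puma
# process when SOLID_QUEUE_IN_PUMA is set, so no separate worker process
# (or machine) is needed.
plugin :solid_queue if ENV["SOLID_QUEUE_IN_PUMA"]
```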

I encourage you to try the instructions here: Rails 8 / Sqlite / Anthropic - #2 by rubys ; and take a look at the resulting config/database.yml.

It is a worthy goal to make sure Rails/SQLite and Fly work well together. I'd like to help, but the honest truth is I may be unqualified - perfect subject for such an endeavor :slight_smile:!

That suggests that the migration was not run.

Yup, but I don’t know why. The app db is there.

I know that solid queue works with sqlite3 on Rails 8 as I’ve tested it, and that it won’t prompt for redis on a clean sqlite3 Rails 8 app; so I’m guessing that you have taken an app that was created prior to Rails 8 and are adding solid queue to it?

I don’t think so. I’ve just been using 8.

And, by the way, if you set SOLID_QUEUE_IN_PUMA = 1 in the [env] section of your fly.toml, you can delete the solidq process.

I tried setting that. Perhaps that is why this failed.

I encourage you to try the instructions here: Rails 8 / Sqlite / Anthropic - #2 by rubys ; and take a look at the resulting config/database.yml .

To be clear: you want me to fire up a bare Rails 8 app with a job and see how it launches?

It did NOT ask to install redis.

Running: bin/rails generate dockerfile --label=fly_launch_runtime:rails --skip

Ah yes, I see that now. But --skip?


  PROCESS                                 | ADDRESSES                            
------------------------------------------*--------------------------------------
  /.fly/hallpass                          | [fdaa:a:6a0f:a7b:6b:3614:dafa:2]:22  
  puma 6.4.3 (tcp://0.0.0.0:3000) [rails] | 0.0.0.0:3000                         

WARN failed to release lease for machine 0801e01f019738: lease not found
-------

-------
 ✖ Failed: timeout reached waiting for health checks to pass for machine 0801e01f019738: failed to get VM 0801e01f01973…
-------
Error: timeout reached waiting for health checks to pass for machine 0801e01f019738: failed to get VM 0801e01f019738: Get "https://api.machines.dev/v1/apps/flytest-withered-cherry-7701/machines/0801e01f019738": net/http: request canceled

https://api.machines.dev/v1/apps/flytest-withered-cherry-7701/machines/0801e01f019738

The first time I ran it …

  • create 2 “app” machines

Second

  • create 1 “app” machine
  • create 1 “solidq” machine and 1 standby machine for it

Hmm, tried again same result. Failed after a few minutes.

That’s not the db that solid queue uses. Here’s an excerpt from config/database.yml from a fresh rails 8 application:

# Store production database in the storage/ directory, which by default
# is mounted as a persistent Docker volume in config/deploy.yml.
production:
  primary:
    <<: *default
    database: storage/production.sqlite3
  cache:
    <<: *default
    database: storage/production_cache.sqlite3
    migrations_paths: db/cache_migrate
  queue:
    <<: *default
    database: storage/production_queue.sqlite3
    migrations_paths: db/queue_migrate
  cable:
    <<: *default
    database: storage/production_cable.sqlite3
    migrations_paths: db/cable_migrate

If you want to use solid queue, you are going to need to have the queue database defined, along with its migrations.
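[For reference, and hedged since it isn't quoted in the thread: a fresh Rails 8 app wires Solid Queue to that `queue` database in `config/environments/production.rb` along these lines:]

```ruby
# config/environments/production.rb (excerpt)
# Use Solid Queue as the Active Job backend, and point it at the
# separate "queue" database defined in config/database.yml.
config.active_job.queue_adapter = :solid_queue
config.solid_queue.connects_to = { database: { writing: :queue } }
```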

Moving to production, I guess I was assuming these migrations were run automatically.

So, I started to try to run it in prod locally. That has been interesting.

I managed to get the migrations and related tables there. I stuffed the solid_cache_entries into the wrong db - the proper one, as you can see in the screenshot, is there.

The one thing I see as different is the url key in my database.yml:


# Store production database in the storage/ directory, which by default
# is mounted as a persistent Docker volume in config/deploy.yml.
production:
  primary:
    <<: *default
    database: storage/production.sqlite3
    url: <%= ENV["DATABASE_URL"] %>
  cache:
    <<: *default
    database: storage/production_cache.sqlite3
    migrations_paths: db/cache_migrate
    url: <%= URI.parse(ENV["DATABASE_URL"]).tap { |url| url.path += "_cache" } if ENV["DATABASE_URL"] %>
  queue:
    <<: *default
    database: storage/production_queue.sqlite3
    migrations_paths: db/queue_migrate
    url: <%= URI.parse(ENV["DATABASE_URL"]).tap { |url| url.path += "_queue" } if ENV["DATABASE_URL"] %>
  cable:
    <<: *default
    database: storage/production_cable.sqlite3
    migrations_paths: db/cable_migrate
    url: <%= URI.parse(ENV["DATABASE_URL"]).tap { |url| url.path += "_cable" } if ENV["DATABASE_URL"] %>
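[Editorial aside: those `url:` lines can be checked in isolation - each one takes `DATABASE_URL` and appends a suffix to its path. A small standalone demonstration, using the DATABASE_URL value from the fly.toml earlier in the thread:]

```ruby
require "uri"

# Mimic what the generated database.yml does for the cache/queue/cable
# entries: parse DATABASE_URL and append a suffix to the path.
base = "sqlite3:///data/production.sqlite3"
queue_url = URI.parse(base).tap { |url| url.path += "_queue" }

puts queue_url
# Note the suffix lands after the .sqlite3 extension:
# sqlite3:///data/production.sqlite3_queue
```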

Thanks rubys.

The database.yml file for production was updated by fly launch to ensure that all of the databases are placed on a volume, and therefore survive a restart (including a restart as a result of a deploy).
