Hi,
I hope you are well.
My Rails application is restarting all the time due to this error:
Connection lost (ECONNRESET) (Redis::ConnectionError)
Reading the Fly documentation, I see they recommend patching ActionCable to restart the server every time this happens.
The problem is that, with that approach, I would be restarting my server all the time. Restarting the ActionCable server also means that clients lose their connections.
Why is the connection to Redis lost?
I don't think it's feasible to run this in production with this issue. What could I do?
Thanks!
Upstash Redis is designed for serverless clients, which makes it a good match for Rails caching but not quite as good a match for web sockets. The patch gets by, but perhaps installing Redis in your own image is a better solution.
If you generated your Dockerfile with a fairly recent flyctl launch, I can walk you through the steps:
Add ruby-foreman redis-server to DEPLOY_PACKAGES.
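For reference, assuming your generated Dockerfile declares DEPLOY_PACKAGES as a build ARG, the change might look something like this (the existing package list here is only a placeholder; keep whatever yours already has and append the two new packages):
ARG DEPLOY_PACKAGES="postgresql-client file vim curl gzip ruby-foreman redis-server"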
Add the following some place after the last apt-get install:
# configure redis
RUN sed -i 's/^daemonize yes/daemonize no/' /etc/redis/redis.conf &&\
sed -i 's/^bind/# bind/' /etc/redis/redis.conf &&\
sed -i 's/^protected-mode yes/protected-mode no/' /etc/redis/redis.conf &&\
sed -i 's/^logfile/# logfile/' /etc/redis/redis.conf
Change the server task in lib/tasks/fly.rake to run:
Bundler.with_original_env do
  sh "foreman start --procfile=Procfile.fly"
end
Add a Procfile.fly:
web: bin/rails fly:server
redis: redis-server /etc/redis/redis.conf
Finally either change the REDIS_URL secret to redis://localhost:6379/1, or better yet delete the secret entirely and add REDIS_URL as an environment variable in fly.toml.
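For example, the fly.toml entry could look like this (a sketch; the /1 database matches the secret value above):
[env]
  REDIS_URL = "redis://localhost:6379/1"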
I realize this is a lot of manual steps, but I’m working on a Rails generator that will help you with changes such as these. Meanwhile, I’ve tested the above, and it works (and will be what I use for my personal application).
Thanks!
Do you think there is a way to do the same but with Fly’s processes?
To me, that seems better than using foreman in the same container.
I don’t know how I would connect to Redis in this way.
What do you think?
In my explorations, I'm actually doing it with machines, so the good news is that it should work with Fly processes. In fact, the sed commands I included above turn off bind and protected mode to enable this and aren't necessary in a single-VM deployment.
To get that to work, obviously drop the install of ruby-foreman, don't make changes to fly.rake, and don't add a Procfile.fly.
I said that was the good news. The one part that I haven't worked out yet is service discovery. In short, the remaining issue is what you set REDIS_URL to for such a configuration. The above assumes that you want to run Redis on a port that is not exposed to the internet, and makes use of Private Networking · Fly Docs only.
For now, I get around this by creating a small rake task that I invoke via rails deploy instead of fly deploy, but I'm continuing to look for a better way. If you want to see this in action, I have a demo that you can run: Progress update on scaling a Rails Application - #2 by rubys
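To give an idea of the shape of that workaround, here is a hypothetical sketch of such a task (the task name, the Redis address, and the --stage flag are all assumptions, not my actual code):
# lib/tasks/deploy.rake -- hypothetical sketch
task :deploy do
  # Point the app at Redis over private networking; the address is the
  # unresolved service-discovery piece mentioned above.
  sh "flyctl secrets set REDIS_URL=redis://myapp-redis.internal:6379/1 --stage"
  sh "flyctl deploy"
end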
Thanks for your help.
All the options out there are a bit overwhelming.
So for now, is the only way to do it to use foreman?
If not, do you have any example configuration for this case?
Would the fly:redis image have to be used?
Indeed!
I can walk you through a number of different options, but the one I described in my initial post is likely the best fit for now. I won't overwhelm you further by describing all the things I've tried and why I've settled on this for now, but if you suggest alternatives, as you did with fly.toml's processes, I can identify what is currently blocking me from making progress down that path.
The good news is that if/when better options become available you won't have to change your code, just the value of REDIS_URL. And you will be able to delete your Procfile at that time.
As I said, there are other solutions I don’t quite recommend – yet. My demo post shows my progress.
No. What I described above is installing redis-server using the Debian package manager and using it.
Thanks!
What makes me sad is not being able to use the Fly Redis managed solution, which seems so much better. It feels like a step backwards.
In the PR you point to in the Fly documentation, Palkan suggested an approach that avoids having to restart the Action Cable server, but there has been no progress on it for a while now.
I'm still really a beginner at this and don't know how to move that forward, but being able to use Upstash with Action Cable would be a good option, instead of having to manage Redis manually.
I'll keep an eye out for news about doing this with Fly processes as long as the Action Cable solution doesn't arrive.
It is not clear to me why Action Cable development has stalled, but perhaps you would be interested in an alternative: AnyCable? They have published documentation on using Fly processes as well as a separate app: AnyCable Docs
Thank you for posting this. I believe in the Procfile you meant web: bin/rails server to run the web server, instead of the fly:server rake task?
Also FWIW, given the Dockerfile, fly.rake, fly.toml, etc. generated with fly launch a few days ago, I had to explicitly specify port 8080 in the Procfile to get a successful deploy: web: bin/rails server -p 8080.
The fly:server rake task will run your server, but will also set up a swapfile. It is also a place where you can add additional startup instructions.
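For reference, the generated swapfile task is roughly along these lines (a sketch; the size and exact commands in your fly.rake may differ):
task :swapfile do
  # Allocate and enable a swapfile on the VM before the server starts.
  sh 'fallocate -l 512M /swapfile'
  sh 'chmod 0600 /swapfile'
  sh 'mkswap /swapfile'
  sh 'echo 10 > /proc/sys/vm/swappiness'
  sh 'swapon /swapfile'
end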
@rubys not sure if I'm missing something here, but isn't it a problem if fly:server points to your Procfile.fly…
sh "foreman start --procfile=Procfile.fly"
…but then the Procfile.fly itself points back to fly:server:
web: bin/rails fly:server
redis: redis-server /etc/redis/redis.conf
I think this may have been what @z1lk was referring to?
@andrew-erlanger I have been experimenting and tinkering so I have probably said inconsistent things over time. What I have found is that if you want to have a Procfile, your server task should look something like:
task :server => %i(swapfile) do
  Bundler.with_original_env do
    sh "foreman start --procfile=Procfile.fly"
  end
end
And your Procfile.fly should launch the apps directly (example: bin/rails server) rather than invoke Rake tasks.
@rubys thanks for clarifying. I think there may have just been a typo in an earlier comment, but it all makes sense now. Keep up the great work on the Redis front; it looks like much of what you're working on at the moment will be a huge help for many apps.
@rubys, thanks for the recommendation. For now I decided to use AnyCable; these are the steps I followed. Maybe they would be useful to include in the guide.
I added the anycable-rails gem and ran the setup task, rails g anycable:setup.
I configured the Action Cable adapter as any_cable, and in the production environment I set the Action Cable URL like this:
config.action_cable.url = "wss://appname.fly.dev:8080/cable"
In my case, since I am only using Turbo Streams, I only need to run anycable-go, so I installed the anycable-rails-jwt gem as well.
In views/layouts/application.html.erb I added this in the head:
<%= action_cable_with_jwt_meta_tag %>
If I weren't using JWT, it would be this:
<%= action_cable_meta_tag %>
The following is almost all for use with Turbo:
Also, in anycable.yml, you need to add this; I put it below redis_url:
jwt_id_key: <%= ENV.fetch('ANYCABLE_JWT_ID_KEY', 'some-secret-key') %>
This also needs to be configured in the application:
config.turbo.signed_stream_verifier_key = ENV.fetch('ANYCABLE_TURBO_RAILS_KEY', 'some-secret-key')
It all depends on whether Action Cable will be used in development, or AnyCable will be used in both development and production.
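For example, a config/cable.yml along these lines keeps the regular adapter in development and switches to AnyCable only in production (a sketch; adjust to your setup):
development:
  adapter: async
production:
  adapter: any_cable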
Keep in mind that since I only use Turbo Streams for now, I don't need the RPC server, but in other setups it needs to be considered as well.
You have to install anycable-go; for example, you can run rails g anycable:download --bin-path=/home/user/rails_app/bin/.
Then, to start the application in development I use the following Procfile:
web: bin/rails server -b 0.0.0.0 -p 3000
tailwindcss: rails tailwindcss:watch
dartsass: bin/rails dartsass:watch
ws: bin/anycable-go --port=8080 --jwt_id_key=some-secret-key --turbo_rails_key=some-secret-key --disable_disconnect
The Dockerfile I have is the following:
FROM quay.io/evl.ms/fullstaq-ruby:3.1.2-jemalloc-bullseye-slim
WORKDIR /app
RUN apt-get update -q \
&& apt-get install --assume-yes -q --no-install-recommends build-essential libpq-dev libvips \
&& apt-get autoremove --assume-yes \
&& rm -Rf /var/cache/apt \
&& rm -Rf /var/lib/apt/lists/*
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true
COPY Gemfile /app/
COPY Gemfile.lock /app/
RUN bundle config --global frozen 1
RUN bundle config set --local without 'development test'
RUN bundle install
RUN mkdir /app/bin/
RUN curl -L https://github.com/anycable/anycable-go/releases/latest/download/anycable-go-linux-amd64 -o /app/bin/anycable-go
RUN chmod +x /app/bin/anycable-go
COPY . /app
RUN SECRET_KEY_BASE=dumb bundle exec rake DATABASE_URL=postgresql:does_not_exist assets:precompile
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "3000"]
The fly.toml I have is as follows. I am using GoodJob as the background job backend; it would be interesting to add it to the guides because it avoids having to use Redis. I run AnyCable as a process, exposed through public port 8080.
# fly.toml file generated for appname on 2022-08-31T03:37:30-04:00
app = "appname"
kill_signal = "SIGINT"
kill_timeout = 5
[processes]
app = "bundle exec rails server -b 0.0.0.0 -p 3000"
ws = "bin/anycable-go --port=8080 --disable_disconnect"
worker = "bundle exec good_job start --probe-port=7001"
[env]
CABLE_URL = "wss://appname.fly.dev:8080/cable"
ANYCABLE_HOST = "0.0.0.0"
[experimental]
allowed_public_ports = []
auto_rollback = true
[deploy]
release_command = "bundle exec rails db:migrate"
[[statics]]
guest_path = "/app/public"
url_prefix = "/"
[[services]]
internal_port = 3000
processes = ["app"]
protocol = "tcp"
script_checks = []
[services.concurrency]
hard_limit = 25
soft_limit = 20
type = "connections"
[[services.ports]]
force_https = true
handlers = ["http"]
port = 80
[[services.ports]]
handlers = ["tls", "http"]
port = 443
[[services.tcp_checks]]
grace_period = "1s"
interval = "15s"
restart_limit = 0
timeout = "2s"
[[services.http_checks]]
interval = "10s"
grace_period = "5s"
method = "get"
path = "/healthcheck"
protocol = "http"
restart_limit = 0
timeout = "2s"
tls_skip_verify = false
[services.http_checks.headers]
[[services]]
internal_port = 8080
processes = ["ws"]
protocol = "tcp"
script_checks = []
[services.concurrency]
hard_limit = 10000
soft_limit = 10000
type = "connections"
[[services.ports]]
handlers = ["tls", "http"]
port = 8080
[[services.tcp_checks]]
grace_period = "1s"
interval = "15s"
restart_limit = 0
timeout = "2s"
[[services.http_checks]]
interval = "10s"
grace_period = "5s"
method = "get"
path = "/health"
protocol = "http"
restart_limit = 0
timeout = "2s"
tls_skip_verify = false
[services.http_checks.headers]
[[services]]
internal_port = 7001
processes = ["worker"]
protocol = "tcp"
script_checks = []
[services.concurrency]
hard_limit = 25
soft_limit = 20
type = "connections"
[[services.tcp_checks]]
grace_period = "1s"
interval = "15s"
restart_limit = 0
timeout = "2s"
[[services.http_checks]]
interval = "10s"
grace_period = "5s"
method = "get"
path = "/status"
protocol = "http"
restart_limit = 0
timeout = "2s"
tls_skip_verify = false
[services.http_checks.headers]
[[services.http_checks]]
interval = "10s"
grace_period = "5s"
method = "get"
path = "/status/started"
protocol = "http"
restart_limit = 0
timeout = "2s"
tls_skip_verify = false
[services.http_checks.headers]
[[services.http_checks]]
interval = "10s"
grace_period = "5s"
method = "get"
path = "/status/connected"
protocol = "http"
restart_limit = 0
timeout = "2s"
tls_skip_verify = false
[services.http_checks.headers]
It remains to add the RPC server if necessary.
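If the RPC server were needed, presumably it would be one more process entry in fly.toml, something like the following (a sketch; the port and flag are assumptions, and anycable-go would then need its rpc_host option pointed at it):
[processes]
  rpc = "bundle exec anycable --rpc-host=0.0.0.0:50051"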
It's several steps; it would be great to see this integrated into the guide. Anyway, it's unfortunate that Action Cable doesn't work with Fly's managed Redis. For small projects, having to host the RPC server and anycable-go is a bigger burden. Hopefully a PR will be accepted, or, why not, maybe you could help with that too?
I think for now there is a bit of inconsistency in the guides: lots of ways to do the same thing, which was a bit discouraging for me.
I really appreciate your help; it helped me find a solution that works well for me for now.
Also, I opened a pull request on the docs. It suggests removing a part of the Turbo Streams guide: in one part, you suggest creating an Action Cable channel called names. However, Turbo Streams don't work with conventional Action Cable channels; they work with Turbo::StreamsChannel, so you don't need to create separate channels, and in fact you don't even use them. At least that's my understanding; correct me if I'm wrong.
Thanks, and I hope this has been a helpful contribution!
Pull request merged.
I agree that there are too many choices, and it is disappointing that the default choice (ActionCable with Upstash Redis) doesn’t just work out of the box. There are plans to make Upstash Redis work better. I’ll continue to monitor ActionCable progress.
The goal is to make the default choice work (even if we eventually have to change the default choice to make it so), and to make it easy to switch to other configurations depending on your needs.
Meanwhile, I’m glad that you found a configuration that works for you, and thanks for the pull request!
Thanks for this!
I have a ton of problems with Upstash Redis, and this feels like a good alternative.
I had to change one thing in Procfile.fly:
- web: bin/rails fly:server
+ web: bin/rails server -p 8080
Otherwise you get in a loop: Procfile calls fly:server which calls Procfile which calls fly:server…
No clue why ENV["PORT"] is not being picked up, though.
Another problem popped up: Sidekiq (which I run in a separate process) could no longer connect to redis://localhost:6379/1.
So I added Sidekiq to the Procfile as well, and it seems to be working well.
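For reference, the resulting Procfile.fly would presumably look something like this (a sketch; the diff linked below shows the real thing):
web: bin/rails server -p 8080
redis: redis-server /etc/redis/redis.conf
sidekiq: bundle exec sidekiq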
I also added a new redis disk with fly vol create redis, and added
sed -i 's/^dir \/var\/lib\/redis/dir \/redis/' /etc/redis/redis.conf &&\
to use it.
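Using that volume also needs a [mounts] entry in fly.toml, presumably along these lines (a sketch; names assumed to match the volume created above):
[mounts]
  source = "redis"
  destination = "/redis"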
Here’s the full diff: Run Redis internally · miharekar/decent-visualizer@4e4f7c5 · GitHub
I think I now have a working app?
Hi @rubys and @miharekar –
Appreciate your work on this, I’m currently trying to resolve my redis issues using your solution.
Overall, deploys seem to succeed with the exception of the redis process, which keeps attempting to connect but doesn't establish a connection.
I'm wondering if I need to expose another port for 6379 in fly.toml?
For context, here is my setup:
Procfile.fly
web: bin/rails server
redis: redis-server /etc/redis/redis.conf
sidekiq: bundle exec sidekiq
fly.toml
# fly.toml file generated for risekit on 2022-12-24T10:50:03-06:00
app = "risekit"
kill_signal = "SIGINT"
kill_timeout = 5
[build]
dockerfile = 'Dockerfile.production'
[build.args]
BUILD_COMMAND = "bin/rails fly:build"
SERVER_COMMAND = "bin/rails fly:server"
[deploy]
release_command = "bin/rails fly:release"
[env]
PORT = "8080"
REDIS_URL = "redis://localhost:6379/1"
[experimental]
allowed_public_ports = []
auto_rollback = true
# [mounts]
# source="redis"
# destination="/redis"
# [processes]
# web = "bin/rails fly:server"
# worker = "bundle exec sidekiq"
[[services]]
http_checks = []
internal_port = 8080
processes = ["app"]
protocol = "tcp"
script_checks = []
[services.concurrency]
hard_limit = 25
soft_limit = 20
type = "connections"
[[services.ports]]
force_https = true
handlers = ["http"]
port = 80
[[services.ports]]
handlers = ["tls", "http"]
port = 443
[[services.tcp_checks]]
grace_period = "1s"
interval = "15s"
restart_limit = 0
timeout = "2s"
[[statics]]
guest_path = "/app/public"
url_prefix = "/"
Dockerfile
FROM ruby:3.1.3
ENV RAILS_ENV=production
RUN gem install "bundler:~>2" --no-document && \
gem update --system && \
gem cleanup
RUN apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends imagemagick \
libvips libvips-dev libvips-tools libpq-dev ruby-foreman redis-server && \
rm -rf /var/lib/apt/lists/* /var/cache/apt
RUN sed -i 's/^daemonize yes/daemonize no/' /etc/redis/redis.conf &&\
sed -i 's/^bind/# bind/' /etc/redis/redis.conf &&\
sed -i 's/^protected-mode yes/protected-mode no/' /etc/redis/redis.conf &&\
sed -i 's/^logfile/# logfile/' /etc/redis/redis.conf
# App
WORKDIR /app
COPY ./Gemfile* /app/
RUN bundle config --local without "development test omit" && bundle install --jobs $(nproc) --retry 5
COPY . /app
# NOTE: we dont have assets that need precompilation
# RUN bin/rails assets:precompile
CMD ["bin/rails", "s", "-b", "0.0.0.0"]
EXPOSE 3000
You really want to run redis as a separate app so it's not affected by deploys of your main application.
Create a new Fly app, using this config:
app = "myapp-redis"
[mounts]
destination = "/data"
source = "redis_server"
[metrics]
port = 9091
path = "/metrics"
[build]
image = "flyio/redis:6.2.6"
[env]
REDIS_PASSWORD = "my-password" # <-- can move this to secrets if you want
Make sure you create a new volume redis_server after you create the app, so Redis data can be persisted.
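For example (the size and app name are placeholders; adjust as needed):
fly volumes create redis_server --size 1 -a myapp-redis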
Once you deploy the redis app, you should be able to hit it via:
redis://:my-password@myapp-redis.internal:6379
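Then point your main app at it, for example (the app name is assumed):
fly secrets set REDIS_URL=redis://:my-password@myapp-redis.internal:6379 -a myapp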
I’ll give this a shot, thank you!
@sosedoff are you pulling this example from this app? GitHub - fly-apps/redis: Launch a Redis server on Fly
Yep!