If the scale count is set to 2 or more, 404 responses are returned at random

Hello.

I was trying to run Deno’s Fresh on Fly.io.

The application URL is:

https://test-uyas.fly.dev/

I deployed the default Fresh application, with a few minor modifications to the Dockerfile and fly.toml generated by fly launch.
A strange phenomenon occurred when the scale count was 2 and I visited the URL where the application was hosted:
the browser reloaded multiple times and random 404 errors appeared.
Fresh’s Counter button does not work because of those 404s.
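To quantify the “random 404” behavior, a quick probe script can hit the URL repeatedly and tally the status codes. A minimal TypeScript sketch (the URL and request count below are just placeholders, not part of the original setup):

```typescript
// Count how often each HTTP status code appears in a list of responses.
function tallyStatuses(statuses: number[]): Record<number, number> {
  const counts: Record<number, number> = {};
  for (const s of statuses) {
    counts[s] = (counts[s] ?? 0) + 1;
  }
  return counts;
}

// Probe a deployed app n times and report the status-code distribution.
// Needs network access; `fetch` is built into Deno and Node 18+.
async function probe(url: string, n: number): Promise<Record<number, number>> {
  const statuses: number[] = [];
  for (let i = 0; i < n; i++) {
    const res = await fetch(url);
    statuses.push(res.status);
    await res.text(); // drain the body so the connection can be reused
  }
  return tallyStatuses(statuses);
}

// Example usage:
// probe("https://test-uyas.fly.dev/", 20).then(console.log);
```

With two machines serving different builds, a run like this should show a mix of 200s and 404s rather than all 200s.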


While thinking about what to do, I noticed some more interesting things.
First, the problem goes away on its own after about 10 minutes. And separately, with a scale count of 1 everything worked fine right after I deployed! It was interesting.

What should I do to ensure stable behavior even when deploying with a scale count of 2 or higher? The application itself just uses the default configuration generated by

deno run -A -r https://fresh.deno.dev

Here are my Dockerfile and fly.toml:

# Based on https://github.com/denoland/deno_docker/blob/main/alpine.dockerfile
FROM denoland/deno:bin-1.36.4 AS bin

FROM frolvlad/alpine-glibc:alpine-3.13

RUN apk --no-cache add ca-certificates \
   && addgroup --gid 1000 deno \
   && adduser --uid 1000 --disabled-password deno --ingroup deno \
   && mkdir /deno-dir/ \
   && chown deno:deno /deno-dir/

ENV DENO_DIR /deno-dir/
ENV DENO_INSTALL_ROOT /usr/local

ARG DENO_VERSION
ENV DENO_VERSION=${DENO_VERSION}
COPY --from=bin /deno /bin/deno

WORKDIR /deno-dir
COPY .
RUN /bin/deno task build
EXPOSE 8000
ENTRYPOINT ["/bin/deno"]


# fly.toml app configuration file generated for test-uyas on 2023-09-07T00:00:51+09:00
#
# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
#

app = "test-uyas"
primary_region = "nrt"

[processes]
   app = "task preview"

[http_service]
   internal_port = 8000
   force_https = true
   auto_stop_machines = true
   auto_start_machines = true
   min_machines_running = 0
   processes = ["app"]
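One side note for debugging this kind of intermittent behavior: with auto_stop_machines = true and min_machines_running = 0, Fly may stop idle machines, which makes it harder to tell which machine served a failing request. A sketch of settings to try while investigating (these option names appear in the fly.toml above; verify defaults against your flyctl version):

[http_service]
   internal_port = 8000
   force_https = true
   # Keep both machines running while debugging the random 404s:
   auto_stop_machines = false
   auto_start_machines = true
   min_machines_running = 2
   processes = ["app"]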

Hey there,

I’m not super well versed in fly scale count and all that it entails, but I did want to provide you with some docs that might help connect some dots and help out a bit.

  1. Scale the Number of Machines · Fly Docs
  2. Run Multiple Process Groups in an App · Fly Docs

If none of this helps, hopefully someone who understands this better than I do can answer your questions. It is definitely odd that scaling the count to 2 or higher causes issues. I’m going to keep digging, and if I find something new that would be helpful, I’ll be sure to post it here.


Other than replacing that line with COPY . ., I ran Fresh using your Dockerfile on two machines. I’m not seeing any 404s.


I’m sorry, there was a mistake in the Dockerfile in my description.

It is really written as COPY . ., not COPY .

I am very happy that you are interested in my problem.

Summarizing the oddities:

  1. It happens when the scale count is 2 or more.
  2. With a scale count of 2 or more, it runs stably starting about 10 minutes after deployment.
  3. The problem does not occur with a scale count of 1.
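For anyone trying to reproduce this, the scale count itself can be flipped with flyctl (a sketch; test-uyas is this app’s name):

# Reproduce: two machines trigger the random 404s
fly scale count 2 --app test-uyas

# Control: one machine behaves fine right after deploy
fly scale count 1 --app test-uyas

# Check how many machines are actually running
fly status --app test-uyas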

No matter how much I dug through the documentation, I couldn’t find anything that would explain these things…

Sorry, may I ask what version of Fresh you ran here?

When I reported this phenomenon, the version was @1.4.2, and @1.4.3 was released about a day ago!

I changed to version 1.4.3 on my machine, deployed again, and the 404s no longer occur. Perhaps this was an issue on the Fresh side. If the anomaly reproduces with 1.4.2 and not with 1.4.3, it is definitely caused by Fresh…

From my deno.json:

  "imports": {
    "$fresh/": "https://deno.land/x/fresh@1.4.3/",

Oh… I thought it was caused by Fly.io, because it appeared after changing Fly.io’s scale setting. I may have been lucky that a fix was released soon after I discovered this problem…

I redeployed several times, and moving from 1.4.2 to 1.4.3 is what fixed it; nothing else changed.

In case someone else runs into this problem, I’ll look into why it happens and make a note of it.

Thank you for all the communication!

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.