I just read @christopher-fly (Chris Fidao)'s excellent blog post on running Whisper on fly.io: Transcribing on Fly GPU Machines · The Fly Blog
This is great and works, but I'm a little stuck because I also need to run some NodeJS inside the same container.
Can anyone give me some hints as to how I might go about doing this?
This is my current Dockerfile:
FROM node:20
# Build-time configuration passed in as build args
ARG WORKER_API_KEY
ENV WORKER_API_KEY=$WORKER_API_KEY
ARG VITE_CONVEX_URL
ENV VITE_CONVEX_URL=$VITE_CONVEX_URL
# Install node
ENV NODE_VERSION=18.17.0
ENV NVM_DIR /tmp/nvm
WORKDIR $NVM_DIR
RUN apt-get update && apt-get install -y curl && \
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash && \
. "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION} && \
. "$NVM_DIR/nvm.sh" && nvm use ${NODE_VERSION}
# Set node path
ENV PATH="${NVM_DIR}/versions/node/v${NODE_VERSION}/bin/:${PATH}"
# Verify node installation
RUN node --version && npm --version
# Install bun
WORKDIR /tmp
RUN apt update && apt install -y curl unzip && \
curl -fsSL https://bun.sh/install | bash
# Set bun path
ENV PATH="/root/.bun/bin:${PATH}"
# Verify bun installation
RUN bun --version
# Install JS dependencies
WORKDIR /app
COPY package.json bun.lockb tsconfig.json ./
RUN bun install --frozen-lockfile
COPY shepherd.sh .
ADD convex ./convex
ADD shared ./shared
ADD worker ./worker
# Expose port 3000 to the outside world
EXPOSE 3000
ENTRYPOINT ["bun", "run", "worker"]
Much as I’d love to take credit for Chris’ work, you want @fideloper-fly / @fideloper, not me.
Oh sorry, that was a bit of a guess.
In the end this works, though I'm pretty sure it's not the right way of doing it:
FROM onerahmet/openai-whisper-asr-webservice:latest-gpu
# Build-time configuration passed in as build args
ARG WORKER_API_KEY
ENV WORKER_API_KEY=$WORKER_API_KEY
ARG VITE_CONVEX_URL
ENV VITE_CONVEX_URL=$VITE_CONVEX_URL
# Install node
ENV NODE_VERSION=18.17.0
ENV NVM_DIR /tmp/nvm
WORKDIR $NVM_DIR
RUN apt-get update && apt-get install -y curl && \
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash && \
. "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION} && \
. "$NVM_DIR/nvm.sh" && nvm use ${NODE_VERSION}
# Set node path
ENV PATH="${NVM_DIR}/versions/node/v${NODE_VERSION}/bin/:${PATH}"
# Verify node installation
RUN node --version && npm --version
# Install bun
WORKDIR /tmp
RUN apt-get update && apt-get install -y curl unzip && \
curl -fsSL https://bun.sh/install | bash
# Set bun path
ENV PATH="/root/.bun/bin:${PATH}"
# Verify bun installation
RUN bun --version
# Install JS dependencies
WORKDIR /app
COPY package.json bun.lockb tsconfig.json ./
RUN bun install --frozen-lockfile
COPY shepherd.sh .
ADD convex ./convex
ADD shared ./shared
ADD worker ./worker
# Expose port 3000 to the outside world
EXPOSE 3000
EXPOSE 9000
# Run both commands
CMD sh -c "bun run worker & gunicorn --bind 0.0.0.0:9000 --workers 1 --timeout 0 app.webservice:app -k uvicorn.workers.UvicornWorker"
# Whisper-only CMD, kept for reference:
#CMD gunicorn --bind 0.0.0.0:9000 --workers 1 --timeout 0 app.webservice:app -k uvicorn.workers.UvicornWorker
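A slightly safer variant I considered is a small start script, so the Machine exits (and can be restarted) if either process dies. A sketch, where start.sh is a name I made up:

#!/bin/bash
# start.sh (hypothetical name): run both processes together
bun run worker &
gunicorn --bind 0.0.0.0:9000 --workers 1 --timeout 0 app.webservice:app -k uvicorn.workers.UvicornWorker &
# wait -n (bash 4.3+) returns as soon as the first background job exits
wait -n
# propagate that exit status so the container stops instead of running half-alive
exit $?

Then COPY start.sh into the image and point CMD at it instead of the sh -c one-liner.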
Thanks rubys, yeah, I probably should have made it clear in my OP that I wanted to run them on a single machine: the node worker code is going to be very light, so a separate machine would be a waste. I also want to scale the machines by cloning them via the API, so I didn't want to complicate that by having to worry about multiple “kinds” of machines.
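For reference, the cloning itself can be done with flyctl or against the Machines API directly, roughly like this (a sketch: the machine ID, app name, and image are placeholders):

# via flyctl
fly machine clone 17814e3b990518 --region ord

# or via the Machines API
curl -X POST "https://api.machines.dev/v1/apps/my-whisper-app/machines" \
  -H "Authorization: Bearer ${FLY_API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"config": {"image": "registry.fly.io/my-whisper-app:latest"}}'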
rubys · June 12, 2024, 11:03pm
Then check out (on the same page):
Just use Bash
Use Supervisord (see the sketch after this list)
Use a Procfile Manager
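The Supervisord route, for instance, boils down to a config along these lines (a sketch; the two commands are lifted from your Dockerfile, everything else is assumed):

[supervisord]
nodaemon=true

[program:worker]
directory=/app
command=bun run worker

[program:whisper]
command=gunicorn --bind 0.0.0.0:9000 --workers 1 --timeout 0 app.webservice:app -k uvicorn.workers.UvicornWorker

with a CMD that runs supervisord -c /path/to/supervisord.conf.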
Since you are using JS, I’ll add another option: concurrently. Install it using bun (or npm, or whatever), add a new line to the “scripts” section in your package.json, and update the CMD to run the new script.
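For example (a sketch; “start:all” is a name I’m inventing here, and --kill-others makes either process dying take the other down with it):

In package.json:

"scripts": {
  "start:all": "concurrently --kill-others \"bun run worker\" \"gunicorn --bind 0.0.0.0:9000 --workers 1 --timeout 0 app.webservice:app -k uvicorn.workers.UvicornWorker\""
}

And in the Dockerfile:

CMD ["bun", "run", "start:all"]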
Yep, that's great, thanks rubys. I think my main difficulty was the actual merging of the two Dockerfiles; I wish Docker made this simpler. It's okay though, as I mentioned, I managed to solve it in the end.
Really appreciate your time here.
system · Closed · June 20, 2024, 12:33am
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.