I am currently implementing file uploads in my app, written in
ASP.NET Core. According to the MS Docs, if a file is larger than 64 KB it will be buffered to disk.
I am attempting to understand the scaling limits of this approach on Fly from a disk-capacity point of view, i.e. how much temporary disk space are the Firecracker VMs allocated? I don't expect to need to handle massive concurrent uploads, at least not any time soon, but I would like to know roughly what the ceiling is.
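The buffering behaviour the MS Docs describe (hold small requests in memory, spill to a temp file once they cross a threshold) is the same mechanism as Python's `SpooledTemporaryFile`. A quick sketch of the idea — this illustrates the general spill-to-disk pattern, not the actual ASP.NET Core implementation:

```python
import tempfile

# Spill-to-disk buffer: stays in memory up to max_size bytes, then
# rolls over to a real temp file on disk (analogous to ASP.NET Core's
# 64 KB default buffering threshold).
buf = tempfile.SpooledTemporaryFile(max_size=64 * 1024)

buf.write(b"x" * 1024)            # 1 KB: still an in-memory buffer
print(buf._rolled)                # False (_rolled is a CPython internal,
                                  # used here only to show the state)

buf.write(b"x" * (64 * 1024))     # pushes total past 64 KB: spills to disk
print(buf._rolled)                # True
buf.close()
```

Once the buffer rolls over, every concurrent large upload is occupying real disk space in the VM's temp directory, which is why the ephemeral-disk ceiling below matters.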
According to this answer, it's 5 GB:
Fly provides each instance with 5GB of storage
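To turn that 5 GB figure into a rough concurrency ceiling: the max-upload size and the space reserved for the rootfs below are my own illustrative assumptions, not Fly numbers.

```python
# Back-of-envelope ceiling estimate. The 100 MB per-upload limit and
# the ~2 GB set aside for the rootfs/OS image are assumptions.
disk_gb = 5
reserved_gb = 2            # rootfs, logs, etc. (assumed)
max_upload_mb = 100        # app-specific upload limit (assumed)

free_mb = (disk_gb - reserved_gb) * 1024
ceiling = free_mb // max_upload_mb
print(ceiling)             # → 30 worst-case concurrent uploads
```

In practice the real number depends on how much of the 5 GB the image itself consumes, so checking free space on a running instance (e.g. `df -h /tmp` over `fly ssh console`) is the honest measurement.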
NGINX is a very fast proxy server that has a built-in caching system — useful if you want a caching CDN for your users, or a way to cache objects inside Fly for fast and repeated access from your applications.
The first and simplest configuration is to use a set of ephemeral instances based on
GitHub - fly-apps/nginx: A fly app nginx config — this will load up each instance with NGINX, and the nginx.conf has a line to set the origin URL:
set $origin_url https://example.org;
We can modify…
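For context on the caching the intro mentions: NGINX's built-in cache is configured with `proxy_cache_path` and `proxy_cache`. A hedged sketch of what a caching variant of that config might look like — the cache path, zone name, sizes, port, and resolver are my own illustrative values, not taken from the fly-apps/nginx repo:

```nginx
# Illustrative only: cache responses from the origin on local disk.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=origin:10m
                 max_size=1g inactive=60m;

server {
    listen 8080;

    # A resolver is required because proxy_pass uses a variable.
    resolver 8.8.8.8 ipv6=off;

    location / {
        set $origin_url https://example.org;
        proxy_pass $origin_url;
        proxy_cache origin;
        proxy_cache_valid 200 10m;
    }
}
```

Note that on ephemeral instances the cache at `/var/cache/nginx` disappears on redeploy, which is exactly the behaviour discussed below.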
The recommended solution is to attach a volume. Because …
It’s not! We actually don’t intend for the ephemeral disk space to be written to. It’s writable just because so much stuff breaks without it, but we’d make it readonly if we could.
You can use autoscaling with persistent volumes, though, so that’s worth a try. We’ll have a better way to autoscale with disks in a few months.
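Attaching a volume is a `fly volumes create` plus a `fly.toml` change. A minimal sketch — the volume name `data` and the mount path are examples, and the volume must exist in each region you scale into:

```toml
# fly.toml — mount a persistent volume.
# Create it first with: fly volumes create data --size 10
[mounts]
  source = "data"
  destination = "/data"
```

With this in place, anything written under `/data` survives restarts and redeploys, unlike the ephemeral rootfs.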
OK, that's great. Kurt then goes on to say it's fine for scratch space, which is exactly my use case: the files only sit there temporarily before being stored in my storage layer.
Oh yeah! We're totally fine with that. The problem is mostly that it's a surprising UX for people who don't know how it works. If a VM crashes, it comes back with a fresh rootfs image.
Ephemeral disks on AWS retain files between restarts.
This is all to say we probably won’t make it so you can run larger rootfs mounts, but you’re totally fine writing to it for scratch space.