Hello there!
I intend to use fly.io for large file processing in my application. In some cases the files aren't suitable for streaming while processing, so I would need to store them on a file system accessible from the machine the application is running on. These files can get quite large, and at peak the space needed can be almost twice the file's size (input plus working output at the same time). That could mean needing 10 or even 20 GB of free space for processing in some cases.
Realistically I only need the storage for a couple of minutes (15 minutes is already a pretty big stretch), but as far as I can tell from the documentation, volumes are pro-rated per hour, so it feels like I would be paying for far longer than I actually use the storage. On the other hand, a machine's rootfs apparently only grows to ~8 GB (and technically isn't really meant for writing anyway). Am I completely missing the right approach here? Or should I just go with persistent volumes and manage them somehow, to make the most of the paid hour once a volume is emptied?
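To put rough numbers on the pro-rating concern, here is a back-of-envelope sketch. The $0.15/GB/month rate and the 730 hours/month divisor are assumed example figures for illustration, not quoted prices:

```python
# Cost of hourly pro-ration for a short-lived volume.
# GB_MONTH_RATE is an assumed example price, not an actual quote.
GB_MONTH_RATE = 0.15
HOURS_PER_MONTH = 730  # common billing approximation for one month

def volume_cost(size_gb: float, billed_hours: float) -> float:
    """Cost of holding `size_gb` of volume for `billed_hours`."""
    return size_gb * GB_MONTH_RATE / HOURS_PER_MONTH * billed_hours

billed = volume_cost(20, 1.0)   # 15 minutes of use still bills a full hour
used = volume_cost(20, 0.25)    # what the actual usage "costs"
print(f"billed ${billed:.5f} per job, overpay factor {billed / used:.0f}x")
```

So a 15-minute job on a 20 GB volume overpays by roughly 4x per hour billed, though the absolute amount per job stays small at these assumed rates.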
(Edit: Thinking about it some more, my current direction is to create a new volume together with each new machine, then delete the machine but keep the volume, since it's already paid for up to the hour. That way another machine could pick it up for the next processing job, and after an hour I would clean up any surplus volumes. Is there anything I can improve about this line of thinking?)
Any suggestions and help are appreciated, thank you for reading!