I am currently working on a project where I need to upload an entire folder (including all its subfolders) to my Fly.io app using the Fly.io APIs. So far, my approach has been to extract all the files, encode them in Base64 format, and then push them to a Lambda function. However, this process is proving to be cumbersome and inefficient for large directories.
Here’s the current format I am using for uploading individual files:
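(The snippet below is an illustrative sketch of that format, with placeholder paths and values rather than my real data; it's the per-file entry style described above, where each file is Base64-encoded and written to a path inside the machine.)

```json
"files": [
  {
    "guest_path": "/app/config/settings.json",
    "raw_value": "eyJrZXkiOiAidmFsdWUifQ=="
  }
]
```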
Is there a more efficient way to upload an entire folder and its contents (including subdirectories) directly via the APIs? If so, could you please provide guidance or an alternative method for achieving this?
Additionally, if you have any suggestions or best practices for handling bulk file uploads to Fly.io, I would greatly appreciate your input.
That would depend on which APIs you are using (i.e., how you are starting the machines and passing the folders in).
I would probably upload them to something like Tigris or another S3-compatible provider and then have the app fetch them from there.
That way you have them on a path that you can debug yourself, and there are many tools and libraries that can download from S3.
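For example, here is a rough sketch of the fetch side using boto3 against an S3-compatible endpoint; the endpoint URL, bucket, prefix, and destination directory are placeholders, not something specific to your app:

```python
import os
import boto3  # works with any S3-compatible provider, including Tigris

# Endpoint, credentials, bucket, and prefix below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://fly.storage.tigris.dev",
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

def download_prefix(bucket: str, prefix: str, dest_dir: str) -> None:
    """Download every object under `prefix` into `dest_dir`, preserving subpaths."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("/"):
                continue  # skip folder placeholder objects
            local_path = os.path.join(dest_dir, os.path.relpath(key, prefix))
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            s3.download_file(bucket, key, local_path)

download_prefix("my-bucket", "uploads/my-folder/", "/data")
```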
I’m using this part of the config object when a machine is being created. I’m calling this from another Fly machine, so I’m free to add as many entries to this JSON structure as I like.
Could you clarify what you mean by “efficient”? The above is the correct way to do it. If you just want to automate the process, have your API-calling code walk the source folder recursively and append each file to this data structure, roughly as sketched below.
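A rough Python sketch of that automation (the app name, image, token handling, and folder paths here are placeholders, not an official SDK):

```python
import base64
import os
import requests  # any HTTP client works; requests is assumed to be installed

def build_files_entries(local_dir: str, guest_root: str) -> list[dict]:
    """Walk local_dir recursively and build one `files` entry per file."""
    entries = []
    for dirpath, _dirnames, filenames in os.walk(local_dir):
        for name in filenames:
            local_path = os.path.join(dirpath, name)
            rel_path = os.path.relpath(local_path, local_dir).replace(os.sep, "/")
            with open(local_path, "rb") as f:
                encoded = base64.b64encode(f.read()).decode("ascii")
            entries.append({
                "guest_path": f"{guest_root}/{rel_path}",
                "raw_value": encoded,
            })
    return entries

# Illustrative machine-create call; app name, image, and token are placeholders.
config = {
    "image": "registry-1.docker.io/library/nginx:latest",
    "files": build_files_entries("./my-folder", "/data"),
}
resp = requests.post(
    "https://api.machines.dev/v1/apps/my-app/machines",
    headers={"Authorization": f"Bearer {os.environ['FLY_API_TOKEN']}"},
    json={"config": config},
)
resp.raise_for_status()
```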
However, you have a whole playground of options available to you:
Create a compressed tarball (.tar.gz or .tar.bz2), upload it via this method, then extract it on the container (see the sketch after this list)
Get the container to pull a tarball from a trusted location
Implement an authenticated listener in the container that allows you to send files to it via HTTP
Use flyctl ssh sftp to write container files remotely while the machine is running
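As a concrete illustration of the first option, here is a rough Python sketch that packs the whole folder into a single .tar.gz files entry instead of one entry per file; the guest path and helper name are just placeholders, and the machine’s entrypoint would extract the archive before starting the app:

```python
import base64
import io
import tarfile

def folder_to_files_entry(local_dir: str, guest_path: str = "/data/bundle.tar.gz") -> dict:
    """Pack local_dir into an in-memory .tar.gz and return a single `files` entry."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(local_dir, arcname=".")
    return {
        "guest_path": guest_path,
        "raw_value": base64.b64encode(buf.getvalue()).decode("ascii"),
    }

# The machine's entrypoint (or an init step) would then run something like:
#   tar -xzf /data/bundle.tar.gz -C /data && rm /data/bundle.tar.gz
# before starting the app. The guest path and extraction location are placeholders.
```

Keep in mind that the Base64-encoded payload still has to fit within whatever request-size limits the Machines API enforces, so for very large folders the pull-from-object-storage or SFTP routes are probably a better fit.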