Trying to run an LLM-generated sandbox on Fly.io - how to make this work?

I need to build a development environment on Fly.io capable of running full-stack Node.js applications generated by large language models (LLMs). The environment should support rapid boot times (ideally under 3 seconds) and restart the application immediately when the code changes. The goal is to streamline the iterative loop: generate code, test it, and deploy it again quickly.

I should think that would be possible. However, I'd get a prototype working in Docker on your laptop first.
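As a rough sketch of that local prototype: run the generated app from a bind-mounted directory under nodemon, so every time the LLM rewrites a file the process restarts on its own. The directory name, port, and entrypoint below are assumptions, not from the thread.

```shell
# Hypothetical local prototype: the LLM writes into ./generated-app on the
# host; the container restarts the Node.js process whenever a file changes.
docker run --rm -it \
  -v "$PWD/generated-app":/app \
  -w /app \
  -p 3000:3000 \
  node:20-slim \
  npx nodemon --watch . index.js
```

Once the restart-on-change loop feels right locally, the same image can become the basis for the Fly Machine.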

My initial thought is that if you have AI writing code, it may be quicker to keep some pre-started Fly Machines around and copy the code into a running machine via rsync, rather than supplying the code to the machine at creation time via the Machines API.
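A hedged sketch of that "pre-started machine + rsync" flow, assuming an app named `llm-sandbox`, a machine built from an image that runs sshd on port 22, and code living in `/app` — the app name, port, and paths are all assumptions, not from the thread:

```shell
# Forward a local port to the machine's sshd over Fly's private network.
fly proxy 10022:22 -a llm-sandbox &

# Push the freshly generated code into the already-running machine.
rsync -az --delete -e "ssh -p 10022" ./generated-app/ root@localhost:/app/
```

If the machine runs the app under a watcher like nodemon pointed at `/app`, the rsync alone triggers the restart, so there is no boot cost at all between iterations.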

It’s certainly possible. Any specific questions?

I have a 100% Augment-generated oceanography visualization and analytics system running here on Fly and it’s been an awesome experience. Ongoing maintenance and operations of Fly infra is also completely AI-managed. https://bluegraph.io

In fact, during my architectural & implementation planning with Augment it was Augment itself that recommended Fly based on my desired operational model (Lambda or ECS Fargate-like) and that’s how I ended up in this message forum!
