I was wondering how fly deploy --detach works exactly, so I fed the whole flyctl codebase to Gemini and asked it to explain. Here is the output. I'm posting it here so other people can find it via Google and ChatGPT. Currently there is very little documentation about this flag. In my opinion, the fact that this flag disables rollbacks on failed health checks should be noted in the CLI help description.
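For context, the only official description I could find is the one-line help text for the flag:

```sh
# The flag in question:
fly deploy --detach

# The only documentation I could find is the one-liner in `fly deploy --help`:
#   --detach   Return immediately instead of monitoring deployment progress
```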
Output redacted because it was wrong. --detach does not do anything.
Hm… You’re right about the lack of documentation, of course, but I wouldn’t believe any of the text that you pasted without specific, concrete evidence to back up each of those claims…
Places in the source code where it exits early after checking the --detach flag. That kind of thing.
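Something with roughly this shape, I mean. (A made-up sketch of what that kind of evidence would look like, not actual flyctl code:)

```go
// Hypothetical sketch only, not real flyctl source. What I'd want to see
// quoted is an early return gated on the flag, before monitoring starts.
package main

import (
	"context"
	"fmt"
)

// Stand-ins for the real deployment plumbing.
func createRelease(ctx context.Context) error   { fmt.Println("release created"); return nil }
func watchDeployment(ctx context.Context) error { fmt.Println("monitoring deployment..."); return nil }

func deploy(ctx context.Context, detach bool) error {
	if err := createRelease(ctx); err != nil {
		return err
	}
	if detach {
		// If --detach were honored, the command would bail out here and
		// leave the rest of the rollout to the platform, not to flyctl.
		return nil
	}
	return watchDeployment(ctx)
}

func main() {
	_ = deploy(context.Background(), true)
}
```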
What’s there currently looks like an extrapolation of a single sentence, “Return immediately instead of monitoring deployment progress”, into several paragraphs.
This part in particular needs justification. Where is this other part of the Fly.io platform that does orchestration after flyctl exits?
I don’t know. This is what Gemini 2.5 Pro told me after I fed the codebase to it. It could be a hallucination. I pasted it here so someone can correct it if it’s wrong.
I think the old blue Machines could still be stopped even after the CLI exits; the Machine itself could do that without any orchestration. I can imagine the Fly sidecar inside the Machine has an internal timeout and it destroys itself after the kill_timeout.
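(By kill_timeout I mean the per-Machine setting from fly.toml; the values below are just for illustration.)

```toml
# fly.toml (illustrative values only)
app = "my-app"

# How long the platform waits after sending kill_signal before
# hard-stopping the Machine. My guess is that this is enforced on the
# Machine/host itself, with no orchestration needed from flyctl.
kill_signal  = "SIGTERM"
kill_timeout = "5s"
```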
I’m not the boss of the forum here, but the ultra-confident tone going on for two pages followed by “but it could be a hallucination” at the very end is really not productive, in my view.
I tried a blue-green deployment with --detach, and the CLI detected failing green Machines just fine. (The older blue ones were (correctly) left running and serving traffic. No manual intervention was needed.)
As far as I can tell, it ignored the --detach flag completely—and that matches what I see in the flyctl source code, at a glance.
That was with an older version of flyctl, but this is another example of what I mean by actual evidence…
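For anyone who wants to repeat the experiment, it was roughly this (from memory, so double-check the flags against fly deploy --help):

```sh
# Deploy with the blue-green strategy, passing --detach, using an image
# whose health checks are known to fail:
fly deploy --strategy bluegreen --detach

# Then watch what actually happens to the green and blue Machines:
fly status
fly machines list
```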
Sorry, it was not clear that the initial response was from Gemini. I thought it was correct, but it seems it’s not. It looks like --detach does not do anything, as you said.
I have been using little bits of AI integrated into my IDE for a year or so. However, in the last fortnight I tried asking ChatGPT some technical questions about a time-series database problem, and it was very interesting.
Initially it was helpful, and it produced a summary of my architectural choices that was suitable for my learning level. I then asked some more queries, and each time it suggested something, I made technical changes, incurring a time cost. It then took me on a merry dance, right back to the start, costing me several more rounds of work.
I then noticed something odd on one screen, which the AI should have spotted. I asked about it, and it hallucinated twice more before finally giving me the correct answer: first an obviously unhelpful answer, then one using a query keyword that did not actually exist. It got me to the right answer eventually, but not without a lot of prompt fiddling and a lot of frustration.
I will timebox my AI usage earlier for the next couple of years. It’s addictive to get tailored “help” so quickly, but I am not sure it’s a net time-saver for me yet.
Interesting… My actual job uses high-dimensional constrained optimization solvers, so the “merry chase” part I am all too familiar with…
Several years ago, people kept trying and trying to get it to route a 1000km long project along the coast, but it instead kept putting everything in the interior—along the foothills of the mountains.
(For good reason, as it turned out. But you can’t trust that categorically.)
Personally, I hope the “chat” metaphor tapers off soon… LLMs don’t use hyperplanes, but I think those would be a better intuition for people to start to understand the weaknesses and strengths. Fly.io is a small subset of a space dominated by AWS, so a model like this is going to make the trade of getting AWS mostly right but getting Fly.io rather drastically wrong in many places. (To the extent of not even reading the source code that it was given, like we saw.)
Whatever the next one is, I hope it takes better advantage of the progressive, lattice structure of high-dimensional space. With small partial solution A and small partial solution B, you can often combine them (a lub/supremum operation in the lattice) without many conflicts. Think about how some people lay everything out on the workbench or in a ZUI, and then work a little on the engine, then a little on the chassis, and so on.
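As a toy illustration of that lub step (my own sketch, not tied to any particular system): treat each partial solution as a partial assignment, and the join succeeds only where the two pieces agree.

```go
// Toy least-upper-bound (join) on partial solutions, modeled as partial
// assignments from variable name to value.
package main

import "fmt"

type Partial map[string]string

// Join merges two partial solutions. It succeeds only when they agree on
// every variable they both assign, i.e. when the lub exists in the lattice.
func Join(a, b Partial) (Partial, bool) {
	out := Partial{}
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		if prev, ok := out[k]; ok && prev != v {
			return nil, false // conflict: these two pieces have no lub
		}
		out[k] = v
	}
	return out, true
}

func main() {
	engine := Partial{"engine": "v8", "fuel": "diesel"}
	chassis := Partial{"chassis": "steel", "fuel": "diesel"}

	if merged, ok := Join(engine, chassis); ok {
		fmt.Println("combined:", merged)
	} else {
		fmt.Println("conflict: cannot combine these partial solutions")
	}
}
```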
That’s a very non-linear (and not particularly textual) form of interaction, though. More like Darcs virtuoso master-class ensemble performance…