We’re investigating some other platforms right now; I’ve used Fly.io on another project and wanted to see how well it works with this one. However, the first build failed, I think because it didn’t check out the Git submodule that our app requires.
But the problem is that it left our app in a strange state: there is no fly.toml configuration, and I can’t deploy from the command line because flyctl wants to create a new app. My only option is to try a different commit, which won’t solve the problem.
I also can’t find any information online about what to do to get it to load the submodule properly.
Any guidance here would be helpful, to either:
- tell me if there is a configuration option to enable submodules,
- tell me if we can pull the compiled build from another Docker registry instead of building on Fly, or
- tell me if there is a way to modify the configuration in this pre-first-deploy state.
Hi… Were you using the Launch UI? As far as I know, that’s still considered experimental, and I doubt that it supports Git submodules.
In general, you shouldn’t put too much weight on having a particular app name, but you can try fly apps destroy followed by fly launch with --name, and see if that gives the desired result.
If it’s a public registry, then fly deploy --image will work, but you will need the fly.toml first. (That file isn’t really that hard to create by hand, incidentally.)
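For reference, a minimal hand-written fly.toml can be as small as this (the app name, region, and port below are placeholders to adapt):

```toml
# Hypothetical values; substitute your own app name, region, and port.
app = "my-app"
primary_region = "iad"

[http_service]
  internal_port = 8080
  force_https = true
```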
If it’s a private registry, then you will need to copy it over to registry.fly.io first.
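Sketching that copy step, assuming a hypothetical app named my-app and an image hosted at a made-up ghcr.io path (fly auth docker lets your local Docker push to registry.fly.io):

```shell
# Authorize Docker against Fly's registry.
fly auth docker

# Pull from the private registry (example URL), retag, and push.
docker pull ghcr.io/acme/my-app:latest
docker tag ghcr.io/acme/my-app:latest registry.fly.io/my-app:latest
docker push registry.fly.io/my-app:latest

# Then deploy the pushed image.
fly deploy --image registry.fly.io/my-app:latest
```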
I tend to think of Fly as a Docker platform, notwithstanding attempts to automate this (I assume that’s what @mayailurus refers to).
So your questions don’t seem to me to be the right ones; if you can get it to build in Docker, there’s a good chance it will work in Fly. In other words, there does not need to be a Fly configuration to do stuff with Git or Git submodules, since you can (and should) do that in Docker.
However, you mention a compiled build in the context of Docker, so maybe you’re not too far away. Does your app work in Docker locally, including all the submodule-specific stuff?
Ah, I wonder if a Dockerfile fix would be in order, in that case. If you can only deploy from your local console, then you will bump into problems when you move to deploying from CI.
I’d be curious to tease out some more meaning here. In Docker, one would generally clone a repo, and then run git submodule update --init to pull the submodules in as well. These would be committed to a deployable image (one can use a multi-stage build so that Git is not deployed along with the code). So I am a bit hazy on what you mean by “files [not] available on disk” since, by definition, when making an immutable build artifact, one needs all the code that is to be deployed.
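A sketch of the multi-stage pattern I mean (the repo URL, runtime image, and commands are all made up for illustration):

```dockerfile
# Stage 1: fetch the code, submodules included. Git exists only in this stage.
FROM alpine/git AS source
WORKDIR /src
RUN git clone --recurse-submodules https://example.com/acme/app.git .

# Stage 2: the deployable image; Git is not shipped with it.
FROM node:20-slim
WORKDIR /app
COPY --from=source /src .
RUN npm ci --omit=dev
CMD ["node", "server.js"]
```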
(You may be an expert, and you may be entirely happy with your set-up, but my habit of asking follow-up questions sometimes results in a happy improvement for the question asker. Of course I cannot know whether that is the case here.)
I have never cloned a repository inside a Dockerfile, so I am not following what you are saying. Docker uses the files you already have on disk, whether they come from a preexisting clone or not (Git is in no way required for Docker).
The main thing is that you need to ensure that your submodules are present in your image. Would you show your Dockerfile?
I wonder if you’ve always relied on your local dev state to create the conditions for a production image. You can replicate that in CI if you do your git clone and git submodule update outside your Dockerfile. So, narrowly, yes, you don’t need Git in Docker. But you do need to snapshot your exact app state in Docker, which may not be an entirely different thing.
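Concretely, the “outside the Dockerfile” approach in a CI job looks something like this (the repo URL and image tag are placeholders):

```shell
# Fetch the exact app state, submodules included, before the build.
git clone --recurse-submodules https://example.com/acme/app.git
cd app

# Or, for an already-existing checkout:
# git submodule update --init --recursive

# The Dockerfile then only needs to COPY files that are now on disk.
docker build -t my-app:ci .
```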
I’m confused by this whole conversation. Can you show me a Dockerfile that clones the repository inside the image? I’ve never seen that before. You have to have the files available locally in order for it to build. It’s common for automatic builders, and CI in general, NOT to check out submodules by default; you have to configure them to do so. That was my original question. It actually has nothing to do with Docker itself, other than the fact that our Dockerfile depends on files from our submodule (which, again, only has to do with the fact that the file wasn’t there, not with the method of getting that file, which in this case is Git).
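For what it’s worth, this is the kind of toggle I mean. Taking GitHub Actions as one example of such a CI system, actions/checkout skips submodules unless you opt in explicitly:

```yaml
# .github/workflows/build.yml (sketch; only the checkout step shown)
steps:
  - uses: actions/checkout@v4
    with:
      submodules: recursive   # off by default; 'true' fetches one level deep
```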
I wonder if you’ve always relied on your local dev state to create the conditions for a production image.
Yes, you of course have to rely on a dev state to build the image. Unless you’re able to clone inside a Dockerfile, which sounds a bit outside of what Docker was meant to do.