Certificate issuance failure when using terraform

Hi there,

I am using the fly terraform module to do the following (rough config sketch below):

  • create an app
  • create a certificate for the app
  • add a DNS validation CNAME record for the certificate to cloudflare
  • add a CNAME record for the app itself to cloudflare
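
For context, the relevant resources look roughly like the sketch below. The app name, hostname, and attribute names (in particular the fly_cert DNS-validation outputs and the cloudflare_record arguments) are written from memory and may differ between provider versions, so please treat this as illustrative only:

    # Rough sketch only – names and attributes may not match your provider versions.
    resource "fly_app" "staging" {
      name = "api-example-staging"   # hypothetical app name
      org  = "personal"
    }

    resource "fly_cert" "staging" {
      app      = fly_app.staging.name
      hostname = "staging.example.com"   # hypothetical hostname
    }

    # DNS validation CNAME for the certificate
    resource "cloudflare_record" "cert_validation" {
      zone_id = var.cloudflare_zone_id
      name    = fly_cert.staging.dnsvalidationhostname
      value   = fly_cert.staging.dnsvalidationtarget
      type    = "CNAME"
    }

    # CNAME for the app itself
    resource "cloudflare_record" "app" {
      zone_id = var.cloudflare_zone_id
      name    = "staging"
      value   = "${fly_app.staging.name}.fly.dev"
      type    = "CNAME"
    }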

This has worked successfully; however, because I am in the process of developing the Terraform config, I am of course having to delete and recreate the certificates & CNAMEs a few times.

One thing I noticed during this process: recreating a certificate with the same name would be marked as “Verified” for domain ownership in the dashboard, but the certificates would not be issued, and both the ECDSA & RSA certificates would have a red dot next to them. However, creating a new certificate with a slightly different name (e.g. staging2 vs staging) would succeed, with both verification and the certificates showing green in the dashboard.

Is it possible that this is caused by a Let’s Encrypt rate limit? It would be great to know what the specific issue is!

If it helps, the app in question is api-kbcadvisors-staging

Thanks for the report. I’m looking into it on the tf side, and I’ll ping someone to ask about the Let’s Encrypt side.

@DAlperin I don’t think there’s any issue with the tf module; that’s working great as far as I can tell. The reason I need to destroy and recreate is, I believe, that the fly API doesn’t allow editing certificates.

Also thanks for your work in creating and developing the tf module! :slight_smile:

Hello @eadmundo, I am trying to do the same with apps whose images a remote builder builds from Dockerfiles when the apps are launched with flyctl launch.
Currently, I am trying to refactor the nice simple example into Terraform; that example also uses a Dockerfile to build an image and launches it as a machine within an application.

Is your Terraform running machines within an application that it has created beforehand, or how are you building and running the application using Terraform? Would you mind sharing the (relevant parts of) your .tf configuration? Thank you.

I’m not @eadmundo but I’m gonna jump in real quick :wink:
I’m actually writing up a whole bunch of tutorials that address this; here are the two main ways I’ve dealt with this problem.

  1. As part of your CI step you can build your Docker image on something like GitHub Actions and push it to the fly.io registry (docker image push registry.fly.io/yourimagename:latest), then create your Terraform resources with that image. The downside is that you need to destroy and create a new machine to update the code.
  2. A slightly different way that makes things a little easier to update (less destroying and recreating): if as part of your CI you build and push your image with a commit-specific tag (docker image push registry.fly.io/yourimagename:gitsha), you can then use a Terraform variable for the image name and pass it in at the command line, pointing at the latest image (rough sketch below).
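
To make option 2 concrete, here is a rough sketch, assuming the fly_machine resource from the fly provider (attribute names are from memory and may differ in your provider version; the app and image names are placeholders):

    variable "image" {
      description = "Full image reference, e.g. registry.fly.io/yourimagename:gitsha"
      type        = string
    }

    resource "fly_machine" "app" {
      app    = "yourappname"   # placeholder app name
      region = "ewr"           # pick your region
      name   = "app-machine"
      image  = var.image       # points at whatever tag your CI just pushed
    }

Each deploy then becomes something like terraform apply -var="image=registry.fly.io/yourimagename:gitsha", pointing at the tag your CI just pushed.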

I’m still playing around with this and experimenting with ways to make it more integrated with Terraform, but for now I’d probably recommend option 2.

I also know flyctl is going to have some better tools added for working with machines in the coming weeks, which might make things easier.

Thanks @DAlperin for looking into all this, as well as for your suggestions. Maybe we should move this discussion to a new topic so as not to hijack this thread.

But briefly: fly launch with its capability to build apps remotely from Dockerfiles on builders that run on fly.io fits my use case perfectly. It looks like a good fit for the workflow of Terraform as well. Can we exploit that capability by extending the fly_app resource in the TF provider?
Or, as fly machine launch --dockerfile ... supports building images remotely on fly.io as well, can we extend the fly_machine resource in the Terraform provider accordingly, too? (A rough, purely hypothetical sketch of the idea follows.)
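
To make the idea concrete, something along these lines is what I have in mind. To be clear, none of this exists in the provider today; the build block and its arguments are purely hypothetical syntax:

    # Purely hypothetical syntax – the fly provider does not support this today.
    resource "fly_machine" "hello" {
      app    = "fly-machine-hello"
      region = "fra"

      # Imagined equivalent of `fly machine launch --dockerfile ./Dockerfile --remote-only`:
      # build the image on a fly.io remote builder and push it to the registry,
      # instead of requiring a pre-built image reference.
      build {
        dockerfile  = "./Dockerfile"
        remote_only = true
      }
    }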

That would remove much of the complexity that comes with having to keep track of actions in build pipelines, images in the registry, etc.
However, using the simple example referenced yesterday as a baseline for converting it to fly_machine in Terraform, I got stuck while trying to push the image to the registry after building it remotely from the Dockerfile using the command fly machine launch --build-only --push --generate-name --dockerfile ./Dockerfile --no-cache --no-deploy --remote-only and variations of it. New apps with auto-generated names get created, but it looks like the remote build does not happen, and therefore no image is pushed to the registry. But I will give it another try today.

Update: The image is built once I do fly deploy after the fly machine launch ... above, or if I omit --no-deploy there, e.g. fly machine launch --push --image-label flyMachineHello --remote-only --now -r fra
Then the image does get pushed to the registry (see fly image show), and the VM runs; e.g. Node’s console logs hello world before it terminates (see fly logs). I can re-run it with fly machine start <machine ID>.
Now I am trying to figure out how to cast this into a Terraform config, and what the fly_machine resource might still be missing.

P.S. This raises the question: what is the purpose of the --build-only option, if the image cannot be built and pushed without actually deploying and running it? Apparently, replacing --now with --build-only above has no effect.

P.S.2: Actually, --now combined with --build-only does have an effect, namely the image gets built, pushed, and deployed, but the VM is not scheduled yet. The VM runs only if --build-only is omitted. The VM can be launched with
fly machine run registry.fly.io/fly-machine-hello:flyMachineHello
Each launch pulls the image from the registry and creates a new machine within the application (see fly image show). An existing VM can be restarted with fly machine start <machine ID>.

hi @hb9cwp - sorry for the slow reply. We are just doing it as simply as possible! Here’s the .tf config in the repo https://github.com/autotelic/fly-terraform-test/blob/main/main.tf

We are creating an empty app, and then once all the resources have been created for this test we use fly config save (fly config save · Fly Docs) to create the fly.toml, although for the real version of this we template that file into the repo.

Once we have a fly.toml in the repo, we use a GitHub Action to do the first deployment of the app, as described in Continuous Deployment with Fly.io and GitHub Actions · Fly Docs.

The whole repo is intended to be templated from a Backstage instance (https://backstage.io/docs/features/software-templates/software-templates-index), which would generate the repo, create the resources, and deploy the app using GitHub Actions.

I’m looking at the Machines API for a future iteration of the config, but haven’t got there yet.
