First off, huge thanks for building the provider! Without it, I wouldn’t have even been willing to try Machines in production. Glad to hear you found my feedback useful, so here’s some more:
- Awesome, 0.0.14 fixed the empty handlers issue for me!
- I wanted to test a new machine with empty `cpus` and `memorymb`, using this block:
resource "fly_machine" "test" {
app = var.app_name
region = "ewr"
name = "${var.app_name}-test-ewr"
image = var.image_name
}
The plan completes successfully, but the apply fails with this message:
```
╷
│ Error: Failed to create machine
│
│   with fly_machine.test,
│   on main.tf line 42, in resource "fly_machine" "test":
│   42: resource "fly_machine" "test" {
│
│ Create request failed: 422 Unprocessable Entity, &{ID: Name: State: Region: InstanceID: PrivateIP: Config:{Env:map[] Init:{Entrypoint:[] Cmd:[]} Image: Metadata:<nil> Restart:{Policy:} Services:[] Mounts:[] Guest:{CPUKind: Cpus:0 MemoryMb:0}} ImageRef:{Registry: Repository: Tag: Digest: Labels:{}} CreatedAt:0001-01-01 00:00:00 +0000 UTC}
╵
```
Any idea why the request is being made with null/default values for all fields, even the ones that are explicitly set?
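For now I'm assuming the workaround is to always set the guest fields explicitly, roughly like this (attribute names taken from the plan output further down; the 256 MB value is just a placeholder):

```hcl
resource "fly_machine" "test" {
  app    = var.app_name
  region = "ewr"
  name   = "${var.app_name}-test-ewr"
  image  = var.image_name

  # explicitly set the guest sizing instead of relying on defaults
  cputype  = "shared"
  cpus     = 1
  memorymb = 256 # placeholder value
}
```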
- Figured this was the case since Machines overall feel very rough right now.
- I was able to skip the `flyctl machines api-proxy` command by updating the provider to use a custom `fly_http_endpoint`, like so:
provider "fly" {
fly_http_endpoint = "[_api.internal]:4280"
}
I see that `flyctl` can interact with Machines directly, without WireGuard (e.g. `flyctl machines list`). Can I also point the Terraform provider at that API endpoint instead of the internal one?
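If that's supported, I'd expect the config to look roughly like the sketch below; the hostname is only a placeholder, since I don't know which endpoint `flyctl` actually talks to or how the provider would authenticate against it:

```hcl
provider "fly" {
  # hypothetical: whichever non-WireGuard endpoint flyctl uses,
  # in place of the _api.internal address above
  fly_http_endpoint = "<public-machines-api-host>"
}
```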
- A new thing I’m noticing when running `terraform apply` with new resources is that some of the existing resources are being replaced, like so:
```
Terraform will perform the following actions:

  # fly_machine.coordinator["ewr"] must be replaced
-/+ resource "fly_machine" "coordinator" {
      ~ cpus     = 0 -> 1
      + cputype  = "shared"
      ~ env      = {} -> (known after apply)
      + id       = (known after apply)
      + image    = "registry.fly.io/hathora-games-coordinator:deployment-01GBT8TRAYG8QSPB41NZ4KQGE2"
      ~ memorymb = 0 -> 2048
      + name     = "hathora-games-coordinator-ewr" # forces replacement
      + region   = "ewr" # forces replacement
      ~ services = [
          + {
              + internal_port = 8080
              + ports         = [
                  + {
                      + handlers = [
                          + "tls",
                          + "http",
                        ]
                      + port     = 443
                    },
                  + {
                      + handlers = [
                          + "http",
                        ]
                      + port     = 80
                    },
                ]
              + protocol      = "tcp"
            },
          + {
              + internal_port = 7147
              + ports         = [
                  + {
                      + port = 7147
                    },
                ]
              + protocol      = "tcp"
            },
        ]
        # (1 unchanged attribute hidden)
    }
```
Seems like the two things driving the diff are `cpus` (0 → 1) and `memorymb` (0 → 2048). This is surprising because when I created that machine, I already set the CPU count to 1 and memory to 2048. Could this be because the Machine is shut down or unallocated due to inactivity, and Terraform doesn’t understand that?
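As a stopgap I’m considering telling Terraform to ignore the drift on those two attributes, though I haven’t verified that this actually avoids the replacement, since `name` and `region` are also flagged:

```hcl
resource "fly_machine" "coordinator" {
  # ... existing arguments unchanged ...

  lifecycle {
    # sketch: ignore the cpus/memorymb values reported back by the API
    ignore_changes = [cpus, memorymb]
  }
}
```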