{'error': 'invalid config.guest.memory_mb, cannot exceed 0 MiB'} when trying to make a deployment

I am trying to create a deployment, following this article, via the Machines API:

For now I don’t care about the private access / internal IPs, as I just want to get this to work.

I have tried changing the memory values, but that just produces other odd errors.
Can anyone point me in the right direction?

    url = f"https://api.machines.dev/v1/apps/{app_name}/machines"

    payload = {
        "config": {
            "region": "ord",
            "init": {},
            "image": f"registry-1.docker.io/ollama/ollama:latest",
            "auto_destroy": True,
            "restart": {"policy": "always"},
            "mounts": [
                {
                    "add_size_gb": 10,
                    "name": "ollama",
                    "path": "/root/.ollama",
                    "size_gb": 10,
                    "size_gb_limit": 10,
                }
            ],
            "services": [
                {
                    "internal_port": 11434,
                    "protocol": "tcp",
                }
            ],
            "guest": {
                "cpu_kind": "shared",
                "cpus": 2,
                "memory_mb": 1024,
                "gpus": 1,
                "gpu_kind": "a100-40gb",
            },
        }
    }

    response = requests.post(url, json=payload, headers=headers)

    print(response.json())
    return response.json()

Hi @AdonisCodes, that error message is definitely confusing; off the top of my head, I recall that GPU Machines need the performance flavour of CPU, and the RAM floor for performance vCPUs is 2 GB per core (I’m looking at Fly.io Resource Pricing · Fly Docs).
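
If that’s what’s biting you here, a minimal sketch of the adjusted `guest` block would look like the snippet below (assuming GPU Machines do require performance vCPUs and the 2 GB-per-performance-core floor from the pricing page; the rest of your payload stays the same):

    # Sketch: swap shared CPUs for performance CPUs and raise memory_mb
    # to at least 2048 MB per performance vCPU (2 cores -> at least 4096 MB).
    guest = {
        "cpu_kind": "performance",   # was "shared"
        "cpus": 2,
        "memory_mb": 4096,           # was 1024; 2 GB x 2 performance vCPUs
        "gpus": 1,
        "gpu_kind": "a100-40gb",
    }

    payload["config"]["guest"] = guest  # drop this into the payload from your post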

Yes. That was correct.
Sorry for taking so long to reply!
