Problem connecting to External Database

General disclaimer - I am taking over an ops deployment script left by another dev, and I’m very inexperienced in this domain.

I am trying to connect an Elixir/Phoenix app to Crunchy Bridge. On deploy, I run a create and a migrate command via the deploy shell script. When the fly deploy GitHub Action runs, I get the following output in the Fly.io machine logs:

2024-11-25T04:13:46.471 app[x] iad [info] 2024/11/25 04:13:46 INFO SSH listening listen_address=[y]:22 dns_server=[z]:53

2024-11-25T04:13:48.292 app[x] iad [info] IP [error] Postgrex.Protocol (#PID<0.151.0>) failed to connect: ** (DBConnection.ConnectionError) ssl connect: Options (or their values) can not be combined: [{verify,verify_peer},

2024-11-25T04:13:48.292 app[x] iad [info] {cacerts,undefined}] - {:options, :incompatible, [verify: :verify_peer, cacerts: :undefined]} 

My runtime.exs file includes the following:

config :my_app, MyApp.Repo,
    ssl: true,
    url: database_url,
    pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10"),
    socket_options: maybe_ipv6,
    # username: "application",
    # password: database_pw,
    database: database_name
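
For context, database_url and maybe_ipv6 (and database_name) come from earlier in runtime.exs; a minimal sketch of that section, assuming the stock generated Phoenix template, looks like this (adjust if yours differs):

# earlier in config/runtime.exs (stock Phoenix template)
database_url =
    System.get_env("DATABASE_URL") ||
      raise """
      environment variable DATABASE_URL is missing.
      For example: ecto://USER:PASS@HOST/DATABASE
      """

# the usual ECTO_IPV6 convention for opting into IPv6 sockets
maybe_ipv6 = if System.get_env("ECTO_IPV6") in ~w(true 1), do: [:inet6], else: []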

The crunchy bridge helper provides the following instructions for Phoenix:


Step One: Set the DATABASE_URL env variable

Phoenix uses Postgres by default when you generate a new application. Using the generator below you can access the connection URL. Since Phoenix expects a URL, the format has been preset to URL.

Step Two: Set your database connection to SSL in the config/prod.secret.exs file

When connecting to your Crunchy database we enforce SSL. You will need to un-comment the following line to enable SSL in your Repo connection.

postgres://application:…{long_string}
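
As I read it, those two steps amount to pointing the Repo at DATABASE_URL and turning SSL on; a minimal sketch of what the helper expects (assuming the same app/repo names as my config above):

# Step One: read the connection URL from the environment
# Step Two: un-comment / enable SSL on the Repo
config :my_app, MyApp.Repo,
    url: System.get_env("DATABASE_URL"),
    ssl: true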


I set the database_url environment variable in the fly.io secrets, and it appears to be loading correctly into the application.

Both the create and migrate functions lead to the same error shown above. Both work locally on my machine.

  def migrate do
    load_app()

    for repo <- repos() do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  def create do
    load_app()

    Enum.each(repos(), fn repo ->
      case repo.__adapter__().storage_up(repo.config()) do
        :ok ->
          IO.puts("The database for #{inspect(repo)} has been created")

        {:error, :already_up} ->
          IO.puts("The database for #{inspect(repo)} has already been created")

        {:error, term} when is_binary(term) ->
          raise("The database for #{inspect(repo)} couldn't be created: #{term}")

        {:error, term} ->
          raise("The database for #{inspect(repo)} couldn't be created: #{inspect(term)}")
      end
    end)
  end

  defp repos do
    Application.fetch_env!(@app, :ecto_repos)
  end

  defp load_app do
    :ssl.start()
    Application.load(@app)
  end

Any suggestions for where to troubleshoot?

My problem was solved with this:

config :my_app, MyApp.Repo,
    ssl: true,
    ssl_opts: [verify: :verify_none]

I believe this was driven by changed defaults in OTP 25 (or 26): newer OTP versions default the TLS client to verify_peer, and verify_peer with no CA certificates configured is rejected, which matches the {verify, verify_peer} / {cacerts, undefined} error above.

I also solved this, @RoboZoom, by doing what you did, until I discovered the following:

config :my_app, MyApp.Repo, 
      ssl: [cacertfile: "/etc/ssl/certs/ca-certificates.crt"]

This lets you keep SSL verification enabled rather than skipping it, assuming the certificate on the other end is valid.

The more proper solution is to use ssl: [cacerts: :public_key.cacerts_get()], but I've found that the more connections you open, the more memory gets used; it seems to copy the certs into each connection process instead of referencing them once from an ETS table (or similar). I think it's a bug in the way Postgrex handles this. I went down a rabbit hole trying to find someone with answers and maybe get it fixed, but ended up just using the direct cacertfile, since it works on Fly and does not increase memory usage with each new connection (outside of the process memory itself, of course).
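
To make the two variants concrete, here is roughly what each looks like side by side (I've written verify: :verify_peer out explicitly for clarity; on OTP 26 it is the client default anyway):

# Option 1: point at the CA bundle on disk (what I ended up using on Fly)
config :my_app, MyApp.Repo,
    ssl: [verify: :verify_peer, cacertfile: "/etc/ssl/certs/ca-certificates.crt"]

# Option 2: load the OS trust store at runtime (OTP 25+); works, but I saw
# memory grow with each connection, as described above
config :my_app, MyApp.Repo,
    ssl: [verify: :verify_peer, cacerts: :public_key.cacerts_get()]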

I also noticed this change happen from OTP 25 to OTP 26.

Disclaimer: I use neon.tech for my database but I assume it would work with any valid SSL certs.
