Early look: PostgreSQL on Fly. We want your opinions.

I’m new to Fly and having some issues with connections (I’m using Windows).

I’ve set up the default Postgres instances but I’m having difficulty with the host address (while connected over WireGuard).

I was finally able to connect with the psql shell, but I had to use one of the app instance addresses to connect. APP_NAME.internal doesn’t resolve as the host; it gives an ‘unknown host’ error. The example from the docs, “psql postgres://postgres:secret123@appname.internal:5432”, doesn’t work for me.
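For reference, this is roughly what I’m trying (the app name and password are the placeholders from the docs example):

# .internal names only have AAAA (IPv6) records, so query for those
dig +short aaaa appname.internal

# if that returns an address, the connection string from the docs should work
psql "postgres://postgres:secret123@appname.internal:5432"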

I’ve tested with the privatenet example on GitHub and the app instances are correctly displayed.

Am I missing something?

Thanks
Steve

As an update, for some reason I can now connect through the .internal address. I’m not sure if it is due to time (just under an hour since inception of the app) or because I used SSH to connect to one of the instances. I’m quite confused.

Did you happen to open a new shell after establishing a WireGuard connection? WireGuard does some DNS magic to resolve *.internal using our private resolvers, and handles everything else normally. On macOS it only works if I open a new shell; existing shells don’t know how to resolve those names.
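If it’s handy, on macOS this is roughly how I check whether the per-domain resolver actually got registered (the grep pattern is just to find the relevant block):

# list the resolver config and look for the "internal" domain entry
scutil --dns | grep -B 1 -A 3 internal

# then, from a freshly opened shell, test resolution
dig +short aaaa appname.internal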

I haven’t looked at exactly what they do on Windows yet (not WSL, right?), but it wouldn’t surprise me if it’s a similar problem.

I had the WireGuard connection open earlier, but it wasn’t until I opened a shell that it was able to resolve. It took a while to open the shell since I didn’t realise that the OpenSSH-format key doesn’t work with PuTTY and could simply be provided through the command prompt. Not WSL, although I’ve enabled WSL2 for an Ubuntu instance in Windows (where dig does resolve the name).

(edit) I just tested again and the .internal address didn’t resolve. It wasn’t until I opened a new shell that it resolved, and as soon as I closed the shell it stopped resolving again.

Regarding this process, are there timing issues to be aware of before deleting the old volumes (e.g. waiting for replication)? Also, is it possible to attach one or more volumes to the app for the purpose of storing backups, or can they only be used for replication? I just tried adding a different volume to the pg app hoping that it would show up in the data folder, but that wasn’t the case.


Yes, you will want to make sure the new instances are replicated before you tear down both old volumes.

Right now, we don’t support multiple volumes on a single app. Our UX is a little confusing because you can create multiple different volume names, but you can’t mount them.

The best bet, at the moment, is to make a larger volume and create a directory on it for any backups or other data you want to store. It’s not ideal but it will work.
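Roughly this, if a sketch helps (the size, region, volume name, and paths are just examples; the mount point comes from your fly.toml, /data here):

# one larger volume instead of several small ones
fly volumes create pg_data --region ord --size 50

# then inside the VM (e.g. via fly ssh console), keep backups next to the data directory
mkdir -p /data/backups
pg_dump "postgres://postgres:secret123@localhost:5432/postgres" > /data/backups/nightly.sql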

Thanks for the clarification.

I want to try Stolon on my own Nomad cluster. Is there an MVP job file you have lying around that one could work with to deploy “this” postgres-ha?


How do I set postgres config values like log_min_duration_statement and shared_preload_libraries?

I’d like to see units on all the y-axes in the metrics.


For that, you’ll want to fork the postgres app and deploy it directly.
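Roughly (the repo name is from memory, so double-check it against the docs):

# grab the app, tweak the Stolon config / start script, and deploy it under your own app name
git clone https://github.com/fly-apps/postgres-ha.git
cd postgres-ha
fly apps create my-pg-fork
fly deploy --app my-pg-fork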


Hmm… not what I’d expect from a managed database offering.

See the “DB Management” section in the initial post. :smiley:

We’ll be improving that postgres app over time, though. It’s handy to be able to fork it and do your own thing in the meantime.

Would it be possible for you to automatically choose tuned values based on instance size? E.g. shared_buffers, random_page_cost, effective_io_concurrency, etc. You know the properties of the hardware you’re running on, so you should be able to pick good values. shared_buffers is probably the most important one; 128MB is going to cripple performance on the large instance sizes.

Yep, that’s the plan. Same for connection limits. It’s pretty easy to pass those things to Stolon based on memory size and CPU count.

We’re slow rolling this a bit, but one of the things that’s going to drive Postgres improvements is larger DB customers. I’m guessing we’ll start working on that in a couple of months; for now we’re open to PRs for pretty much anything if you feel like dabbling.
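If anyone wants a starting point for that, the shape of it in the start script would be something like this (pure sketch; the percentages and multipliers are made up on the spot):

# derive Postgres settings from the machine we're actually on
mem_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
cpu_count=$(nproc)

shared_buffers_mb=$(( mem_mb / 4 ))    # ~25% of RAM, the usual rule of thumb
max_connections=$(( cpu_count * 50 ))  # arbitrary per-core multiplier

# build the pgParameters patch; feed it to "stolonctl update --patch" once the cluster is up
pg_patch="{ \"pgParameters\": { \"shared_buffers\": \"${shared_buffers_mb}MB\", \"max_connections\": \"${max_connections}\" } }"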

I updated the DB Management section; it was a little vague.

I would be interested in a full-time position working on this :stuck_out_tongue:


Ahahahaha. Give us like 3-6 months. You better believe we’re going to need it.

As for setting pgParameters, any tips on how to modify that script so there’s an “update” step for changing them after the initial config? I know Stolon supports it; something like this?

stolonctl update --patch '{ "automaticPgRestart" : true }'

But do I need to specify a cluster name or pass $keeper_options to that? I’m also confused about the difference between the stolon binary in their docs and the exec gosu stolon call in the script; are those the same? And would I have to wait for overmind to “finish” starting? I’ll take a look, but I’m assuming people here may be more familiar with the specific start.sh script.
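Something like this is what I had in mind, for the record (the flags and values are guesses on my part; I haven’t checked which store backend the app’s start.sh actually uses):

# patch pgParameters after the cluster is initialised
stolonctl update \
  --cluster-name "$CLUSTER_NAME" \
  --store-backend consul \
  --patch '{ "pgParameters": { "log_min_duration_statement": "250", "shared_preload_libraries": "pg_stat_statements" } }'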