How to handle two different AWS_ACCESS_KEY_ID values with AWS and the Tigris object store?

If I use Tigris, I automatically get a Tigris AWS_ACCESS_KEY_ID, which is Fly.io specific. But what can I do if my Docker container requires a different AWS_ACCESS_KEY_ID for AWS itself?
I cannot set that environment variable twice.

Will I therefore need two Docker containers, one for the Tigris volume and one for the app itself?

It does seem kind of odd that Tigris uses AWS_ACCESS_KEY_ID, as that would conflict with the native AWS libraries. Can’t you explicitly pass the key id/secret to the S3 client?
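For example, here is a sketch of that approach using boto3. The `TIGRIS_*` variable names and the `https://fly.storage.tigris.dev` endpoint URL are assumptions for illustration, not anything Fly sets for you:

```python
import os

# Sketch: keep each credential pair under its own environment variables
# instead of letting both services fight over AWS_ACCESS_KEY_ID.
def client_kwargs(prefix, endpoint_url=None, env=os.environ):
    """Build keyword arguments for boto3.client("s3", ...) from
    prefixed environment variables, bypassing the default AWS_* lookup."""
    kwargs = {
        "aws_access_key_id": env[f"{prefix}_ACCESS_KEY_ID"],
        "aws_secret_access_key": env[f"{prefix}_SECRET_ACCESS_KEY"],
    }
    if endpoint_url:
        kwargs["endpoint_url"] = endpoint_url
    return kwargs

# Usage (requires boto3):
#   import boto3
#   aws_s3    = boto3.client("s3", **client_kwargs("AWS"))
#   tigris_s3 = boto3.client("s3", **client_kwargs(
#       "TIGRIS", endpoint_url="https://fly.storage.tigris.dev"))
```

Since each client gets its credentials explicitly, the two key pairs never collide in a single shared environment variable.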

Also, I’d recommend following the Assuming Cloud Roles on Fly.io Machines guide so you don’t need to worry about static AWS creds at all.


If you’re just using the CLI, you could consider defining separate profiles for each set of credentials: Configuration and credential file settings - AWS Command Line Interface
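For instance (a sketch; the `tigris` profile name and the endpoint URL are illustrative, and the placeholder values are yours to fill in), `~/.aws/credentials` could hold both key pairs under separate profiles:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = <your AWS key id>
aws_secret_access_key = <your AWS secret>

[tigris]
aws_access_key_id     = <your Tigris key id>
aws_secret_access_key = <your Tigris secret>
```

You then pick a pair per invocation, e.g. `aws s3 ls --profile tigris --endpoint-url https://fly.storage.tigris.dev`.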

Tigris is S3-compatible and hence works out of the box with native AWS libraries, including using the same environment variables.

I understand that, but what happens in the scenario the OP is in? Tigris is just one part of the larger AWS-compatible ecosystem, so it would make more sense to use something like TIGRIS_ACCESS_KEY_ID. Just my 2c; personally, I would pass the credentials explicitly to the S3 client.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.