Fly logs to papertrail/logdna

What is the recommended way of piping logs to services like papertrail/logdna?

I read this post, which links to https://github.com/superfly/fly-log-shipper, but TBH that repository assumes some knowledge I don’t have. I wasn’t sure where to start with it.

In my case I’m running an Elixir app, so is it best to just use an Elixir package like https://github.com/larskrantz/logger_papertrail_backend, or is there a more “Fly” way of doing things that doesn’t require changes to the app?


That repository does require some deeper-than-we’d-like knowledge.

The flow looks a bit like this:

  • Clone the fly-log-shipper repository
  • Create an app with flyctl apps create --no-config --name <your-app-name>
  • Replace the app name in fly.toml with the one you just created
  • Set ORG as an env variable under [env] in fly.toml; it’s the organization you’d like to pull logs from
  • flyctl secrets set ACCESS_TOKEN=$(flyctl auth token) LOGDNA_API_KEY=<your LogDNA API key>
  • flyctl deploy
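
Putting the fly.toml changes from the steps above together, the relevant section might look like this (the app name and organization here are assumed placeholder values):

```toml
# fly.toml for the log shipper app
app = "my-log-shipper"  # the name you passed to `flyctl apps create`

[env]
  ORG = "personal"      # organization to pull logs from
```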

You should now be set.

We should make that a “launcher” (a bit like our postgres setup).


Thank you, that’s helpful.

Once I’ve created this app, does it simply share the logs from all other apps in the organisation to Logdna?

It does, yes :slight_smile:

You can target a specific app by setting a different SUBJECT env var, which takes a NATS subject.

The format is: logs.{app-name}.{region}.{instance-id}. Using logs.> means you want logs from all regions for all apps and instances. If you just want 1 app, you can use logs.{app-name}.>.
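
For example, to ship only one app’s logs, you could set SUBJECT alongside ORG in the shipper’s fly.toml (the app name and organization are assumed placeholders):

```toml
[env]
  ORG = "personal"           # organization to pull logs from
  SUBJECT = "logs.my-app.>"  # only logs for my-app, all regions and instances
```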

Crash course:

  • Subject “parts” are delimited by dots (.).
  • * in a NATS subject means “match everything for this part of the subject”
  • > means “match one or more trailing subject parts” (it must be the last part of the pattern)
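
To illustrate the matching rules above, here’s a small Python sketch of NATS-style subject matching (this is just for illustration, not code from the log shipper):

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Match a NATS subject against a pattern with '*' and '>' wildcards."""
    p_parts = pattern.split(".")
    s_parts = subject.split(".")
    for i, p in enumerate(p_parts):
        if p == ">":
            # '>' matches one or more remaining subject parts
            return len(s_parts) > i
        if i >= len(s_parts):
            return False
        if p != "*" and p != s_parts[i]:
            # '*' matches exactly one part; anything else must match literally
            return False
    return len(p_parts) == len(s_parts)

# 'logs.>' matches logs from all apps, regions, and instances
print(subject_matches("logs.>", "logs.my-app.lhr.abcd1234"))   # True
# 'logs.my-app.>' narrows to a single app
print(subject_matches("logs.my-app.>", "logs.other.lhr.xyz"))  # False
# '*' matches exactly one subject part
print(subject_matches("logs.*.lhr.*", "logs.my-app.lhr.xyz"))  # True
```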

Thanks. I’ll give it a go :slight_smile:

@jerome I was able to get this running, and delivering to S3. Thanks for working on this. Can I deploy another instance to send the same org logs to a different sink or do I just add other secrets to the existing one?

You can use multiple sinks with one instance. You can also deploy a second instance, totally up to you.

Thanks @Kurt. Got it hooked up to AWS and EraSearch. Thank you and the team for putting that together!