How can a running instance locate its own logs?

hey fly friends,

Is there a place that logs automatically go on Fly instances that I could point the Splunk Universal Forwarder at?

If this doesn’t exist, I could change my server process setup to write to a file or something, but I figured there might be a standard path living in some environment variable. I’d love to not have to think about file sizes and such.

Related, this seems like a great thing to have a tutorial on!

Thanks

To follow on here: I just want an easy way to search my logs when a user has an issue, and I’ve found logging to be quite a pain to set up. I assumed Splunk would make it easy, but there’s a lot left to the implementer.

I’m not entirely sure how Splunk works, but our init program creates a pipe and points both the stdout and stderr file descriptors to it when we spawn your app’s process.

Unfortunately, that’s redirected to a serial terminal and cannot be tailed from within your instance.
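If it helps to see the shape of the mechanism, here’s a rough sketch in Node terms. Our init is not Node and this isn’t its real code; it just illustrates what “spawn a process with stdout/stderr pointed at a pipe” looks like:

```ts
// Illustration only: a parent process spawning a child whose stdout and
// stderr file descriptors point into pipes the parent owns.
import { spawn } from "child_process";

const app = spawn("node", ["server.js"], {
  stdio: ["inherit", "pipe", "pipe"], // the child's fd 1 and fd 2 become pipe ends
});

// The parent drains the pipes and ships each chunk wherever it likes.
// In our case that destination is a serial console, which is why you
// can't tail a log file from inside the instance.
app.stdout!.on("data", (chunk: Buffer) => process.stdout.write(chunk));
app.stderr!.on("data", (chunk: Buffer) => process.stdout.write(chunk));
```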

There’s a lot we’re planning to do to make logging nicer. Logs are currently “best effort”: they’re generally reliable, but they’re not searchable or easily tailable.

What we’d like to do is offer various “sink” options: definitely big, popular services like Splunk, but also simpler software like syslog.

If you can write logs directly to Splunk, or to a file (and use something to read it and forward to Splunk), that could work. You wouldn’t get logs in Fly that way, though.
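If Splunk’s HTTP Event Collector (HEC) is enabled on your end, “directly to Splunk” can be as small as an HTTPS POST from your app. A rough sketch, assuming Node 18+ for global fetch; the URL and token env vars are placeholders for your own setup:

```ts
// Send one structured event to Splunk's HTTP Event Collector.
// SPLUNK_HEC_URL and SPLUNK_HEC_TOKEN are placeholders.
async function logToSplunk(event: Record<string, unknown>): Promise<void> {
  const res = await fetch(`${process.env.SPLUNK_HEC_URL}/services/collector/event`, {
    method: "POST",
    headers: {
      Authorization: `Splunk ${process.env.SPLUNK_HEC_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ event, sourcetype: "_json" }),
  });
  if (!res.ok) throw new Error(`HEC rejected event: ${res.status}`);
}

logToSplunk({ level: "error", msg: "user 123 hit a 500" }).catch(console.error);
```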

My kingdom for a Heroku-drain-compatible implementation.

Thank you. We’ll poke around and circle back.

How big is this kingdom!?

Logs are the next infra component we’re going to do a better job of exposing to you all (after alerts + metrics). Give us a bit!

Wanted to add a status update here.

  1. After trying it for a bit, I hate Splunk and will not be using it. Super user-hostile.
  2. I’m going to try some other providers, perhaps Datadog or Papertrail.
  3. Getting a logging daemon set up inside a Docker container when I don’t manage the Docker daemon is non-obvious. I think there could be a tutorial here. Specifically, most guides on setting up a logging agent with Docker assume I can configure the Docker daemon itself, rather than being limited to what goes in the Dockerfile.

LogDNA is pretty good!

This is a big problem with log daemons. You can’t really run sidecars on Fly, so most of the “Docker to log service” projects don’t work. And running a second process in your container means adding something like overmind or s6.

The simplest possible thing is probably to use an npm package (or similar) and send logs directly from your app, for now.
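For example, something along these lines; the endpoint is a placeholder for whatever service you pick, and a real setup would batch and retry:

```ts
// Minimal "ship from the app" logger: write to stdout (so Fly's own log
// stream still sees everything) and also POST each line to a log service.
// LOG_ENDPOINT is a placeholder, e.g. your LogDNA/Datadog ingestion URL.
const endpoint = process.env.LOG_ENDPOINT!;

export function log(level: string, msg: string, extra: object = {}): void {
  const line = { time: new Date().toISOString(), level, msg, ...extra };
  console.log(JSON.stringify(line)); // local + Fly logs
  fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(line),
  }).catch(() => {
    // Fire-and-forget: never let log shipping take the app down.
  });
}

log("info", "checkout completed", { userId: 123 });
```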

Can you clarify what you mean by not really being able to run sidecars? I’m not super up on the terminology here. If the entrypoint had a && in it and launched a daemon, would that not work?

Sounds hacky I know.

Yep, that would work! Sidecars in the Docker world are separate processes running in the same namespace; in Kubernetes, you can run your app process and then add a log ingestion container as a “sidecar”, and it handles all the log stuff (without you changing your container).

Running multiple processes in one container is a-ok. I’d use overmind or similar, though; that’s what we do for the Postgres app.
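For the multi-process route, the usual shape is a Procfile that overmind supervises; `log-agent` below is a stand-in for whatever forwarder you pick:

```
web: node server.js
logs: log-agent --config /etc/log-agent.conf
```

Your Dockerfile’s CMD then runs `overmind start` instead of launching the app directly.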

Admittedly, fairly small right now :sweat_smile:
But I’m really interested in helping build an ecosystem for Fly apps: logs, metrics, WAF, etc.

Logs are a critical part of running production-like workloads. Give us a webhook or pub/sub endpoint and we’ll build integrations on top of it :handshake:

Oh you’ll enjoy the metrics stuff we have coming then. :wink:

We agree about how important logs are! Logplex is pretty brittle, so we’re trying to make sure we don’t ship something unmanageable.
