Is there a place that logs automatically go to on Fly instances that I could point the splunk universal forwarder to?
If this doesn’t exist, I could change my server process setup to write to a file or something, but I figured there might be a standard log path exposed in some environment variable. I’d love to not have to think about file sizes, rotation, and such.
Related, this seems like a great thing to have a tutorial on!
Follow-on here: I just want an easy way to search my logs when a user has an issue, and I’ve found logging to be quite a pain to set up. I just assumed Splunk would make it easy, but there’s a lot left to the implementer.
I’m not entirely sure how Splunk works, but our init program creates a pipe and points both the stdout and stderr file descriptors to it when we spawn your app’s process.
Unfortunately, that’s redirected to a serial terminal and cannot be tailed from within your instance.
There’s a lot we’re planning to do to make logging nicer. Logs are currently “best effort”: they’re generally reliable, but they’re not searchable or easily tailable.
What we’d like to do is offer various “sink” options: definitely big, popular services like Splunk, but also simpler targets like syslog.
If you can write logs directly to Splunk, or to a file (and use something to read the file and ship it to Splunk), that could work. You wouldn’t get logs in Fly, though.
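If it helps, Splunk exposes an HTTP Event Collector (HEC) you can POST JSON to directly from the app. Here’s a minimal TypeScript sketch, assuming Node 18+ for the global `fetch`; the URL and token env vars are placeholders you’d supply yourself:

```ts
// Minimal sketch: send one event to Splunk's HTTP Event Collector (HEC).
// SPLUNK_HEC_URL and SPLUNK_HEC_TOKEN are placeholders for your own setup.
const HEC_URL =
  process.env.SPLUNK_HEC_URL ??
  "https://splunk.example.com:8088/services/collector/event";
const HEC_TOKEN = process.env.SPLUNK_HEC_TOKEN ?? "";

async function logToSplunk(event: unknown): Promise<void> {
  const res = await fetch(HEC_URL, {
    method: "POST",
    headers: {
      Authorization: `Splunk ${HEC_TOKEN}`, // HEC's auth scheme
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ event, sourcetype: "_json" }),
  });
  if (!res.ok) {
    // Don't take the app down over a failed log write.
    console.error(`Splunk HEC responded with ${res.status}`);
  }
}

logToSplunk({ level: "info", msg: "user signed in", userId: 42 }).catch(
  console.error,
);
```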
After trying for a bit I hate splunk and will not be using it. Super user-hostile.
Am going to try some other providers, perhaps Datadog or Papertrail.
Getting a logging daemon set up inside a Docker container when I don’t manage the Docker daemon is non-obvious. I think there could be a tutorial here. Specifically, most guides to setting up a logging agent with Docker seem to assume you aren’t limited to setting things up in the Dockerfile.
This is a big problem with log daemons. You can’t really run sidecars on Fly, so most of the “Docker to log service” projects don’t work. And running a second process in your container means adding a process manager like overmind or s6.
The simplest possible thing is probably to use an npm package (or similar) and send logs directly from your app, for now.
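As a rough sketch of that approach with winston (the host and path below are placeholders, not a real endpoint; Datadog, Papertrail, and friends each document their own ingestion endpoint or publish a dedicated npm transport):

```ts
import winston from "winston";

const logger = winston.createLogger({
  format: winston.format.json(),
  transports: [
    // Console keeps logs flowing to stdout, so `fly logs` still works.
    new winston.transports.Console(),
    // The generic HTTP transport ships the same entries to a remote
    // endpoint. Host and path here are placeholders for your provider.
    new winston.transports.Http({
      host: "logs.example.com",
      port: 443,
      path: "/ingest",
      ssl: true,
    }),
  ],
});

logger.info("user signed in", { userId: 42 });
```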
Can you clarify what you mean by not really being able to run sidecars? I’m not super up on the terminology here. If the entrypoint had a && in it and launched a daemon, would that not work?
Yep, that would work! In the Docker world, sidecars are separate containers running in the same namespaces; in Kubernetes, you can run your app process and then add a log-ingestion container as a “sidecar” that handles all the log shipping (without you changing your container).
Running multiple processes in one container is a-ok. I’d use overmind or similar, though; that’s what we do for the Postgres app.
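To make that concrete: overmind reads a foreman-style Procfile, one process per line. A rough sketch (the process names and the log-agent command are made up for illustration; overmind also needs tmux in the image):

```
web: node server.js
logs: ./log-agent --config agent.yml
```

```dockerfile
# Dockerfile sketch: overmind needs tmux at runtime, plus the overmind
# binary itself (grab a release from https://github.com/DarthSim/overmind).
FROM node:20-slim
RUN apt-get update && apt-get install -y tmux && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY overmind /usr/local/bin/overmind
COPY . .
# `overmind start` launches every process listed in the Procfile in cwd.
CMD ["overmind", "start"]
```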