Structured logging with Fly

Hey there Fly community and team,

I’m wondering if there is a way to emit structured logs to Fly. I’m in Java, using Logback, and the format of my log lines repeats a lot of information that Fly already expresses in the log line.

I could emit plain log lines, but then the metadata attached to my logs would lose resolution (what I emit is a superset of what Fly emits, but there is a lot of overlap). The best solution, I think, would be to emit structured JSON logs from my app, which I can do with Google Cloud, AWS, Azure, etc. Logback supports emitting these logs over nearly any medium, including regular stdout.
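For context, the app side of this is straightforward: a Logback setup that writes one JSON object per line to stdout might look something like the sketch below. This assumes the third-party logstash-logback-encoder dependency; the appender name is illustrative.

```xml
<!-- logback.xml: write each log event as one JSON object per line to stdout -->
<configuration>
  <appender name="STDOUT_JSON" class="ch.qos.logback.core.ConsoleAppender">
    <!-- LogstashEncoder is provided by the logstash-logback-encoder library -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT_JSON"/>
  </root>
</configuration>
```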

Is there a way I can get these logs into Fly in a structured manner? If I emit JSON, will Fly notice and adapt? Or is there a special endpoint somewhere?

Thank you in advance for any help. I am a rather new user to Fly and I’m loving it :slight_smile:


If you mean shipping logs out of your Fly apps, then yes, but it's something you'd have to set up yourself: Fly Logs over NATS (fly-logs-shipper).

See also: Local Logging Setup (fly-logs-local).


@ignoramous thank you for your quick reply! :slight_smile: Although, hm, this isn't quite what I'm looking for. That might accomplish shipping logs somewhere off-platform, but here I'm looking to keep them on Fly, just with timestamps and severity levels that reflect what my app is logging.

for example, Google Cloud Logging lets me emit a JSON payload to stdout, like a regular log line, or to their API, either one, that looks something like:

{"severity": "X", "message": "X", "timestamp": 123} etc., and then their logging system will pick up and use these values.
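As a minimal illustration of that shape (plain JDK, no logging framework; the field names simply mirror the example above and are not an official schema):

```java
// Prints one structured log line to stdout, in the spirit of Google Cloud's
// JSON payload convention (the severity/message/timestamp fields are illustrative).
public class StructuredLine {
    static String jsonLine(String severity, String message, long epochMillis) {
        // Naive JSON assembly for illustration only; a real app would use a
        // JSON library (or Logback's JSON encoder) to handle escaping correctly.
        return String.format(
            "{\"severity\": \"%s\", \"message\": \"%s\", \"timestamp\": %d}",
            severity, message, epochMillis);
    }

    public static void main(String[] args) {
        // prints {"severity": "INFO", "message": "GET / HTTP/1.1 200", "timestamp": 1675296460856}
        System.out.println(jsonLine("INFO", "GET / HTTP/1.1 200", 1675296460856L));
    }
}
```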

As is, the timestamps in Fly’s logs conflict with my own app’s timestamps. For example:

2023-02-02T00:07:40.857 app[4a5a4056] dfw [info] [00:07:40.856 INFO] fdaa:1:30fe:a7b:a15f:4a10:9194:2 - - [02/Feb/2023:00:07:40 +0000] "GET / HTTP/1.1" 200 16

I really love that Fly injects the region and the timestamp, but, as you can see, the timestamp is expressed twice, and Fly’s is one millisecond greater (likely because of delivery time). The severity is also expressed twice.

In fact, how would one go about controlling the severity line, if there is not a structured logging input? Perhaps that’s what you linked here, i.e. fly-logs-shipper, but I’m not sure how NATS fits into the equation w.r.t. subscriptions when I’m actually wanting to emit, rather than subscribe.

Thank you again for your help here and I hope I’ve explained this problem well


Hi @sgammon

We output JSON-formatted logs from our apps, then we use a custom config for fly-log-shipper that parses the log message as JSON and merges the parsed JSON with the existing properties supplied by Fly.

This allows us to replace the severity with our own when the log message comes from our app.

You'll also get logs from other parts of the infrastructure that carry their own severity, so you'll still want those messages processed as normal.

Here's the custom config we use for Grafana Loki:

  [transforms.loki_json]
  type = "remap"
  inputs = ["log_json"]
  source = '''
  .level = .log.level

  if starts_with(.message, "{") ?? false {
    # parse json messages
    structured = object(parse_json(.message) ?? "") ?? { "message": .message }

    # delete message field and merge structured message
    . |= structured
  } else {
    # parse non-json messages
    structured = object(parse_nginx_log(.message, "combined") ?? "") ?? { "message": .message }

    # delete message field and merge structured message
    . |= structured
  }
  '''

  [sinks.loki]
  type = "loki"
  inputs = ["loki_json"]
  endpoint = "${LOKI_URL}"
  compression = "gzip"
  auth.strategy = "basic"
  auth.user = "${LOKI_USERNAME}"
  auth.password = "${LOKI_PASSWORD}"
  encoding.codec = "json"

  labels.event_provider = "{{event.provider}}"
  labels.fly_region = "{{fly.region}}"
  labels.fly_app_name = "{{fly.app.name}}"
  labels.fly_app_instance = "{{fly.app.instance}}"
  labels.host = "{{host}}"
  labels.level = "{{level}}"

  out_of_order_action = "accept"
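To illustrate what the remap above does, here is a hypothetical record (the exact field set Fly ships over NATS may differ). When the incoming .message is itself a JSON string, its fields are merged into the top level, so an app-supplied level overwrites the Fly-derived one:

```
# before remap: Fly-supplied envelope; the app's JSON is still a string in .message
{ "fly": { "region": "dfw" }, "log": { "level": "info" },
  "message": "{\"level\": \"debug\", \"message\": \"cache miss\"}" }

# after remap: .level is first copied from .log.level ("info"), then the
# parsed app JSON is merged over the record, replacing .level and .message
{ "fly": { "region": "dfw" }, "log": { "level": "info" },
  "level": "debug", "message": "cache miss" }
```

Non-JSON lines fall into the else branch instead and are parsed with the nginx "combined" format, with the raw message kept as a fallback when parsing fails.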

@charsleysa ah, gotcha! thank you, this explains how it all fits together.