We’ve been working with the esteemed team at Quickwit to bring you an experimental application log cluster and search interface. You can try it out by:
For the period of this experiment, log search is free and we retain logs for 30 days. Learn how to build more complex queries in Quickwit’s Query Language Intro.
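To give a quick taste, queries combine field:value terms with boolean operators. A few hedged examples (the level field name is my assumption; fly.app.name comes up elsewhere in this thread):

```
level:error                            # entries whose level field matches "error"
level:error AND fly.app.name:my-app    # combine conditions with AND / OR / NOT
message:"connection refused"           # phrase search on the message field
```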
It should work! You may need to log out and log back in. Colored logs should work too. Feel free to write to extensions@fly.io with the names of any apps that aren’t working for you.
I do enjoy having my logs right next to my metrics. It’s very convenient.
I tried using the query functionality and I’m a little confused. In the grafana panel, when I click on a message it expands to show me the different fields.
From reading the Quickwit docs, I should be able to query the message field for its content. However, if I try the query message:* I get no results. If I’m understanding the docs correctly, this query should return all entries that have a message field. With other fields it works perfectly.
Unfortunately, I don’t think this covers my user story well enough to move off of betterstack. My app logs contain timestamps – not from the Fly platform; it’s part of the log message format in my app. Quickwit doesn’t seem to support wildcard-prefixed queries (e.g. fly.app.name:*foo), unless I’m missing something. If I can’t just grep logs by text in a crunch, then it’s a no-go.
The fact that message:* returns no result is most likely a bug (plugin side or quickwit side; I will check that).
Concerning the timestamp, the timestamp from Fly and the one from your app should be very similar – is this a real problem in practice? Note that the plugin currently displays timestamps only at second precision; the latest plugin version fixes this, so it will be resolved soon.
Concerning the query fly.app.name:*foo, Quickwit indeed does not support that yet. I imagine you have a lot of apps and don’t want to spell out the full list.
Without the ability to query with wildcard prefixes, I can’t really search my logs at all, because each message starts with a date and time. Of course, I could write code to make that configurable (and I probably should), but it’s just another barrier to entry.
Ultimately, what I really want is to be able to write queries for the fields in my logs, like the logger name. But a nice (perhaps more generally useful) alternative would be the ability to quickly search by some substring.
I think what happened when I tried it before was I didn’t have my query range set for long enough, and was searching for a string that occurred outside of that time range. Thanks for the clarification.
I got nearly the same output – how do you map/configure those fields correctly?
I agree with the sentiment. We’re also pushing JSON. It appears to be formatted correctly, but I can’t really query on it: none of my fields are indexed or available when I expand a log entry to view details. What we end up with is message.message, because our logs emit data along with a message. It’s hard to query and find logs when the whole message block is treated as a single string.
We (Quickwit) would love to parse those JSON logs, but we need to consider how to handle that correctly.
Taking inspiration from the OTEL log data model, here is what we propose to do:
- We try to parse the log line as JSON.
- If parsing succeeds, we put the JSON in the field body. All subfields of body will be tokenized, so users can run full-text search queries on them. We also propose to extract the attributes, resources, and severity_text fields when they are present in the JSON, and populate the log record accordingly. The values of those fields won’t be tokenized, and users will be able to run analytics queries on them. For example, this opens the possibility of doing aggregations on status or method if those fields sit under attributes or resources.
- If parsing fails, we fall back to the current behavior with a slight change: we put the log line in the field body.message.
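The proposed mapping can be sketched in a few lines of Python. To be clear, this is my own illustration of the steps above, not Quickwit’s actual code; the map_log_line helper and the exact document layout are assumptions:

```python
import json

# Fields promoted out of the body, per the proposal: stored untokenized
# so they can be used in analytics/aggregation queries.
PROMOTED_FIELDS = ("attributes", "resources", "severity_text")

def map_log_line(line: str) -> dict:
    """Sketch of the proposed mapping for one raw log line."""
    try:
        parsed = json.loads(line)
    except json.JSONDecodeError:
        parsed = None
    if not isinstance(parsed, dict):
        # Parsing failed: fall back to the current behavior, with the
        # slight change that the raw line lands under body.message.
        return {"body": {"message": line}}
    doc = {}
    for key in PROMOTED_FIELDS:
        if key in parsed:
            doc[key] = parsed.pop(key)
    # Everything else stays under body, whose subfields are tokenized
    # for full-text search.
    doc["body"] = parsed
    return doc
```

With this shape, a JSON line like {"message": "GET /", "attributes": {"status": 200}} ends up with status under attributes (queryable for aggregations) and the rest under body (full-text searchable).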