I’ve been running through the Fly Log Shipper quickstart guide and have been unable to get it to publish to Loki.
I SSHed into my app and noticed that /etc/vector/loki.toml is not in the image:
# ls /etc/vector/
aws_s3.toml datadog.toml honeycomb.toml humio.toml logdna.toml logtail.toml new_relic.toml papertrail.toml sematext.toml uptrace.toml vector.toml
Upon further investigation, I found that the fly-log-shipper image was last published about a year ago, while the Loki code was merged only 9 months ago.
Would you mind publishing an updated fly-log-shipper image so the quickstart guide works as expected without having to clone the repo?
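In the meantime, building from source works as a stopgap. A rough sketch, assuming the standard repo layout and a shipper app already created per the quickstart (app name is a placeholder):

git clone https://github.com/superfly/fly-log-shipper.git
cd fly-log-shipper
fly deploy --app <LOG_SHIPPER_APP_NAME>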
Thanks!
dusty
November 29, 2022, 3:12pm
@aaronrenner thanks for letting us know! I’ve updated the image. Please let us know if you run into any issues.
I just tried it out and it works great! Thanks for fixing it so quickly!
@dusty Would you mind regenerating this image? HTTP endpoint support was added recently, and the latest image does not include it.
Thanks!
dusty
May 15, 2023, 2:38pm
I’ve just updated the latest tag. Let me know if you run into any issues.
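To pick up the refreshed tag on an existing shipper app, redeploying against the published image should be enough. A sketch, with a placeholder app name:

fly deploy --app <LOG_SHIPPER_APP_NAME> --image ghcr.io/superfly/fly-log-shipper:latest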
tuomas
August 16, 2023, 6:43am
I have a similar issue with latest (v0.0.9). I have set up LOKI_URL and verified it with fly ssh console:
root@<FLY_LOG_SHIPPER_INSTANCE>:/# echo $LOKI_URL
<LOKI_APP_NAME>.internal
But I have no sinks:
root@<FLY_LOG_SHIPPER_INSTANCE>:/# ls /etc/vector
examples sinks vector.toml
root@<FLY_LOG_SHIPPER_INSTANCE>:/# ls /etc/vector/sinks
root@<FLY_LOG_SHIPPER_INSTANCE>:/#
The logs say the same thing:
2023-08-16T06:32:18Z app[<FLY_LOG_SHIPPER_INSTANCE>] ams [info]Configured sinks:
2023-08-16T06:32:19Z app[<FLY_LOG_SHIPPER_INSTANCE>] ams [info]2023-08-16T06:32:19.022080Z INFO vector::app: Log level is enabled. level="vector=info,codec=info,vrl=info,file_source=info,tower_limit=trace,rdkafka=info,buffers=info,lapin=info,kube=info"
The sinks seem to get filtered out in the entrypoint:
#!/bin/bash
set -euo pipefail
# Expand ${VAR} references in a file by replaying it through a heredoc.
template() { eval $'cat <<_EOF\n'"$(awk '1;END{print"_EOF"}')"; }
# Rewrite a file in place from stdin.
sponge() { cat <<<"$(cat)" >"$1"; }
# If templating fails (an unset variable under `set -u`), drop that sink file.
filter() { for i in "$@"; do template <"$i" | sponge "$i" || rm "$i"; done; }
filter /etc/vector/sinks/*.toml 2>&-
echo 'Configured sinks:'
find /etc/vector/sinks -type f -exec basename -s '.toml' {} \;
exec vector -c /etc/vector/vector.toml -C /etc/vector/sinks
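So any sink whose referenced variables are unset fails templating and gets deleted before Vector starts. A minimal standalone sketch of that filtering behavior, using a throwaway file in /tmp (not part of the image):

#!/bin/bash
set -euo pipefail
template() { eval $'cat <<_EOF\n'"$(awk '1;END{print"_EOF"}')"; }
sponge() { cat <<<"$(cat)" >"$1"; }
# A fake sink config referencing a variable we deliberately leave unset.
echo 'endpoint = "${LOKI_URL}"' > /tmp/loki.toml
unset LOKI_URL
# template runs in a subshell on the left of the pipe; the unbound-variable
# error under `set -u` kills that subshell, pipefail fails the pipeline,
# and `|| rm` deletes the file -- the same path the entrypoint takes.
template </tmp/loki.toml | sponge /tmp/loki.toml || rm /tmp/loki.toml
ls /tmp/loki.toml 2>&- || echo 'loki.toml was filtered out'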
tuomas
August 16, 2023, 9:15am
FYI, I hacked around the issue:
Dockerfile (added custom entrypoint and loki.toml)
FROM ghcr.io/superfly/fly-log-shipper:v0.0.9
COPY entrypoint.sh ./entrypoint.sh
COPY loki.toml ./loki.toml
ENTRYPOINT [ "./entrypoint.sh" ]
entrypoint.sh (copy hardcoded loki.toml and use it)
#!/bin/bash
set -euo pipefail
template() { eval $'cat <<_EOF\n'"$(awk '1;END{print"_EOF"}')"; }
sponge() { cat <<<"$(cat)" >"$1"; }
filter() { for i in "$@"; do template <"$i" | sponge "$i" || rm "$i"; done; }
filter /etc/vector/sinks/*.toml 2>&-
# Bypass the filtering for Loki: drop in the pre-baked config...
cp ./loki.toml /etc/vector/loki.toml
echo 'Configured sinks:'
find /etc/vector/sinks -type f -exec basename -s '.toml' {} \;
# ...and load it with a second -c flag instead of the sinks directory.
exec vector -c /etc/vector/vector.toml -c /etc/vector/loki.toml
loki.toml (hardcoded values)
[transforms.loki_json]
type = "remap"
inputs = ["log_json"]
source = '''
.level = .log.level
if starts_with(.message, "{") ?? false {
# parse json messages
json = object!(parse_json!(.message))
del(.message)
. |= json
}
'''
[sinks.loki]
type = "loki"
inputs = ["loki_json"]
endpoint = "http://<THE_LOKI_APP_NAME>.internal:3100"
compression = "gzip"
encoding.codec = "json"
healthcheck = false
labels.event_provider = "{{event.provider}}"
labels.fly_region = "{{fly.region}}"
labels.fly_app_name = "{{fly.app.name}}"
labels.fly_app_instance = "{{fly.app.instance}}"
labels.host = "{{host}}"
labels.level = "{{level}}"
fly.toml
app = "<APP_NAME"
primary_region = "<REGION>"
[[services]]
http_checks = []
internal_port = 8686
secrets:
$ fly secrets list
NAME DIGEST CREATED AT
ACCESS_TOKEN <THE_DIGEST> 16h52m ago
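With those files in place, deploying the patched image is a normal build-and-deploy from the same directory (sketch; assumes the Dockerfile above sits next to fly.toml):

fly deploy --app <APP_NAME>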
dusty
August 18, 2023, 2:49pm
Hey @tuomas, you’ll need to set all three secrets/variables for the Loki sink to work:
LOKI_URL
LOKI_USERNAME
LOKI_PASSWORD
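For example, with placeholder values (the URL scheme and port here just mirror the endpoint used in the workaround above):

fly secrets set --app <LOG_SHIPPER_APP_NAME> \
  LOKI_URL=http://<LOKI_APP_NAME>.internal:3100 \
  LOKI_USERNAME=<USERNAME> \
  LOKI_PASSWORD=<PASSWORD>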
tuomas
August 18, 2023, 3:13pm
I tried, but got an error message about an incorrect/malformed config. I wish I had checked how exactly it was malformed.
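In hindsight, Vector’s built-in validator, run over the generated files from inside the shipper machine, would probably have shown the exact parse error. A sketch, assuming fly ssh console access:

# From a shell inside the log-shipper instance:
vector validate /etc/vector/vector.toml /etc/vector/sinks/*.toml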