Shipping logs through fly-log-shipper

Hi everyone,

I have set up fly-log-shipper to ship the logs from my Fastify API server, which emits logs in JSON format, to AWS CloudWatch. What I observe in CloudWatch is that the log entries from the app are converted to stringified JSON, because they are nested inside another JSON object emitted by fly-log-shipper. Is there a way to configure fly-log-shipper to ship just the original message without adding its own "fluff"?
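For illustration only (the field names and values below are approximate, not taken from my actual output), each shipped event looks roughly like this, with the app's own JSON line stringified inside the shipper's `message` field:

```json
{
  "timestamp": "2026-03-06T09:57:04Z",
  "fly": { "app": { "name": "app-name" }, "region": "region" },
  "message": "{\"level\":30,\"msg\":\"request completed\"}"
}
```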

FWIW, below is the fly.toml file that I used to deploy fly-log-shipper:

app = '…'
primary_region = '…'

[build]
image = 'ghcr.io/superfly/fly-log-shipper:latest'

[env]
SUBJECT = "logs.app-name.>"

[http_service]
internal_port = 8080
force_https = true
auto_stop_machines = 'stop'
auto_start_machines = true
min_machines_running = 0
processes = ['app']

[[services]]
http_checks = []
internal_port = 8686

[[vm]]
memory = '256mb'
cpus = 1
memory_mb = 256

Thanks!

Can you try this branch of the log shipper? You’ll need to update the fly.toml and set CLOUDWATCH_ENCODING_CODEC to "raw_message". If that doesn’t work, other formats are here.

Thank you! I changed my fly.toml as follows, and as far as I can tell nothing gets shipped now:

[build]
image = 'flyio/log-shipper:latest'

[env]
SUBJECT = "logs.tr-api-prod.>"
CLOUDWATCH_ENCODING_CODEC = "raw_message"

Below is what I see in the log-shipper’s log:

Configured sinks:

2026-03-06T09:57:04.698811Z  INFO vector::app: Log level is enabled. level="vector=info,codec=info,vrl=info,file_source=info,tower_limit=trace,rdkafka=info,buffers=info,lapin=info,kube=info"
2026/03/06 09:57:04 INFO SSH listening listen_address=[fdaa:2e:b3af:a7b:56:187e:18e:2]:22
2026-03-06T09:57:04.715729Z  INFO vector::app: Loading configs. paths=["/etc/vector/vector.toml", "/etc/vector/sinks"]
2026-03-06T09:57:05.285610Z  INFO vector::topology::running: Running healthchecks.
2026-03-06T09:57:05.285988Z  INFO vector::topology::builder: Healthcheck passed.
2026-03-06T09:57:05.286051Z  INFO vector::topology::builder: Healthcheck passed.
2026-03-06T09:57:05.286965Z  INFO vector: Vector has started. debug="false" version="0.29.1" arch="x86_64" revision="74ae15e 2023-04-20 14:50:42.739094536"
2026-03-06T09:57:05.294566Z  INFO vector::sinks::prometheus::exporter: Building HTTP server. address=[::]:9598
2026-03-06T09:57:05.302435Z  INFO vector::sinks::blackhole::sink: Collected events. events=0 raw_bytes_collected=0
2026-03-06T09:57:05.304477Z  INFO vector::internal_events::api: API server running. address=[::]:8686 playground=http://:::8686/playground

I reverted for now. Please let me know if I’ve done something wrong.

Any further thoughts on my update?

It looks like you are still using image = 'flyio/log-shipper:latest'. You’ll need to follow my previous instructions: git clone ... fly-log-shipper, check out the cloudwatch-encoding branch, etc. I can probably merge this change, but I wanted some verification that it actually works/helps.

I went ahead and merged the changes, so flyio/log-shipper:latest now contains the updates for CloudWatch encoding. Please test @virasasan and let me know.

I gave this a try.

First things first: the app is running under an org which is not a personal org. I tried running the log shipper under the same org, but it complained about ACCESS_TOKEN not being set, even though it was, so I deleted the log-shipper app and re-created it under the personal org. Now it fails with the following error:

ERROR vector::cli: Configuration error. error=missing field `codec`

I generated an access token as instructed at https://github.com/superfly/fly-log-shipper:

fly tokens create readonly personal

Then I set the secrets:

fly secrets set \
  ORG="<org slug for the org under which the app is>" \
  ACCESS_TOKEN="<token from the above command>" \
  AWS_ACCESS_KEY_ID="..." \
  AWS_SECRET_ACCESS_KEY="..." \
  AWS_REGION="..." \
  CLOUDWATCH_LOG_GROUP_NAME="..." \
  CLOUDWATCH_ENCODING_CODEC='raw_message' \
  -a fly-log-shipper-artextasia-api-prod

I have confirmed these seven secrets are listed on the app’s dashboard page.

Give it another try; I had configured the Vector sink incorrectly.

Now I see the below in the log-shipper’s log:

Configured sinks:
aws_cloudwatch
2026/03/17 16:24:24 INFO SSH listening listen_address=[fdaa:51:9f96:a7b:6df:9d7d:4a:2]:22
2026-03-17T16:24:24.810878Z INFO vector::app: Log level is enabled. level="vector=info,codec=info,vrl=info,file_source=info,tower_limit=trace,rdkafka=info,buffers=info,lapin=info,kube=info"
2026-03-17T16:24:24.837074Z INFO vector::app: Loading configs. paths=["/etc/vector/vector.toml", "/etc/vector/sinks"]
2026-03-17T16:24:25.292578Z ERROR vector::topology: Configuration error. error=Source "nats": NATS Connect Error: unexpected line while connecting: Err("Authorization Violation")
INFO Main child exited normally with code: 78
INFO Starting clean up.

Did you have a chance to look at my latest reply?

It sounds like you need to regenerate the ACCESS_TOKEN.

Now I get:

ERROR vector::topology: Configuration error. error=Source "nats": NATS Connect Error: unexpected line while connecting: Err("Authorization Violation")

The steps I followed:

fly tokens create readonly personal

copy-pasted the output to the following command:

fly secrets set ACCESS_TOKEN="<the output of the previous command>"

fly deploy

Because the machine was in a stopped state:

fly machines restart <machineId>

It is worth mentioning that

fly secrets set ACCESS_TOKEN=$(fly tokens create readonly personal)

generates the following error:

Error: update secrets: failed to update app secrets: "" is not a valid secret name (Request ID: 01KM8ZXAFX2S1CXMXWGXTC18DZ-fra)
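That error looks like ordinary shell word splitting rather than anything Fly-specific: Fly tokens contain a space (they start with "FlyV1 "), so an unquoted `$(...)` substitution splits the value into multiple arguments, and flyctl then sees a stray argument it parses as an empty secret name. A minimal sketch (the token value below is made up):

```shell
# Helper that reports how many arguments a command would receive.
count_args() { echo "$#"; }

# Hypothetical token; real Fly.io tokens look like "FlyV1 fm2_...",
# i.e. they contain a space.
token="FlyV1 fm2_example"

count_args ACCESS_TOKEN=$token    # unquoted: the space splits it into 2 arguments
count_args ACCESS_TOKEN="$token"  # quoted: the whole value stays 1 argument
```

Quoting the substitution, i.e. `fly secrets set ACCESS_TOKEN="$(fly tokens create readonly personal)"`, should avoid the splitting.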

Hi, did you have a chance to see my previous response?

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.