Stream logs to local volume

Hi! Is it possible to write my app’s logs to a local volume I created?
I see this post: Shipping Logs · Fly for shipping logs out to third-party providers via Vector and NATS. That’s great. For now, though, I just want to start very simple: I want to look at my app’s logs from the past few days… Right now, Fly doesn’t persist them, and I don’t want to create an account and set up an integration with another log provider… Can I just write the logs to the volume I created? And by “write them”, I’m essentially looking to see if there’s already a way - sample code maybe - to stream the logs from the NATS socket to the volume… essentially treating the Fly volume as a Vector sink.

Is this doable? Or would I need to handle this on my own in my app? (Not ideal.)

Essentially, I’m looking to see if there’s a built-in way for Fly to give users the ability to store logs on the VM (volume) itself, without needing to integrate with a third-party provider just to look at log files. (Not looking for fancy SQL, indexing, graphs, etc. - just pure raw logs from stdout.)
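Conceptually, I’m imagining something like the following Vector configuration: a NATS source reading from Fly’s log socket and a file sink writing to the mounted volume. All of the specifics here (the NATS address, subject, and file path) are guesses on my part based on how the Fly Log Shipper appears to be set up, and authentication is omitted; I haven’t verified any of it:

```toml
# Hypothetical sketch only -- addresses, subject, and path are assumptions.
[sources.fly_logs]
type    = "nats"
url     = "nats://[fdaa::3]:4223"    # Fly's internal NATS endpoint (assumed)
subject = "logs.>"

[sinks.volume_file]
type    = "file"
inputs  = ["fly_logs"]
path    = "/data/fly-%Y-%m-%d.log"   # path on the mounted volume (assumed)
encoding.codec = "text"
```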


I hope someone else has a proper solution, but until then here are some quick & dirty workarounds:

A. Change your app’s startup command to ORIG_CMD 2>&1 | tee -a /path/to/log. (Edit: where /path/to/log is a path on a mounted volume.)
Disadvantages: You’ll only get the messages your app writes to stdout/stderr, not the messages generated by Fly. And you won’t get timestamps (but see ts from moreutils). Edit: demo below.

B. Install flyctl inside your VM and run fly logs (filtered by the current instance to avoid duplicating the logs from other instances). Edit: clarification below.
Disadvantages: Hacky. Need to install flyctl and make your Fly access token available to the VM.

C. Run Fly Log Shipper with a custom file sink (as a separate app with a separate volume; or you could install & run Vector in your app; or implement the HTTP provider endpoint in your app and point the Fly Log Shipper at it).

Edit: See also Local Logging Setup, suggested by @ignoramous below. Note that it uses Loki as the datastore rather than a plain text file.


Thanks for the suggestions! I’m really hoping there’s a better, more straightforward way to achieve this… Having logs stored on disk seems like a straightforward thing for the infrastructure provider to support…

Btw, for option B, you meant run fly logs forever and output the results to a file/volume? Like ssh into the machine and manually run flyctl logs with the output redirected to a file/disk?

I wonder what happens on every deployment of my code, or on a VM restart? I would lose the flyctl logs process, no? (I know you said hacky, just confirming my questions are correct.) This is definitely “easier” than #C.

Actually, #A might work. I have a Python API with Flask, and Fly actually outputs a timestamp when an API request is received. So, in theory, I could apply #A to redirect logs to a local file (or volume if needed), and the timestamp would actually be included for Fly’s internal logs (not on my print statements, but I could include a timestamp in my print statements, or simply “guesstimate” it based on Fly’s API request log).
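If I do end up adding timestamps myself, Python’s standard logging module can do it, writing each message both to stdout (so it still shows up in fly logs) and to a file. A minimal sketch - the helper name and the file path app.log are just illustrative; on Fly I’d point the path at the mounted volume, e.g. /data/app.log:

```python
import logging
import sys

def make_logger(path="app.log"):
    # Illustrative helper: emit each message to stdout (so it still shows
    # up in `fly logs`) and append it to a file, with an ISO-8601 timestamp.
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter(
        "%(asctime)s [%(levelname)s] %(message)s",
        datefmt="%Y-%m-%dT%H:%M:%S",
    )
    for handler in (logging.StreamHandler(sys.stdout), logging.FileHandler(path)):
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    return logger

logger = make_logger()
logger.info("Waiting for request...")
logger.info("Request complete")
```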

You’re welcome. For B, I meant to start fly logs in the background from your startup command, for example by setting the command to (untested) fly logs -i "$FLY_ALLOC_ID" >>/path/to/log 2>&1 & ORIG_CMD. So you don’t have to manually ssh in, and when the VM restarts the logs process will be started automatically.

For A, here is a quick demo of using ts to add timestamps:

$ apt-get install -y moreutils
$ ./app
Waiting for request...
Processing request...
Request complete
$ ./app 2>&1 | ts
Apr 15 08:17:00 Waiting for request...
Apr 15 08:17:01 Processing request...
Apr 15 08:17:02 Request complete
$ ./app 2>&1 | TZ=UTC0 ts "%FT%.T app[${FLY_ALLOC_ID%%-*}] $FLY_REGION [info]"
2023-04-15T08:17:05.188238 app[b996131a] ams [info] Waiting for request...
2023-04-15T08:17:06.179019 app[b996131a] ams [info] Processing request...
2023-04-15T08:17:07.182022 app[b996131a] ams [info] Request complete

where ./app is the following dummy script:

#!/bin/sh
echo "Waiting for request..."
sleep 1
echo "Processing request..."
sleep 1
echo "Request complete"

Getting the log messages to appear both in your custom log file and in the output of fly logs, without duplicate timestamps, is slightly tricky; here is one way:

ORIG_CMD 2>&1 | tee -a /dev/stderr | TZ=UTC0 ts "%FT%.T app[${FLY_ALLOC_ID%%-*}] $FLY_REGION [info]" >>/path/to/log

Apart from what tom93 has suggested, you may also take a look at fly-log-local: Local Logging Setup


Hey @tom93 I just thought about one small thing.
Every time I do a deployment (I’m guessing a VM refresh?) I lose the files that I had on the disk before… meaning any logs from the previous deployment will not show up in the current deployment (if I were to do option A, for example).

Would this require me to create a volume and write the file to that volume so that it gets persisted?

I actually tried to create a volume but I’m getting other issues with this… 🙂

  1. I don’t see any value under ATTACHED_VM when I run fly volumes list
  2. Not sure if this is the culprit, but if it isn’t and I properly created a volume (after I ran the fly volumes create command) - how do I actually write data to this volume? Like a simple txt file?

For anyone looking at my previous reply - you need this in your .toml file…
(Of course, change the name to whatever name your Fly volume was created with, and change destination to whatever you’d like.)
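Something along these lines - the volume name logdata and the mount point /data here are just placeholders, not the actual values from my setup:

```toml
# Illustrative sketch -- volume name and mount point are placeholders.
[mounts]
  source = "logdata"     # the name you passed to `fly volumes create`
  destination = "/data"  # where the volume appears inside the VM
```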


Yes, glad you worked it out.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.