Non-root user lost permissions to write to /dev/stdout

TL;DR - Were there any recent changes to Fly’s infrastructure that could break an app’s ability to detect if it’s running in a container?

The long version

I’ve been running FusionAuth via a Docker image on Fly for a couple of months and things have been relatively smooth sailing. In the last day or two, though, the app suddenly stopped working. My ability to debug is hindered because FusionAuth is closed-source and logs very little, but according to the one error I’m getting back, the issue appears to be that FusionAuth recently started trying to write log files:

/usr/local/fusionauth/fusionauth-app/apache-tomcat/bin/ line 401: /usr/local/fusionauth/fusionauth-app/apache-tomcat/../../logs/fusionauth-app.log: Permission denied

FusionAuth is supposed to write to STDOUT instead of that log file when run in a container (see Monitoring FusionAuth). This behavior worked until very recently. I’ve already reached out to the FusionAuth team, but given that the exact same FusionAuth docker image was chugging along just fine and now refuses to start I’m wondering if something could have changed with Fly.

Any help is appreciated!

I’m poking around, but I kind of doubt it. The last major change to init happened 2 weeks ago, when we changed the way we ship the logs we collect off of VMs (they were going to a virtual TTY and now are routed through a vsock).

But that error message refers to an actual log file (I assume? If you ls it, it doesn’t show up as a pipe or a character device?).

Is there a way I can keep the VM up even if the app crashes and burns? One of the difficulties I’ve been having is the inability to get into the VM because it dies on startup.

If you control the Dockerfile for it (or can make a new Dockerfile with a FROM old-docker line in it), you can make a container with a startup script that runs FusionAuth in the background (like nohup /path/to/fusionauth &) and then tail -f /dev/null to keep the container itself alive.
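For concreteness, a rough sketch of that idea as a Dockerfile (the base image name is a placeholder for whatever image the app currently uses; /path/to/fusionauth stands in for the real launch command):

```dockerfile
# syntax=docker/dockerfile:1
# Placeholder base image -- swap in the actual FusionAuth image.
FROM my-old-fusionauth-image

# Entrypoint that launches the app in the background, then parks the
# container on tail so the VM stays up even if the app crashes,
# letting you shell in and poke around.
RUN printf '%s\n' \
      '#!/bin/sh' \
      'nohup /path/to/fusionauth > /tmp/fusionauth.out 2>&1 &' \
      'exec tail -f /dev/null' \
      > /entrypoint.sh \
 && chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
```

With this in place, a crash on startup leaves the container running, and the app’s output lands in /tmp/fusionauth.out for inspection.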

Ok I have more details:

  1. The log file is actually a symbolic link pointing at /dev/stdout. Found this in the upstream Dockerfile:
/bin/sh -c mkdir -p /usr/local/fusionauth/logs   && touch /usr/local/fusionauth/logs/fusionauth-app.log   && ln -sf /dev/stdout /usr/local/fusionauth/logs/fusionauth-app.log
  2. The app is not run as root. From the same Dockerfile:

USER fusionauth

Turns out my first theory was off — FusionAuth is still attempting to write to stdout, but it seems to have lost permission to do so. I’ll update the thread title for future searchability. Any chance something changed regarding permissions to /dev/stdout?

Welp, I blew away the old app and created an entirely new one, and /dev/stdout is writable again. For the record, I did try suspending/resuming, restarting, and scaling up/down the old app, but destroy was the only effective command :sweat_smile:

You caught this before I could reply (sorry!) but I don’t think we’ve changed anything that would impact the permissions on /dev/stdout (we did change how init wires stdout up, but /dev/stdout is just Linux’s way of referring to the stdout of any given process; you should always have permissions to write to it).
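To make that concrete, here’s a quick sketch of how to inspect it from inside the VM (as the app’s user). /dev/stdout is a symlink that ultimately resolves to whatever the calling process’s file descriptor 1 is attached to, so write permission is really about that underlying target (a TTY, pipe, or file), not about /dev/stdout itself:

```shell
#!/bin/sh
# Where does /dev/stdout actually point?
readlink /dev/stdout       # typically /proc/self/fd/1
readlink -f /dev/stdout    # the real target: a TTY, pipe, or file

# Can the current user open it for writing? (>> avoids truncating
# the target if stdout happens to be a regular file.)
if ( : >> /dev/stdout ) 2>/dev/null; then
  echo "can open /dev/stdout for writing"
else
  echo "open failed -- check ownership/mode of the resolved target"
fi
```

If the open fails for a non-root user, the permissions to look at are those of the resolved target from readlink -f, since that’s what the kernel checks on open.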

I wrote this more forcefully at first but then edited it back because Linux can be mysterious and I’m leaving room for amazement.


Haha “room for amazement” is too real!

This happened again today. Tried swapping VMs, restarting, etc., and nothing helped. I’ve worked around it for now by running the app as root.