Volume mysql Pending destroy

I’m having a critical issue with my MySQL app. The VM is stuck in a restart loop and can’t boot properly. Looking at the logs, I’m seeing tons of I/O errors and filesystem corruption on the volume mounted at /data.

There are a bunch of errors like:

  • “Input/output error” when trying to access MySQL files
  • “EXT4-fs error” messages about being unable to read inode blocks
  • “Buffer I/O error on dev vdc”

The machine has hit its max restart count of 10 and just keeps failing. I also noticed the volume is showing a “Pending_destroy” status in the dashboard, which might be related to these issues.
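
(For anyone hitting something similar: the restart history and the kernel-level errors can be confirmed from the CLI, roughly like this. The app name my-mysql-app is a placeholder.)

    # App name is a placeholder; substitute your own.
    fly machines list -a my-mysql-app        # find the Machine ID and its current state
    fly machine status <machine-id>          # event history, including the failed restarts

    # If the Machine stays up long enough to accept SSH, check the kernel log directly:
    fly ssh console -a my-mysql-app -C dmesg | grep -iE "i/o error|ext4-fs|vdc"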

Could you help me figure out what’s happening here? Is this a hardware failure with the volume?

My app is currently down and I’d really appreciate some help getting it back online.

Yikes… That does sound like a disk error. :dragon:

You have three volumes, though, and the one that’s pending destroy isn’t attached to a Machine currently (unless I’m misreading the screenshot).

Were the top two created today from snapshots? Are those behaving any better?
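
(The attachment detail is visible from the CLI as well; a quick way to cross-check the screenshot, with the app name as a placeholder:)

    # Shows each volume's state and which Machine, if any, it is attached to.
    fly volumes list -a my-mysql-app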


Yes, the first one was created automatically. The second one I created to see if I could restore the database.

After about an hour, everything went back to normal without any intervention. It’s strange.
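
(For reference, restoring into a fresh volume from a snapshot looks roughly like this; the IDs, region, and app name are placeholders, and the new volume still has to be mounted by the Machine before MySQL can use it.)

    # List snapshots of the broken volume (the volume ID comes from `fly volumes list`).
    fly volumes snapshots list vol_xxxxxxxxxxxx

    # Create a new volume from a snapshot, in the same region as the Machine.
    fly volumes create mysql --snapshot-id vs_xxxxxxxxxxxx --region ord -a my-mysql-app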


Glad to hear that it’s working again… 13GB would have been a lot to restore from an rsync backup, etc.
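
(For comparison, an rsync-style restore of that much data would look something like this, assuming the Machine could reach a backup host over SSH; the host name and paths are placeholders, and the snapshot route above is far simpler.)

    # Pull the backup back onto the mounted volume; host and paths are placeholders.
    rsync -avz --progress backup-host:/backups/mysql/ /data/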

Perhaps this was a glitch during Machine migration. :thought_balloon:
