Automatic migration: undeletable volumes

Hello, the automatic migration seems to have been triggered for my app recently, but it appears to have failed: the dashboard still showed the app running on V1, and there were three volumes (the original one I created last November, plus one created on July 31 and one on August 2).
I tried running the migrate-to-v2 command myself and got this log:

[1/2] Updating 9080e96ec19ed8 [app]
failed while migrating: failed to update VM 9080e96ec19ed8: aborted: could not reserve resource for machine: insufficient CPUs available to fulfill request
? Would you like to enter interactive troubleshooting mode? If not, the migration will be rolled back. Yes

Oops! We ran into issues migrating your app.
We're constantly working to improve the migration and squash bugs, but for
now please let this troubleshooting wizard guide you down a yellow brick road
of potential solutions...
               ,,,,,
       ,,.,,,,,,,,, .
   .,,,,,,,
  ,,,,,,,,,.,,
     ,,,,,,,,,,,,,,,,,,,
         ,,,,,,,,,,,,,,,,,,,,
            ,,,,,,,,,,,,,,,,,,,,,
           ,,,,,,,,,,,,,,,,,,,,,,,
        ,,,,,,,,,,,,,,,,,,,,,,,,,,,,.
   , ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,

The app's platform version is 'detached'
This means that the app is stuck in a half-migrated state, and wasn't able to
be fully recovered during the migration error rollback process.

Fixing this depends on how far the app got in the migration process.
Please use these tools to troubleshoot and attempt to repair the app.
No legacy Nomad VMs found. Setting platform version to machines/Apps V2.

After that, the app did appear to be running on V2, at least according to the dashboard. I then tried to remove the extra volumes, but I couldn't delete the original one from November: it shows as attached to machine 9080e96ec19ed8, which was stopped… so I tried to destroy that machine. The CLI reported it as destroyed:

$ fly machines destroy 9080e96ec19ed8
machine 9080e96ec19ed8 was found and is currently in stopped state, attempting to destroy...
9080e96ec19ed8 has been destroyed
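For reference, the volume cleanup I attempted looked roughly like this (the app name and volume ID below are placeholders, not my real ones):

```shell
# List the app's volumes to find the stale one from November
fly volumes list -a my-app

# Attempt to delete the stale volume (placeholder ID)
fly volumes destroy vol_xxxxxxxxxxxx
```

The destroy step is what fails, because the volume is reported as attached to the stopped machine.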

But the volume still shows as attached to that machine, so it cannot be deleted, and the machine no longer appears in the CLI's machine list. On the web dashboard, machine 9080e96ec19ed8's state is not “destroyed” but “failed”. What can I do to remove the extra volume?
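In case it helps, the sequence I was considering next (but haven't confirmed is safe for a machine stuck in the "failed" state) would be a forced destroy followed by another volume delete; the volume ID here is a placeholder:

```shell
# Force-destroy the machine even though its state is "failed"
fly machines destroy 9080e96ec19ed8 --force

# Then retry deleting the now-detached volume (placeholder ID)
fly volumes destroy vol_xxxxxxxxxxxx
```

I'd appreciate confirmation on whether --force is the right approach here, or whether the volume needs to be detached on Fly's side first.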
