Production DB down when trying to downscale

I tried to downscale my production db from dedicated-cpu-1x to shared-cpu-1x, and the database has been failing ever since.
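For reference, the downscale was done with something along these lines (roughly from memory):

    # scale the Postgres app's VM down from dedicated-cpu-1x to shared-cpu-1x
    fly scale vm shared-cpu-1x -a indiepaper-production-db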

2022-03-17T06:02:33Z app[3ad0b9ea] ewr [info]exporter | INFO[0196] Established new database connection to "fdaa:0:3565:a7b:ab2:0:44a3:2:5433".  source="postgres_exporter.go:970"
2022-03-17T06:02:35Z app[3ad0b9ea] ewr [info]exporter | INFO[0197] Established new database connection to "fdaa:0:3565:a7b:ab2:0:44a3:2:5433".  source="postgres_exporter.go:970"
2022-03-17T06:02:44Z app[3ad0b9ea] ewr [info]sentinel | 2022-03-17T06:02:44.301Z	WARN	cmd/sentinel.go:276	no keeper info available	{"db": "343bebf5", "keeper": "14bf0452e2"}
2022-03-17T06:02:44Z app[3ad0b9ea] ewr [info]sentinel | 2022-03-17T06:02:44.301Z	WARN	cmd/sentinel.go:276	no keeper info available	{"db": "4f08d7d9", "keeper": "ab30abd82"}
2022-03-17T06:02:44Z app[3ad0b9ea] ewr [info]sentinel | 2022-03-17T06:02:44.301Z	WARN	cmd/sentinel.go:276	no keeper info available	{"db": "e496e425", "keeper": "ab3044a42"}
2022-03-17T06:02:44Z app[3ad0b9ea] ewr [info]sentinel | 2022-03-17T06:02:44.303Z	INFO	cmd/sentinel.go:995	master db is failed	{"db": "e496e425", "keeper": "ab3044a42"}
2022-03-17T06:02:44Z app[3ad0b9ea] ewr [info]sentinel | 2022-03-17T06:02:44.303Z	INFO	cmd/sentinel.go:1001	db not converged	{"db": "e496e425", "keeper": "ab3044a42"}
2022-03-17T06:02:44Z app[3ad0b9ea] ewr [info]sentinel | 2022-03-17T06:02:44.303Z	INFO	cmd/sentinel.go:1006	trying to find a new master to replace failed master
2022-03-17T06:02:44Z app[3ad0b9ea] ewr [info]sentinel | 2022-03-17T06:02:44.304Z	ERROR	cmd/sentinel.go:1009	no eligible masters

The name of the database is indiepaper-production-db.

In a last-ditch effort, I switched to one instance and deleted one of the volumes. How do I restore this db?

Hey @aswinmohanme,

Looks like you may have deleted the volume holding your primary’s data. I made the volume you deleted visible through the CLI so you can view the snapshots associated with it.
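For example, you can list the snapshots for that volume with something like this (the volume ID below is a placeholder for the real one shown in the CLI):

    # list the snapshots taken of the deleted volume
    fly volumes snapshots list vol_xxxxxxxxxxxx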

Here are some docs to guide you through provisioning a new cluster from a snapshot:
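The gist of the restore flow is roughly the following (the snapshot ID is a placeholder taken from the snapshot listing above; double-check the current flags with `fly postgres create --help`):

    # provision a new Postgres cluster from an existing volume snapshot
    fly postgres create --snapshot-id vs_xxxxxxxxxxxx --name indiepaper-production-db-restored

Once the new cluster is healthy, you'd point your app at it (e.g. by updating its connection string / DATABASE_URL).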