I have a feature that runs crawlers for short periods of time. Currently I just use a 256M machine for an HTTP crawl, and a 2048M machine for a browser-based crawl. I’d like to mark my database crawl records with a rough cost, i.e. the hourly price from the pricing list multiplied by the hours the machine was running.
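Roughly what I have in mind is a sketch like the one below. The price values are placeholders rather than actual numbers from the pricing list, and the spec names and function are just my own illustration:

```python
from datetime import datetime, timezone

# Placeholder hourly prices per machine spec.
# These are NOT real prices -- fill them in from the current pricing list.
HOURLY_PRICE_BY_SPEC = {
    "256M": 0.002,    # placeholder
    "2048M": 0.02,    # placeholder
}

def rough_cost(spec: str, started_at: datetime, stopped_at: datetime) -> float:
    """Hourly price for the spec multiplied by the hours the machine ran."""
    hours = (stopped_at - started_at).total_seconds() / 3600
    return HOURLY_PRICE_BY_SPEC[spec] * hours

# e.g. a 45-minute browser crawl on the 2048M machine
start = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
stop = datetime(2024, 1, 1, 12, 45, tzinfo=timezone.utc)
print(rough_cost("2048M", start, stop))
```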
I’d be interested to know whether this pricing list is highly dynamic, or whether the prices have been fairly static over time. I suspect I will start off using the prices as they are now, but if they are frequently in flux, I might work out a way to record a list of spec-region prices, so I can store the price that applied at the time of use.
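If prices do turn out to change often, I'd probably keep a small spec/region price-history table and cost each crawl at the price in effect when it ran. Something along these lines, where the schema and names are only a sketch, not a real design:

```python
import sqlite3
from datetime import datetime

# Sketch of a spec/region price-history table: each row records when a
# price took effect, so a crawl can be costed at the price valid at its time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE machine_price (
        spec TEXT NOT NULL,
        region TEXT NOT NULL,
        hourly_price REAL NOT NULL,
        effective_from TEXT NOT NULL  -- ISO 8601 timestamp
    )
""")

def price_at(spec: str, region: str, at: datetime) -> float:
    """Most recent recorded price for the spec/region at or before `at`."""
    row = conn.execute(
        """SELECT hourly_price FROM machine_price
           WHERE spec = ? AND region = ? AND effective_from <= ?
           ORDER BY effective_from DESC LIMIT 1""",
        (spec, region, at.isoformat()),
    ).fetchone()
    if row is None:
        raise LookupError(f"no price recorded for {spec} in {region}")
    return row[0]
```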