I’m building an application that uses memoization to avoid repeating heavy computations between function invocations. Is this pattern compatible with a serverless platform like Fly? My initial testing suggests it isn’t; there seems to be no way to preserve in-process memory state across queries.
Fly is not serverless in this way. Fly deploys long-lived containers for your application, so memory state is preserved between requests to a single container; you can cache results in memory and they will still be there on subsequent requests to that instance.
@Eric_Pauley is correct. If you need this data to persist longer than the life of a VM you can persist data in files on a volume or an external service like redis.
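For example, here’s a minimal sketch of file-backed caching. The path and helper names are illustrative; on Fly you’d point `CACHE_FILE` at a path on a mounted volume (such as `/data`) so the cache survives VM restarts:

```python
import json
import os

# Illustrative path; on Fly this would live on a mounted volume, e.g. /data/cache.json
CACHE_FILE = "/tmp/cache.json"

def load_cache():
    """Return the cached data, or None if no cache file exists yet."""
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    return None

def save_cache(data):
    """Persist the cache to disk so it outlives the VM's in-process memory."""
    with open(CACHE_FILE, "w") as f:
        json.dump(data, f)

save_cache({"answer": 42})
print(load_cache())  # → {'answer': 42}
```

The same shape works with Redis: swap the file reads/writes for `GET`/`SET` calls, which also lets multiple VMs share one cache.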
@E.G can you describe what type of app you’re building and what you’re trying to do? What do you mean by query there? Are you looking to snapshot a VM and restore its memory state later?
As mentioned earlier, we’re not a “functions as a service” platform. VMs are long-lived and can handle many requests during their lifetime. Caching in memory is common, but the cache is lost when a VM stops. We’re hoping to offer snapshots and restores at some point, though.
Thanks Michael and Eric. That makes sense. A simple python script should illustrate my question. By query I’m referring to an API query. Just asking about the lifecycle of the vm and the state of memory between invocations. I think your first answer was what I needed.
```python
CACHE = None

def populate_cache(data_seed):
    global CACHE
    # resource-intensive cache population code w/ request data seed
    # ...

def get_cached_data(request):
    global CACHE
    if CACHE is None:
        populate_cache(request.data)
    return CACHE
```