Hello, I searched the Tigris docs and didn’t find any information on this topic. I’d like to learn how to scale throughput and how an application should be designed to avoid hitting set limits. Some things I’m thinking about:
S3 has a per-account limit on buckets: 100 by default, which can be increased up to 1,000 per account. Are there any limits on the number of buckets that can be created in Tigris per organization?
Is there a rate limit on storage requests, and is a 503 returned when it is hit?
S3: The maximum size of a single object is 5 terabytes. Is this the same for Tigris?
S3: you can perform 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned S3 prefix. Along with using multiple clients, using distributed prefixes is recommended to scale to higher levels of throughput. Is this a design consideration that we should be thinking about for Tigris as well or does this not apply?
Tigris has no limit on the number of buckets per organization.
There is no rate limit on storage requests. A 5XX status would only be returned due to a server-side issue, never because a limit was hit.
Yes, the maximum size of a single object in Tigris is also 5TB.
To scale throughput you can employ parallelization: use multipart uploads and chunked (ranged) downloads. The S3 SDKs support parallelizing requests to large objects to maximize throughput. Beyond parallelizing requests, there are no additional design considerations (such as distributing key prefixes) when using Tigris.
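To make the parallelization point concrete, here is a minimal sketch of the chunked-download pattern that the S3 SDKs (e.g. boto3's `TransferConfig`) automate for you. `fetch_range` stands in for a ranged GET of one part of the object (for instance, `GetObject` with a `Range` header); the function names and part sizes here are illustrative assumptions, not a Tigris API:

```python
# Sketch: download an object as byte ranges fetched in parallel, then
# reassemble in order. `fetch_range(start, end)` is a stand-in for a
# ranged GET request (end is exclusive).
from concurrent.futures import ThreadPoolExecutor

def part_ranges(total_size, part_size):
    """Split [0, total_size) into (start, end) byte ranges."""
    return [(start, min(start + part_size, total_size))
            for start in range(0, total_size, part_size)]

def parallel_download(fetch_range, total_size,
                      part_size=8 * 1024 * 1024, max_workers=10):
    """Fetch all parts concurrently and join them in original order."""
    ranges = part_ranges(total_size, part_size)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so the parts concatenate correctly.
        parts = list(pool.map(lambda r: fetch_range(*r), ranges))
    return b"".join(parts)
```

In practice you wouldn't hand-roll this: with boto3, for example, passing a `TransferConfig` with `multipart_chunksize` and `max_concurrency` to `upload_file`/`download_file` gives you the same parallel multipart behavior against an S3-compatible endpoint such as Tigris.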
Do let me know if you have additional questions or comments.