Tigris - Performance, Quotas, and Limits

Hello, I searched the Tigris docs and didn’t find any information on this topic. I’m interested in learning how to scale throughput and how an application should be designed to avoid hitting any set limits. Some things I’m thinking about:

  1. S3 has a per-account limit on buckets: 100 by default, which can be increased to 1,000. Are there any limits on the number of buckets that can be created per organization in Tigris?
  2. Is there a rate limit on storage requests, and is a 503 status code returned in that event?
  3. S3: The maximum size of a single object is 5 terabytes. Is this the same for Tigris?
  4. S3: You can perform 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned prefix. Along with using multiple clients, distributing keys across prefixes is recommended to scale to higher levels of throughput. Is this a design consideration we should also be thinking about for Tigris, or does it not apply?

Hi @thiery, let me respond to your questions:

  1. Tigris has no limit on the number of buckets per organization.
  2. There is no limit on the number of storage requests. In any case, a 5XX status code would only be returned due to a server-side issue, not due to hitting any limits.
  3. Yes, the maximum size of a single object in Tigris is also 5TB.
  4. To scale throughput you can employ parallelization, for example, multipart uploads and chunked downloads. The S3 SDKs support parallelizing requests for large objects to maximize throughput; see the sketch after this list. Beyond parallelizing requests, there are no additional considerations when using Tigris.
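
Here is a minimal sketch of that parallelization approach using boto3's `TransferConfig`. The endpoint URL, bucket name, and file names below are illustrative assumptions, not values from this thread:

```python
import boto3
from boto3.s3.transfer import TransferConfig

# Point the standard S3 client at Tigris (endpoint assumed here; use the one
# configured for your Tigris account).
s3 = boto3.client("s3", endpoint_url="https://fly.storage.tigris.dev")

# One config drives both multipart uploads and chunked, parallel downloads.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,   # switch to multipart above 8 MiB
    multipart_chunksize=16 * 1024 * 1024,  # 16 MiB parts/chunks
    max_concurrency=16,                    # parallel part requests
    use_threads=True,
)

# Multipart upload: parts are uploaded concurrently by the SDK.
s3.upload_file("backup.tar", "example-bucket", "backups/backup.tar", Config=config)

# Chunked download: byte ranges are fetched concurrently by the SDK.
s3.download_file("example-bucket", "backups/backup.tar", "backup-restored.tar", Config=config)
```

Tuning `max_concurrency` and `multipart_chunksize` to your object sizes and network is usually all that is needed to saturate throughput from a single client.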

Do let me know if you have additional questions or comments.


@ovaistariq thanks for the info. Will reach out if I have any other questions.

