Please define the delete consistency behaviour

I have made a small example project to highlight the query I have:

I understand that I can have conditional updates on objects, but what about deleting objects? What consistency can I expect to observe?
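
For reference, by a conditional update I mean a compare-and-swap style write gated on the ETag of my last read, roughly like the sketch below (the bucket name is a placeholder, and whether an IfMatch condition is honoured on PUT is exactly the kind of behaviour I'm unsure about):

```ts
// Rough sketch of what I mean by a conditional update: a CAS-style write that
// only succeeds if the object's ETag still matches the one from my last read.
// The bucket name is a placeholder, and whether IfMatch is honoured on PUT is
// exactly the kind of behaviour I'm asking about.
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "auto", endpoint: "https://fly.storage.tigris.dev" });
const Bucket = "example-bucket"; // placeholder

// Read the current version and remember its ETag.
const current = await s3.send(new GetObjectCommand({ Bucket, Key: "some/key" }));

// Overwrite only if nobody else has written since; otherwise this fails with 412.
await s3.send(new PutObjectCommand({
  Bucket,
  Key: "some/key",
  Body: "new value",
  IfMatch: current.ETag,
}));
```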

If I want consistent deletions, should I use a tombstone object to represent a deletion?

If I use a tombstone, what TTL should I set on it to ensure that all the caching has definitely caught up and will consistently view the object as deleted?
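
To be concrete, by a tombstone I mean something like the following sketch: instead of a real DeleteObject, overwrite the key with a zero-byte marker and have readers treat that marker as deleted (the bucket name and the deleted-at metadata key are just placeholders I made up):

```ts
// Sketch of the tombstone idea: instead of DeleteObject, overwrite the key with
// a zero-byte marker object, and have readers treat that marker as "deleted".
// The bucket name and the "deleted-at" metadata key are placeholders.
import { S3Client, PutObjectCommand, HeadObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "auto", endpoint: "https://fly.storage.tigris.dev" });
const Bucket = "example-bucket"; // placeholder

// "Delete" by writing a zero-byte tombstone over the key.
async function tombstone(key: string): Promise<void> {
  await s3.send(new PutObjectCommand({
    Bucket,
    Key: key,
    Body: new Uint8Array(0),
    Metadata: { "deleted-at": new Date().toISOString() },
  }));
}

// Readers check for the marker metadata rather than relying on a 404.
async function isDeleted(key: string): Promise<boolean> {
  const head = await s3.send(new HeadObjectCommand({ Bucket, Key: key }));
  return head.Metadata?.["deleted-at"] !== undefined;
}
```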

Given my goal of consistent deletions, is there some other way I would be better off using Tigris to accomplish it?

Thank you for making Tigris.


@inverted-capital This is something we are actively working on: simplifying this and offering it through a single consistency header that can be used for PUT, GET, LIST, and DELETE operations, providing strong consistency in the cases where it is needed. I’m optimistic that we’ll have this ready within a week. To answer your original question, the CAS header that you’re using won’t be sufficient for deletes, which is why we’re streamlining everything down to one header.

That’s exciting - thank you. I will use a tombstone in the meantime while I eagerly await your update :star_struck:


@inverted-capital as promised we have released the feature. You can find documentation on our consistency model here, and additional details on how to use it are available here. Let us know how it goes.
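
If it helps while you try it out, per-request usage from the JS SDK could look roughly like the sketch below; the header name here is an assumption, so please confirm the exact name and values against the docs linked above:

```ts
// Rough sketch: opt a single GET into strong consistency by adding a header via
// per-command middleware. The header name "X-Tigris-Consistent" is an assumption
// here -- please confirm the exact name and values in the docs linked above.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "auto", endpoint: "https://fly.storage.tigris.dev" });

const cmd = new GetObjectCommand({ Bucket: "example-bucket", Key: "some/key" });

// Middleware added to the command (not the client) affects only this request.
cmd.middlewareStack.add(
  (next: any) => async (args: any) => {
    args.request.headers["X-Tigris-Consistent"] = "true";
    return next(args);
  },
  { step: "build" },
);

const res = await s3.send(cmd);
```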


Thank you - that’s amazing. I’m very excited to give this a spin and I’ll report back on how I get on.

I love this interface - I can easily have the full bucket be consistent, or be specific on a per-request basis. I am observing the consistency behaviour that I want, though I have not tested multi-region consistency. Here are some benchmark results:

Task bench:store deno bench --allow-read --allow-env --allow-net store.bench.ts
Check file:///napps/blobstore-tigris/store.bench.ts
    CPU | AMD EPYC 7702P 64-Core Processor
Runtime | Deno 2.2.4 (x86_64-unknown-linux-gnu)

file:///napps/blobstore-tigris/store.bench.ts

benchmark                  time/iter (avg)        iter/s      (min … max)           p75      p99     p995
-------------------------- ----------------------------- --------------------- --------------------------
Upload 128 bytes                  224.6 ms           4.5 (214.9 ms … 267.3 ms) 224.9 ms 267.3 ms 267.3 ms
Upload 256 bytes                  269.4 ms           3.7 (216.9 ms … 377.8 ms) 304.4 ms 377.8 ms 377.8 ms
Upload 512 bytes                  243.9 ms           4.1 (216.7 ms … 328.7 ms) 233.6 ms 328.7 ms 328.7 ms
Upload 1024 bytes                 475.7 ms           2.1 (405.6 ms … 699.9 ms) 491.9 ms 699.9 ms 699.9 ms
Download 128 bytes                199.6 ms           5.0 (189.6 ms … 254.2 ms) 195.7 ms 254.2 ms 254.2 ms
Download 256 bytes                194.8 ms           5.1 (190.5 ms … 209.3 ms) 194.9 ms 209.3 ms 209.3 ms
Download 512 bytes                197.6 ms           5.1 (189.4 ms … 235.5 ms) 197.2 ms 235.5 ms 235.5 ms
Download 1024 bytes               387.0 ms           2.6 (379.3 ms … 393.2 ms) 388.5 ms 393.2 ms 393.2 ms
Download Weak 128 bytes           200.5 ms           5.0 (192.2 ms … 228.4 ms) 200.1 ms 228.4 ms 228.4 ms
Download Weak 256 bytes           201.4 ms           5.0 (194.7 ms … 232.5 ms) 202.3 ms 232.5 ms 232.5 ms
Download Weak 512 bytes           204.3 ms           4.9 (194.0 ms … 247.3 ms) 199.8 ms 247.3 ms 247.3 ms
Download Weak 1024 bytes          408.4 ms           2.4 (383.4 ms … 602.5 ms) 393.3 ms 602.5 ms 602.5 ms
Exists 128 bytes                  196.7 ms           5.1 (191.5 ms … 212.6 ms) 196.3 ms 212.6 ms 212.6 ms
Exists 256 bytes                  198.4 ms           5.0 (190.9 ms … 214.5 ms) 197.2 ms 214.5 ms 214.5 ms
Exists 512 bytes                  196.0 ms           5.1 (190.2 ms … 212.8 ms) 196.2 ms 212.8 ms 212.8 ms
Exists 1024 bytes                 199.6 ms           5.0 (188.9 ms … 249.7 ms) 194.5 ms 249.7 ms 249.7 ms
Exists Weak 128 bytes             194.7 ms           5.1 (191.3 ms … 198.3 ms) 196.8 ms 198.3 ms 198.3 ms
Exists Weak 256 bytes             198.7 ms           5.0 (194.0 ms … 218.2 ms) 198.7 ms 218.2 ms 218.2 ms
Exists Weak 512 bytes             204.8 ms           4.9 (191.7 ms … 257.1 ms) 203.7 ms 257.1 ms 257.1 ms
Exists Weak 1024 bytes            202.3 ms           4.9 (191.3 ms … 258.6 ms) 199.1 ms 258.6 ms 258.6 ms
Delete 128 bytes                  248.2 ms           4.0 (213.0 ms … 378.2 ms) 277.3 ms 378.2 ms 378.2 ms
Delete 256 bytes                  229.8 ms           4.4 (202.6 ms … 317.9 ms) 231.0 ms 317.9 ms 317.9 ms
Delete 512 bytes                  231.3 ms           4.3 (204.7 ms … 328.7 ms) 230.9 ms 328.7 ms 328.7 ms
Delete 1024 bytes                 243.7 ms           4.1 (207.5 ms … 336.1 ms) 264.3 ms 336.1 ms 336.1 ms

This is the ping I see:

PING fly.storage.tigris.dev (2a09:8280:1::24:a5c5) 56 data bytes
64 bytes from 2a09:8280:1::24:a5c5: icmp_seq=1 ttl=56 time=226 ms
64 bytes from 2a09:8280:1::24:a5c5: icmp_seq=2 ttl=56 time=226 ms
64 bytes from 2a09:8280:1::24:a5c5: icmp_seq=3 ttl=56 time=226 ms
64 bytes from 2a09:8280:1::24:a5c5: icmp_seq=4 ttl=56 time=226 ms

This is the source code for the tests:

That’s great to hear! Thanks for quickly testing it.


Hey @himank - I have noticed that if I run on a fly.io machine in Sydney and use strong consistency, the region for the object ends up being San Jose. If I set the whole bucket to only store in Sydney, then the object ends up in Sydney.

Under the hood, is there something else we should be aware of when doing a strongly consistent write? For example, to be assured of global consistency, does it need to do a check with all regions or something, and does that skew it to pick a region on a global scale rather than the region that initiated the put?

Hey @inverted-capital, you can think of it as relying on Tigris to automatically choose the leader region for you versus enforcing it by explicitly saying don’t look outside of region ‘X’.

By default, an automatic leader is assigned to your bucket, and it is used for any strongly consistent operation. In your case, San Jose is the default leader for the bucket. However, when you enforce a region restriction at the bucket or request level, the default leader is no longer preferred; instead, the leader close to the region set at the bucket or request level takes precedence, which in your case would be Sydney. To make it strictly use a region close to you, I’d suggest passing a region header along with the consistency header (as mentioned here). I hope this answers your question.
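
As a rough sketch, pairing the two headers on a single PUT could look like this from the JS SDK; the header names and the region code below are assumptions in this example, so please verify them against the consistency docs:

```ts
// Sketch: a strongly consistent PUT pinned near Sydney by sending a region header
// alongside the consistency header. The header names and the "syd" region code are
// assumptions in this example -- please verify them against the consistency docs.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "auto", endpoint: "https://fly.storage.tigris.dev" });

const put = new PutObjectCommand({
  Bucket: "example-bucket",
  Key: "some/key",
  Body: "hello",
});

put.middlewareStack.add(
  (next: any) => async (args: any) => {
    args.request.headers["X-Tigris-Consistent"] = "true"; // strong consistency
    args.request.headers["X-Tigris-Regions"] = "syd";     // keep the leader close
    return next(args);
  },
  { step: "build" },
);

await s3.send(put);
```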

Ok, understood. My opinion is that the default leader should be set to the closest region, since that fits with the broader Tigris philosophies outlined in the docs.

This behaviour dilutes, or at least complicates, the strong regional consistency statements:

By default, Tigris offers strict read-after-write consistency within the same region and eventual consistency globally. This means that if you write data in, for example, the San Jose region and read from the same region, the data will be strongly consistent.

The confusion arises when I write to what I think is my local region with strong consistency, but a far-away default leader has actually been chosen, and I then think I can read without specifying the consistency header since I am presumably local. But I am not.

I am grateful for the consistency features, don’t get me wrong, and the combination with a region header works for me. I am mentioning this for the sake of ergonomics: figuring out what region I want and adding the extra header to get a close leader feels clumsy.

Just to paint my gratitude a little deeper: thanks to this feature we have been able to ditch an entire database component from our application. An entire database. PLUS we can provide localized regional processing, since the leader can be wherever the processing is occurring.

One last question that I could not find an answer to in the docs: once the leader is set, is it always set? So doing a write in another region, with strong consistency, will not move the leader to that new region? To put it simply, is the leader chosen at creation time and fixed for the lifetime of the key?

I appreciate the feedback, and thank you for using it. Rest assured we will be improving the DX of this feature based on your feedback.

One last question that I could not find an answer to in the docs: once the leader is set, is it always set? So doing a write in another region, with strong consistency, will not move the leader to that new region? To put it simply, is the leader chosen at creation time and fixed for the lifetime of the key?

Yes, your understanding is correct. This is the constraint right now: once the default leader is set for your bucket, it is not changed.

Thanks for the clarification. The best experience I can imagine is that the leader moves to the location closest to where the last write occurred. It would also be helpful to get leader information back in the headers of an object.

In your docs, the word “leader” appears zero times. Might I suggest that such a crucial concept get its own page in the concepts section?

Also, consistency appears to be the second biggest selling point for Tigris after transparent caching, yet it does not get a mention on the Overview page, even though the concept is so fundamental that you have built it in right from the outset of Tigris.

“Globally distributed, globally consistent, any size object” - this is kind of your motto, I think?

I came for the global distribution, but I’m definitely staying for the global consistency.

The best experience I can imagine is that the leader moves to the location closest to where the last write occurred.

The current focus is mainly on sticky leadership for a bucket, or allowing users to control leadership by selecting a region at the request level. Dynamic leadership is something we definitely aim to achieve; it’s not part of our immediate near-term plans, but it’s certainly on our roadmap.

In your docs, the word “leader” appears zero times. Might I suggest that such a crucial concept get its own page in the concepts section?

Thank you for the feedback. We will be updating the docs shortly to include more details about leadership, and we will highlight it to make it clearer.

“Globally distributed, globally consistent, any size object” - this is kind of your motto, I think?
I came for the global distribution, but I’m definitely staying for the global consistency.

Awesome! We’re thrilled to have you using Tigris and really appreciate you sharing your valuable feedback with us. Glad you’re staying :slight_smile:
