[Golang] PutObject Failing with StatusCode: 501 (NotImplemented)

Hello everyone :wave:

I hope you are having a good end to the week. I, on the other hand, went on an S3 scavenger hunt :sweat_smile:

For context, my production code works fine, but I haven't run any deployments or dependency upgrades in a few days. Today, I was working on a feature on my local machine, and after upgrading my deps (go get -u ./... and go mod tidy) all my requests to Tigris started failing with the following error:

operation error S3: PutObject, https response error StatusCode: 501, RequestID: 1737729871240111208, HostID: , api error NotImplemented: A header you provided implies functionality that is not implemented

I tried connecting to a different bucket, changing permissions, etc., but with no luck, and the error message isn't very clear as to what is not implemented. However, I came across the AWS S3 Go SDK changelog for v1.73.0 (15/01/2025):

Feature: S3 client behavior is updated to always calculate a checksum by default for operations that support it (such as PutObject or UploadPart), or require it (such as DeleteObjects). The checksum algorithm used by default now becomes CRC32. Checksum behavior can be configured using when_supported and when_required options - in code using RequestChecksumCalculation, in shared config using request_checksum_calculation, or as env variable using AWS_REQUEST_CHECKSUM_CALCULATION.

So it seems that the checksum calculation is causing errors on Tigris' side? I ended up fixing the error by requesting that the checksum be calculated only when required:

import (
  "github.com/aws/aws-sdk-go-v2/aws"
  "github.com/aws/aws-sdk-go-v2/service/s3"
)
// config, u (the endpoint URL), and r (the region) come from earlier setup.
svc := s3.NewFromConfig(config, func(o *s3.Options) {
  o.BaseEndpoint = aws.String(u)
  o.Region = r
  o.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired // new line <----
})
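
As the changelog notes, the same setting can also come from shared config or the environment instead of code. A minimal sketch, assuming the variable names quoted from the changelog and the usual default-config loading:

package main

import (
  "context"
  "log"

  "github.com/aws/aws-sdk-go-v2/config"
  "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
  // With AWS_REQUEST_CHECKSUM_CALCULATION=when_required exported in the
  // environment (or request_checksum_calculation = when_required in
  // ~/.aws/config), no code-level override is needed.
  cfg, err := config.LoadDefaultConfig(context.TODO())
  if err != nil {
    log.Fatal(err)
  }
  _ = s3.NewFromConfig(cfg) // checksum behavior now comes from the environment
}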

I am not sure if this is a robust solution, as I haven't tested DeleteObjects (which requires a checksum) yet; a quick way to check is sketched below. But it's something that I wanted to bring to the community's attention, as it might start affecting people upgrading their deps.
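
For reference, here is a minimal sketch of that check, with placeholder bucket and key names; since the changelog lists DeleteObjects as an operation that requires a checksum, the SDK should still compute one even under WhenRequired:

package main

import (
  "context"
  "log"

  "github.com/aws/aws-sdk-go-v2/aws"
  "github.com/aws/aws-sdk-go-v2/config"
  "github.com/aws/aws-sdk-go-v2/service/s3"
  "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
  cfg, err := config.LoadDefaultConfig(context.TODO())
  if err != nil {
    log.Fatal(err)
  }
  svc := s3.NewFromConfig(cfg, func(o *s3.Options) {
    o.BaseEndpoint = aws.String("https://fly.storage.tigris.dev")
    o.Region = "auto"
    o.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired
  })

  // DeleteObjects is one of the operations the changelog says requires a
  // checksum, so WhenRequired should still attach one here.
  _, err = svc.DeleteObjects(context.TODO(), &s3.DeleteObjectsInput{
    Bucket: aws.String("my-test-bucket"), // placeholder bucket
    Delete: &types.Delete{
      Objects: []types.ObjectIdentifier{{Key: aws.String("some/test-key")}}, // placeholder key
    },
  })
  if err != nil {
    log.Fatal(err)
  }
  log.Println("DeleteObjects succeeded")
}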

Hope this helps :love_you_gesture:

Hi @mk94, you are indeed right: there are breaking changes in the newer versions of the AWS S3 SDKs.

We have captured the details in our blog post: If you’ve upgraded boto3 or the JavaScript S3 client in the last week, uploading files won’t work. Here’s how to fix it. | Tigris Object Storage

We are actively working on addressing this.


Awesome, thanks a lot @ovaistariq!

Edit: I will mark your answer as the solution, so that people who come across this post know there's a solution and that you guys are looking into it :rocket:


Hey, this should be fixed now! Please try again. Tigris now supports recent releases of the S3 SDK | Tigris Object Storage


Hi there,

I get a similar 501 error when using the PutObject Kotlin API, but only when my file is larger than 1024 * 1024 bytes. I'm using aws-sdk-kotlin 1.4.9 (Release v1.4.9 · awslabs/aws-sdk-kotlin · GitHub), which just came out and supposedly contains a fix for a header signing issue, but I don't know if this is the same issue.

When uploading the same file via the aws s3 CLI everything works, but running with --debug shows that it’s doing a multipart upload.

Hi, @zienkikk

I wasn't able to reproduce the issue with 1.4.9 and newer.
If the issue still persists, could you please share a minimal reproducible example?

Here you go, @Yevgeniy. Still happening on 1.4.9:

import aws.sdk.kotlin.runtime.auth.credentials.EnvironmentCredentialsProvider
import aws.sdk.kotlin.services.s3.S3Client
import aws.sdk.kotlin.services.s3.putObject
import aws.smithy.kotlin.runtime.content.asByteStream
import aws.smithy.kotlin.runtime.net.url.Url
import java.io.File

suspend fun main() {
    val s3 = S3Client {
        region = "auto"
        endpointUrl = Url.parse("https://fly.storage.tigris.dev")
        credentialsProvider = EnvironmentCredentialsProvider()
    }

    val file = File("1gb_file")
    val resp = s3.putObject {
        bucket = "<MY_BUCKET>"
        key = "tigris_upload_test"
//        body = file.asByteStream() // Fails with aws.sdk.kotlin.services.s3.model.S3Exception: A header you provided implies functionality that is not implemented, Request ID: 1739201481384492872
//        body = file.asByteStream(0, 1024 * 1024) // Same as above
        body = file.asByteStream(0, 1024 * 1024 - 1) // SUCCEEDS
    }
    println(resp)
}

I just tried this with 1.4.16 as well. Same issue.

Hi, @zienkikk,

Thanks for sharing the example!

I was able to reproduce the issue you're encountering. It appears to be caused by a breaking change introduced in recent versions of the AWS SDK.

We are currently working on a fix and will notify you once it’s deployed to production.

In the meantime, clients using version 1.4.9 or earlier should continue to work as expected.

Thanks for your patience!

I think there is some confusion here. None of the 1.4.x releases work with uploads larger than a megabyte. I tried some 1.3.x versions as well.

Can I get confirmation that Tigris should be able to support uploads larger than 1 MB without using multipart uploads?

For what it's worth, I looked at the --debug log of the AWS CLI more closely, and it's doing a multipart upload only for files larger than 8 MB. Using the CLI on a 7 MB file did use PutObject directly, so this does appear to be related to aws-sdk-kotlin.

@zienkikk Yes, it is indeed related to aws-sdk-kotlin only. There is no issue with bigger payloads, or for that matter with PutObject or multipart uploads. The issue in this case is that the Kotlin SDK sends an additional trailer, and we were returning an error when we saw it. We are working on the fix and will roll it out in the next couple of days. We will inform you once this is rolled out.
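
For anyone curious to see those checksum headers and trailers on their own requests, the Go SDK can log outgoing requests (whether the checksum travels as a header or as a trailer depends on the SDK and the body type). A sketch, assuming default credentials and a placeholder bucket:

package main

import (
  "context"
  "log"
  "strings"

  "github.com/aws/aws-sdk-go-v2/aws"
  "github.com/aws/aws-sdk-go-v2/config"
  "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
  // aws.LogRequest prints each outgoing request's headers, so fields like
  // X-Amz-Trailer or X-Amz-Checksum-Crc32 become visible in the log output.
  cfg, err := config.LoadDefaultConfig(context.TODO(),
    config.WithClientLogMode(aws.LogRequest))
  if err != nil {
    log.Fatal(err)
  }
  svc := s3.NewFromConfig(cfg, func(o *s3.Options) {
    o.BaseEndpoint = aws.String("https://fly.storage.tigris.dev")
    o.Region = "auto"
  })

  // A 2 MiB body, comfortably above the 1 MiB boundary discussed here.
  body := strings.NewReader(strings.Repeat("x", 2*1024*1024))
  _, err = svc.PutObject(context.TODO(), &s3.PutObjectInput{
    Bucket: aws.String("my-test-bucket"), // placeholder
    Key:    aws.String("trailer-test"),
    Body:   body,
  })
  if err != nil {
    log.Fatal(err)
  }
}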


Perfect. Thank you @himank.

Bumping this thread before it’s auto-closed.

@himank: Has there been any development on this? I've been having to upload terabytes of data using multipart uploads with a 1 MB part size, so my Class A requests are "artificially" higher by about two orders of magnitude.
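
For readers landing here with the same problem: in the Go SDK, that workaround corresponds to the upload manager with an explicit part size. S3-compatible APIs generally require at least 5 MiB per part (except the last), and since each part is a separate request, larger parts mean fewer Class A operations. A sketch with placeholder names:

package main

import (
  "context"
  "log"
  "os"

  "github.com/aws/aws-sdk-go-v2/aws"
  "github.com/aws/aws-sdk-go-v2/config"
  "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
  "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
  cfg, err := config.LoadDefaultConfig(context.TODO())
  if err != nil {
    log.Fatal(err)
  }
  svc := s3.NewFromConfig(cfg, func(o *s3.Options) {
    o.BaseEndpoint = aws.String("https://fly.storage.tigris.dev")
    o.Region = "auto"
  })

  f, err := os.Open("1gb_file") // the test file from earlier in the thread
  if err != nil {
    log.Fatal(err)
  }
  defer f.Close()

  // Each part is one request, so a larger PartSize keeps Class A counts down.
  uploader := manager.NewUploader(svc, func(u *manager.Uploader) {
    u.PartSize = 64 * 1024 * 1024 // 64 MiB parts (SDK minimum is 5 MiB)
  })
  if _, err := uploader.Upload(context.TODO(), &s3.PutObjectInput{
    Bucket: aws.String("my-bucket"), // placeholder
    Key:    aws.String("tigris_upload_test"),
    Body:   f,
  }); err != nil {
    log.Fatal(err)
  }
}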

Hi, @zienkikk

We’ve rolled out the fix! Please give it a try and let us know if you encounter any further issues.
Thanks.

Thank you @Yevgeniy. I can confirm this now works.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.