Supabase storage upload not working
Okay, I didn't change anything, but my storage upload isn't working anymore (self-hosted instance). In my Next.js app, when I upload I get:
And directly from the web UI:
Do you have any triggers set up against the storage.objects table? The error is saying it's trying to insert a string that isn't a UUID into one of your columns.
I have this one on storage.objects, but I guess it's normal:
For the resumable upload in the web UI, it's because the request goes to:
http://localhost:8000/storage/v1/upload/resumable
which is a local endpoint instead of the public domain, idk why.
I didn't understand.
What error do you get after setting the env var?
The same. But look here, the request is made to
http://localhost:8000/storage/v1/upload/resumable
instead of https://mypublicdomain.com/storage/v1/upload/resumable
so of course it isn't gonna work.
Do you have a reverse proxy in front?
reverse_proxy -> kong?
Yes, Nginx Proxy Manager.

Maybe related: https://github.com/supabase/storage/issues/331
Please always mention these details
For now, you'll have to directly route to storage
The weird thing is, I haven't changed anything in a long time.
This error pops up when you try to upload files larger than the threshold defined in the Supabase dashboard, i.e. when the files have to be chunked. For small files, everything works fine.
In your proxy manager config, you'll have to route directly to the storage service; that will prevent this error.
For reference, this is the nginx server config.
https://github.com/singh-inder/supabase-automated-self-host/blob/main/setup.sh#L658
You'll also have to add CORS headers in the location block:
https://github.com/singh-inder/supabase-automated-self-host/blob/main/docker/volumes/nginx/snippets/cors.conf
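The gist of that config is a dedicated location block that bypasses Kong for resumable uploads. A minimal sketch, assuming the compose service is named storage and listens on its default port 5000 (Kong normally strips the /storage/v1 prefix, which is why the proxy_pass rewrites the path — check the linked setup.sh for the exact directives):

```nginx
# Send resumable (TUS) uploads straight to the storage service instead of Kong.
location /storage/v1/upload/resumable {
    # storage-api serves this route without the /storage/v1 prefix,
    # so rewrite the path the same way Kong's strip_path does
    proxy_pass http://storage:5000/upload/resumable;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    # chunked uploads can be large and long-running
    client_max_body_size 0;
    proxy_request_buffering off;
    # plus the CORS headers from the cors.conf snippet linked above
}
```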
But the test file is around 3 KB, which seems pretty weird, no?
It is. In my testing, this error only popped up when uploading large files and when Kong is behind a reverse proxy.
For Kong alone, the answer I linked to works fine.
After setting up the config, make sure to clear the browser's local storage; that is a necessary step.
Okay, I'm stupid. I know why Supabase Studio uses
http://localhost:8000
instead of my public domain: it's because I didn't set the env var SUPABASE_PUBLIC_URL to my domain.
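(For reference, that's in the .env used by the self-hosted docker-compose; the domain below is a placeholder:)

```
# Studio builds the API/storage URLs from this,
# so it must be the public domain, not localhost
SUPABASE_PUBLIC_URL=https://mypublicdomain.com
```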
Now the resumable upload returns this error:
I'm gonna add the env variable.
Which env var?
Getting the same
Header 'x-amz-tagging' with value 'Tus-Completed=false' not implemented
error. Did you find a way to make this work?
How have you set up your instance?
Self-hosted Supabase locally with the S3 docker-compose file for storage (using Cloudflare R2), and set the correct env vars.
Have you tried uploading data with the supabase-js SDK?
When uploading with the SDK everything works as expected, but when uploading via the dashboard I get the mentioned error in my Docker logs.
Can you test something right now?
I don't use R2, so I can't test it at my end.
Yes
Inside the docker-compose storage service, set this env and restart:
TUS_ALLOW_S3_TAGS: "false"
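In context it looks something like this (a sketch; your service name, image tag, and remaining env will differ):

```yaml
services:
  storage:
    image: supabase/storage-api:v1.25.7  # whatever tag you run
    environment:
      # R2 rejects the x-amz-tagging header with 501 NotImplemented,
      # so tell the TUS handler not to tag S3 objects
      TUS_ALLOW_S3_TAGS: "false"
      # ...rest of your existing S3/R2 config...
```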
This has to be in the S3 compose file's storage service, where you specify your R2 config.
Added this env and still no success:
"error": {
"raw": "{"name":"NotImplemented","$fault":"client","$metadata":{"httpStatusCode":501,"attempts":1,"totalRetryDelay":0},"Code":"NotImplemented","message":"Header 'x-amz-tagging' with value 'Tus-Completed=false' not implemented"}",
"name": "NotImplemented",
"message": "Header 'x-amz-tagging' with value 'Tus-Completed=false' not implemented",
"stack": "NotImplemented: Header 'x-amz-tagging' with value 'Tus-Completed=false' not implemented\n at throwDefaultError (/app/node_modules/@smithy/smithy-client/dist-cjs/index.js:859:20)\n at /app/node_modules/@smithy/smithy-client/dist-cjs/index.js:868:5\n at de_CommandError (/app/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:4768:14)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async /app/node_modules/@smithy/middleware-serde/dist-cjs/index.js:35:20\n at async /app/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:482:18\n at async /app/node_modules/@smithy/middleware-retry/dist-cjs/index.js:320:38\n at async /app/node_modules/@aws-sdk/middleware-flexible-checksums/dist-cjs/index.js:248:18\n at async /app/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:110:22\n at async /app/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:138:14"
},"
Inside my storage container:
```
/ # printenv TUS_ALLOW_S3_TAGS
false
```
I'll set up a bucket and get back to you later.
Thanks in advance. Will use the SDK as a workaround for now.
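(Roughly this, with placeholder URL, key, and bucket names:)

```ts
import { createClient } from "@supabase/supabase-js";

// Placeholders: point these at your own instance and bucket.
const supabase = createClient(
  "https://mypublicdomain.com",
  process.env.SUPABASE_ANON_KEY!
);

// Plain (non-resumable) upload via the SDK, which works against R2 here.
async function uploadFile(file: File) {
  const { data, error } = await supabase.storage
    .from("my-bucket")
    .upload(`uploads/${file.name}`, file, { upsert: true });
  if (error) throw error;
  return data;
}
```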
Forgot to ask you before: did you try to upload in a private window, or in the same window you were testing in before? Tus leaves entries for failed uploads in the browser's local storage and checks for them the next time a file with the same name is uploaded.
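If you want to clear just those entries, tus-js-client keeps them in localStorage under keys prefixed with tus:: (an assumption based on its defaults), so from the browser console:

```ts
// Remove stale TUS upload fingerprints so the next upload starts fresh.
// Assumes tus-js-client's default "tus::" localStorage key prefix.
for (const key of Object.keys(localStorage)) {
  if (key.startsWith("tus::")) localStorage.removeItem(key);
}
```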
Yes, and I also just tested in a new window (incognito): same error.
Hi, after setting TUS_ALLOW_S3_TAGS: "false", upload works.

my env variables
Hm, that's strange... I still get the same error :/
I read somewhere in the Cloudflare forums that multipart uploads expire after 7 days, so the entries will remain until then. Regarding the error, are you trying to upload a large file? I didn't test with a large file, so I'll have to do that later.
Also, what is the version of your storage image? I tested on supabase/storage-api:v1.25.7
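(If you're not sure which version you're running, you can check from the host; the container name may differ in your setup:)

```
docker inspect --format '{{.Config.Image}}' supabase-storage
```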