Supabase · 5mo ago
loup

Supabase storage upload not working

Okay, I didn't change anything, but my upload to Storage isn't working anymore (self-hosted instance). In my Next.js app, when I upload I get:
insert into "objects" ("bucket_id", "metadata", "name", "owner", "owner_id", "user_metadata", "version") values ($1, DEFAULT, $2, $3, $4, DEFAULT, $5) on conflict ("name", "bucket_id") do update set "version" = $6,"owner" = $7,"owner_id" = $8 returning * - invalid input syntax for type uuid: "c1f5e758"
And directly from the web UI:
35 Replies
silentworks · 5mo ago
Do you have any triggers set up against the storage.objects table? The error says it's trying to insert a string that isn't a UUID into one of your columns.
loup (OP) · 5mo ago
I have this one on storage.objects, but I guess it's normal:
create trigger update_objects_updated_at before
update on storage.objects for each row
execute function storage.update_updated_at_column ();
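That trigger only stamps updated_at on UPDATE, so by itself it shouldn't cause a UUID cast error. For context, such a function typically looks roughly like this (a sketch, not your exact shipped definition; you can check the real one with \sf storage.update_updated_at_column in psql):

```sql
-- Sketch of a typical updated_at trigger function (illustrative only)
create or replace function storage.update_updated_at_column()
returns trigger as $$
begin
  new.updated_at = now();
  return new;
end;
$$ language plpgsql;
```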
loup (OP) · 5mo ago
The resumable-upload error in the web UI happens because the request goes to http://localhost:8000/storage/v1/upload/resumable, which is a local endpoint instead of my public domain. No idea why.
inder · 5mo ago
I didn't understand. What error do you get after setting the env var?
loup (OP) · 5mo ago
The same. But look: the request is made to http://localhost:8000/storage/v1/upload/resumable instead of https://mypublicdomain.com/storage/v1/upload/resumable
loup (OP) · 5mo ago
So of course it isn't going to work.
inder · 5mo ago
Do you have a reverse proxy in front? reverse_proxy -> kong?
loup (OP) · 5mo ago
Yes, Nginx Proxy Manager.
inder · 5mo ago
Please always mention these details. For now, you'll have to route directly to storage.
loup (OP) · 5mo ago
The weird thing is, I haven't changed anything in a long time.
inder · 5mo ago
This error pops up when you try to upload files larger than a threshold defined in the Supabase dashboard, i.e. when the files have to be chunked. For small files, everything works fine.
In your proxy manager config, you'll have to route directly to the storage service; that will prevent this error.
For reference, this is the nginx server config: https://github.com/singh-inder/supabase-automated-self-host/blob/main/setup.sh#L658
You'll also have to add CORS headers in the location block: https://github.com/singh-inder/supabase-automated-self-host/blob/main/docker/volumes/nginx/snippets/cors.conf
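A minimal sketch of such a direct route (assumptions: the docker-compose service is named `storage` and listens on port 5000, the storage-api default; adapt names and ports to your setup, and include your CORS snippet in this block):

```nginx
# Hypothetical location block: bypass Kong and send resumable
# uploads straight to the storage container.
location /storage/v1/upload/resumable {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    # "storage" = compose service name, 5000 = storage-api default port (assumptions)
    proxy_pass http://storage:5000;
}
```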
loup (OP) · 5mo ago
But the test file is around 3 KB; seems pretty weird, no?
inder · 5mo ago
It is. In my testing, this error only popped up when uploading large files and when Kong is behind a reverse proxy. For Kong alone, the answer I linked to works fine. After setting up the config, make sure to clear browser storage; that is a necessary step.
loup (OP) · 5mo ago
Okay, I'm stupid. I know why Supabase Studio uses http://localhost:8000 instead of my public domain: it's because I didn't set the env var SUPABASE_PUBLIC_URL to my domain. Now the resumable upload returns this error:
Something went wrong with that request
Header 'x-amz-tagging' with value 'Tus-Completed=false' not implemented
I'm gonna add the env variable.
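For anyone else hitting this: in the self-hosted docker setup, that variable lives in the `.env` file next to `docker-compose.yml` (the domain below is a placeholder):

```
# .env (placeholder domain)
SUPABASE_PUBLIC_URL=https://mypublicdomain.com
```

Then re-create the stack (e.g. `docker compose up -d`) so Studio picks up the new value.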
inder · 5mo ago
Share a screenshot. Which env var?
TupiC · 3mo ago
Getting the same "Header 'x-amz-tagging' with value 'Tus-Completed=false' not implemented" error. Did you find a way to make this work?
inder · 3mo ago
How have you set up your instance?
TupiC · 3mo ago
Self-hosted Supabase locally with the S3 docker-compose file for storage (using Cloudflare R2), and set the correct env vars.
inder · 3mo ago
Have you tried uploading data with the supabase-js SDK?
TupiC · 3mo ago
When uploading with the SDK everything works as expected, but when uploading via the dashboard I get the mentioned error in my docker logs.
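For comparison, an SDK upload like `supabase.storage.from(bucket).upload(path, file)` boils down to a plain POST against the storage object route. A minimal sketch with placeholder URL/key/bucket, written as a pure helper so the request shape can be inspected without a live server:

```javascript
// Sketch: the request an SDK upload boils down to.
// Base URL, key, bucket, and object path below are placeholders.
function buildUploadRequest(baseUrl, apiKey, bucket, objectPath) {
  return {
    method: "POST",
    url: `${baseUrl}/storage/v1/object/${bucket}/${objectPath}`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      apikey: apiKey,
    },
  };
}

const req = buildUploadRequest(
  "https://mypublicdomain.com", // must be the public URL, not localhost:8000
  "ANON_OR_SERVICE_KEY",
  "avatars",
  "photo.png"
);
console.log(req.url);
// → https://mypublicdomain.com/storage/v1/object/avatars/photo.png

// With supabase-js this is simply:
//   const { error } = await supabase.storage.from("avatars").upload("photo.png", file);
```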
inder · 3mo ago
Can you test something right now? I don't use R2, so I can't do it on my end.
TupiC · 3mo ago
Yes
inder · 3mo ago
Inside the docker-compose storage service, set this env and restart: TUS_ALLOW_S3_TAGS: "false". This has to go in the S3 compose file's storage service, where you specify your R2 config.
TupiC · 3mo ago
Added this env and still no success:
"error": {
  "name": "NotImplemented",
  "$fault": "client",
  "$metadata": { "httpStatusCode": 501, "attempts": 1, "totalRetryDelay": 0 },
  "Code": "NotImplemented",
  "message": "Header 'x-amz-tagging' with value 'Tus-Completed=false' not implemented",
  "stack": "NotImplemented: Header 'x-amz-tagging' with value 'Tus-Completed=false' not implemented
    at throwDefaultError (/app/node_modules/@smithy/smithy-client/dist-cjs/index.js:859:20)
    at /app/node_modules/@smithy/smithy-client/dist-cjs/index.js:868:5
    at de_CommandError (/app/node_modules/@aws-sdk/client-s3/dist-cjs/index.js:4768:14)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /app/node_modules/@smithy/middleware-serde/dist-cjs/index.js:35:20
    at async /app/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:482:18
    at async /app/node_modules/@smithy/middleware-retry/dist-cjs/index.js:320:38
    at async /app/node_modules/@aws-sdk/middleware-flexible-checksums/dist-cjs/index.js:248:18
    at async /app/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:110:22
    at async /app/node_modules/@aws-sdk/middleware-sdk-s3/dist-cjs/index.js:138:14"
}
And inside my storage container:
/ # printenv TUS_ALLOW_S3_TAGS
false
inder · 3mo ago
I'll set up a bucket and get back to you later.
TupiC · 3mo ago
Thanks in advance. Will use the SDK as a workaround for now.
inder · 3mo ago
Forgot to ask you before: did you try to upload in a private window, or in the same window you were testing in before? tus leaves entries for failed uploads in the browser's local storage and checks for them the next time a file with the same name is uploaded.
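To clear those stale entries without wiping all browser storage, something like the following could work (assumption: tus-js-client, which the dashboard uses for resumable uploads, keys its fingerprints with a "tus::" prefix; confirm the exact prefix in your devtools):

```javascript
// Remove stale tus fingerprint entries from a localStorage-like store.
// The "tus::" prefix is an assumption based on tus-js-client's default
// url storage; confirm in DevTools > Application > Local Storage.
function clearTusEntries(storage) {
  const stale = [];
  for (let i = storage.length - 1; i >= 0; i--) {
    const key = storage.key(i);
    if (key && key.startsWith("tus::")) stale.push(key);
  }
  stale.forEach((key) => storage.removeItem(key));
  return stale.length; // number of entries removed
}

// Tiny stand-in for window.localStorage so the sketch runs anywhere;
// in the browser you would call clearTusEntries(window.localStorage).
class FakeStorage {
  constructor(entries) { this.map = new Map(Object.entries(entries)); }
  get length() { return this.map.size; }
  key(i) { return [...this.map.keys()][i] ?? null; }
  removeItem(k) { this.map.delete(k); }
}

const store = new FakeStorage({ "tus::photo.png::3072": "{}", theme: "dark" });
console.log(clearTusEntries(store)); // → 1
```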
TupiC · 3mo ago
Yes, and I also just tested in a new (incognito) window. Same error.
inder · 3mo ago
Hi, after setting TUS_ALLOW_S3_TAGS: "false", upload works.
inder · 3mo ago
inder · 3mo ago
My env variables:
ANON_KEY: ${ANON_KEY}
SERVICE_KEY: ${SERVICE_ROLE_KEY}
POSTGREST_URL: http://rest:3000
PGRST_JWT_SECRET: ${JWT_SECRET}
DATABASE_URL: postgres://supabase_storage_admin:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
FILE_SIZE_LIMIT: 52428800
STORAGE_BACKEND: s3
GLOBAL_S3_BUCKET: ${GLOBAL_S3_BUCKET}
GLOBAL_S3_ENDPOINT: https://SOME_ID.r2.cloudflarestorage.com
GLOBAL_S3_PROTOCOL: https
GLOBAL_S3_FORCE_PATH_STYLE: true
AWS_ACCESS_KEY_ID: KEY_HERE
AWS_SECRET_ACCESS_KEY: KEY_HERE
AWS_DEFAULT_REGION: stub
FILE_STORAGE_BACKEND_PATH: /var/lib/storage
TENANT_ID: ${TENANT_ID}
# TODO: https://github.com/supabase/storage-api/issues/55
REGION: auto
ENABLE_IMAGE_TRANSFORMATION: "true"
IMGPROXY_URL: http://imgproxy:5001
# ADDED THIS ENV
TUS_ALLOW_S3_TAGS: "false"
TupiC · 3mo ago
Hm, that's strange... I still get the same error :/
inder · 3mo ago
I read somewhere on the Cloudflare forums that multipart uploads expire after 7 days, so the entries will remain until then. Regarding the error: are you trying to upload a large file? I didn't test with a large file, so I'll have to do that later. Also, what version of the storage image are you on? I tested on supabase/storage-api:v1.25.7
