Self-Hosted Supabase constant 100% CPU usage

Hello, I’ve been testing Supabase as a replacement for another self-hosted BaaS. I set it up on my VPS (8 vCPU cores, 24GB RAM) using Docker, following the official documentation. Everything runs without errors, but I noticed that all CPU cores stay at 100% usage constantly. It’s been several hours since starting Supabase, and the CPU load hasn’t gone down.

When I stop the containers with docker compose down, CPU usage immediately returns to normal (around 9% per core, since I run other services on the VPS as well), so the issue seems to be directly related to Supabase. This also happens on a completely clean install, with no data or configuration changes: as soon as I run docker compose up, the CPU usage spikes to 100% across all cores and stays there.

Does anyone know what might be causing this or how to fix it? I'm attaching images of htop before and after shutting down the Supabase containers.
[htop screenshots attached: before and after stopping the Supabase containers]
50 Replies
inder (5d ago)
Have you checked which container in particular is causing this spike?
Ninjonik (OP, 5d ago)
I've checked it using docker compose stats, and it was at most pg-vector, which was consuming about 20% of the CPU; studio was also consuming about 15%. The rest were at about 1%. One more thing that comes to mind is the analytics container, which is spamming, like literally spamming (I can't even read it at that speed), messages such as these:
supabase-vector | 2025-08-19T06:55:40.924229Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=5a98518159b54e310dfec1cf6e2bccfbf356add5a6ec81d4e11c425ea0685e5b
supabase-vector | 2025-08-19T06:55:40.929534Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=6375ebd188378e50cadca1af60167898a4be5e680dea3f1f8dd1486513914dac
supabase-vector | 2025-08-19T06:55:40.929604Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=6375ebd188378e50cadca1af60167898a4be5e680dea3f1f8dd1486513914dac
supabase-vector | 2025-08-19T06:55:40.929811Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=5a98518159b54e310dfec1cf6e2bccfbf356add5a6ec81d4e11c425ea0685e5b
supabase-vector | 2025-08-19T06:55:40.929848Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=5a98518159b54e310dfec1cf6e2bccfbf356add5a6ec81d4e11c425ea0685e5b
supabase-vector | 2025-08-19T06:55:40.930558Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=8ad14b0a63d9b2d04502923a8a9d1d9655d623fefe5dc342d552e3f3eaf47c70
supabase-vector | 2025-08-19T06:55:40.930608Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container
inder (5d ago)
And are all containers healthy?
Ninjonik (OP, 5d ago)
They are like this:
<no value> STATUS PORTS
realtime-dev.supabase-realtime Up About a minute (unhealthy)
supabase-analytics Up About a minute (healthy) 0.0.0.0:4002->4000/tcp, [::]:4002->4000/tcp
supabase-auth Restarting (2) 31 seconds ago
supabase-db Up About a minute (healthy) 5432/tcp
supabase-edge-functions Up About a minute
supabase-imgproxy Up About a minute (healthy) 8080/tcp
supabase-meta Up About a minute (healthy) 8080/tcp
supabase-pooler Restarting (1) 17 seconds ago
supabase-rest Up About a minute 3000/tcp
supabase-storage Restarting (1) 8 seconds ago
supabase-studio Up About a minute (healthy) 3000/tcp
supabase-vector Up About a minute (healthy)
Seems like auth, pooler and storage keep restarting all the time. The only error I'm getting is this one:
Error response from daemon: driver failed programming external connectivity on endpoint supabase-kong (6d71a393544089d8442c07b07ad2073f8ef27297c1cecfd41a69d38c77496b39): failed to bind port 0.0.0.0:8444/tcp: Error starting userland proxy: listen tcp4 0.0.0.0:8444: bind: address already in use
Which is rather weird, because nothing is running on that port except Kong itself. Even if I docker compose down the containers and bring them up again, I still get this Kong error for some reason. Technically I probably don't even need Kong, since I use an nginx reverse proxy, but I wanted to stick to the original setup without modifying much. storage keeps giving me this error in docker logs:
supabase-storage | Node.js v22.17.0
supabase-storage | node:internal/url:818
supabase-storage | href = bindingUrl.parse(input, base, true);
supabase-storage | ^
supabase-storage |
supabase-storage | TypeError: Invalid URL
supabase-storage | at new URL (node:internal/url:818:25)
supabase-storage | at parse (/app/node_modules/pg/node_modules/pg-connection-string/index.js:29:14)
supabase-storage | at new ConnectionParameters (/app/node_modules/pg/lib/connection-parameters.js:56:42)
supabase-storage | at new Client (/app/node_modules/pg/lib/client.js:18:33)
supabase-storage | at connect (/app/node_modules/pg-listen/dist/index.js:68:20)
supabase-storage | at createPostgresSubscriber (/app/node_modules/pg-listen/dist/index.js:200:14)
supabase-storage | at new PostgresPubSub (/app/dist/internal/pubsub/postgres.js:42:52)
supabase-storage | at Object.<anonymous> (/app/dist/internal/database/pubsub.js:29:16)
supabase-storage | at Module._compile (node:internal/modules/cjs/loader:1730:14)
supabase-storage | at Object..js (node:internal/modules/cjs/loader:1895:10) {
supabase-storage | code: 'ERR_INVALID_URL',
supabase-storage | input: 'postgres://supabase_storage_admin:R9d!aF#1b7mX^zQ4tJk@2hW6eP0vUgL8@db:5434/postgres',
supabase-storage | base: 'postgres://base'
supabase-storage | }
supabase-storage |
supabase-storage | Node.js v22.17.0
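For the port 8444 conflict above, a quick generic way to check what is actually holding the port (standard Linux/Docker diagnostics, nothing Supabase-specific):
sudo ss -ltnp | grep 8444             # show any process listening on port 8444
docker ps --filter "publish=8444"     # show containers publishing that port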
inder (5d ago)
Those unhealthy containers are the reason you're getting all those spammy messages in the analytics logs. The issue, I believe, is the db password: it contains special characters (the @ in the password breaks the postgres:// connection URL; that's the ERR_INVALID_URL error storage is throwing above).
Ninjonik (OP, 5d ago)
And I think that is also what causes the CPU spikes.
inder (5d ago)
You can generate a password using openssl rand -hex 16. I assume you don't have any data in this instance? If so, then after updating the db password in the .env file, run rm -r volumes/db/data and then docker compose up -d --force-recreate. Could be the reason.
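As a minimal sketch of that sequence (assuming the directory layout from the official self-hosting guide, where the password lives in POSTGRES_PASSWORD in .env, and assuming the instance holds no data you need, since the rm step destroys the database):
openssl rand -hex 16                    # generate a password with no special characters
# put the value into POSTGRES_PASSWORD in .env, then:
docker compose down
rm -r volumes/db/data                   # wipe the old database volume
docker compose up -d --force-recreate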
Ninjonik (OP, 5d ago)
Alright, will try it out.
Alright, I've completely reinstalled Supabase and recreated the db, and this is what I got:
root@igportals  /srv/docker/backend-other-services/supabase  docker compose ps -a
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
realtime-dev.supabase-realtime supabase/realtime:v2.34.47 "/usr/bin/tini -s -g…" realtime About a minute ago Up About a minute (unhealthy)
supabase-analytics supabase/logflare:1.14.2 "sh run.sh" analytics About a minute ago Up About a minute (healthy) 0.0.0.0:7500->4000/tcp, [::]:7500->4000/tcp
supabase-auth supabase/gotrue:v2.177.0 "auth" auth About a minute ago Up About a minute (healthy)
supabase-db supabase/postgres:15.8.1.060 "docker-entrypoint.s…" db About a minute ago Up About a minute (healthy) 5432/tcp
supabase-edge-functions supabase/edge-runtime:v1.67.4 "edge-runtime start …" functions About a minute ago Up About a minute
supabase-imgproxy darthsim/imgproxy:v3.8.0 "imgproxy" imgproxy About a minute ago Up About a minute (healthy) 8080/tcp
supabase-kong kong:2.8.1 "bash -c 'eval \"echo…" kong About a minute ago Up About a minute (healthy) 0.0.0.0:7000->7000/tcp, :::7000->7000/tcp, 8000-8001/tcp, 0.0.0.0:7443->7443/tcp, :::7443->7443/tcp, 8443-8444/tcp
supabase-meta supabase/postgres-meta:v0.91.0 "docker-entrypoint.s…" meta About a minute ago Up About a minute (healthy) 8080/tcp
supabase-pooler supabase/supavisor:2.5.7 "/usr/bin/tini -s -g…" supavisor About a minute ago Up About a minute (healthy) 0.0.0.0:7432->7432/tcp, :::7432->7432/tcp, 0.0.0.0:7543->7543/tcp, :::7543->7543/tcp
supabase-rest postgrest/postgrest:v12.2.12 "postgrest" rest About a minute ago Up About a minute 3000/tcp
supabase-storage supabase/storage-api:v1.25.7 "docker-entrypoint.s…" storage About a minute ago Up About a minute (healthy) 5000/tcp
supabase-studio supabase/studio:2025.06.30-sha-6f5982d "docker-entrypoint.s…" studio About a minute ago Up About a minute (healthy) 3000/tcp
supabase-vector timberio/vector:0.28.1-alpine "/usr/local/bin/vect…" vector About a minute ago Up About a minute (healthy)
root@igportals  /srv/docker/backend-other-services/supabase 
All containers are fine (and yes, the encryption key was the issue for the pooler). Now the issue seems to be the analytics container:
supabase-analytics |
supabase-analytics | 09:48:15.919 [info] Logs last second!
supabase-analytics |
supabase-analytics | 09:48:16.922 [info] All logs logged!
supabase-analytics |
supabase-analytics | 09:48:16.922 [info] Logs last second!
supabase-analytics |
supabase-analytics | 09:48:17.924 [info] All logs logged!
supabase-analytics |
supabase-analytics | 09:48:17.924 [info] Logs last second!
supabase-analytics |
supabase-analytics | 09:48:18.926 [info] All logs logged!
supabase-analytics |
supabase-analytics | 09:48:18.926 [info] Logs last second!
supabase-analytics |
supabase-analytics | 09:48:19.135 [info] Scheduler metrics!
supabase-analytics |
as it seems to be spamming something like this
root@igportals  /srv/docker/backend-other-services/supabase  docker compose stats --no-stream

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
047cc500aac0 supabase-storage 0.48% 74.17MiB / 23.47GiB 0.31% 70.3kB / 74.6kB 0B / 0B 12
d2f43054bdfd supabase-pooler 0.45% 192.2MiB / 23.47GiB 0.80% 214kB / 173kB 24MB / 0B 41
a1b9af479f8c supabase-auth 0.00% 8.059MiB / 23.47GiB 0.03% 77.4kB / 61kB 184kB / 0B 12
f96c4c5eb2fb supabase-kong 6.31% 691.4MiB / 23.47GiB 2.88% 109kB / 65kB 0B / 106kB 9
dbd4ecb4c87c supabase-edge-functions 0.00% 23.47MiB / 23.47GiB 0.10% 294kB / 10.7kB 0B / 1.89MB 19
51141a179617 supabase-rest 0.06% 20.14MiB / 23.47GiB 0.08% 420kB / 236kB 0B / 0B 30
0dddfae8af32 supabase-studio 0.00% 156.8MiB / 23.47GiB 0.65% 44.9kB / 0B 0B / 0B 11
75ae841c6976 supabase-meta 0.49% 75.81MiB / 23.47GiB 0.32% 44.9kB / 0B 0B / 0B 12
2dbca40eb070 realtime-dev.supabase-realtime 0.19% 166.2MiB / 23.47GiB 0.69% 190kB / 128kB 0B / 0B 40
6f2c01b10824 supabase-analytics 0.62% 513.1MiB / 23.47GiB 2.14% 3.24MB / 1.42MB 0B / 3.36MB 51
da683fb7b0ce supabase-db 0.18% 142MiB / 23.47GiB 0.59% 1.94MB / 2.64MB 4.1kB / 75.3MB 38
c232d2a3e59e supabase-imgproxy 0.00% 13.2MiB / 23.47GiB 0.05% 50.6kB / 0B 643kB / 0B 13
58d1005ff488 supabase-vector 21.18% 35.46MiB / 23.47GiB 0.15% 58.8kB / 120kB 0B / 0B 9
and the vector spams this:
supabase-vector | 2025-08-19T09:47:22.813878Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=5a98518159b54e310dfec1cf6e2bccfbf356add5a6ec81d4e11c425ea0685e5b
supabase-vector | 2025-08-19T09:47:22.816050Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=f0cb04545a2d307dfe1d409cfbde657e2b85df662302a7a6715a54d7c4f54776
supabase-vector | 2025-08-19T09:47:22.816129Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=f0cb04545a2d307dfe1d409cfbde657e2b85df662302a7a6715a54d7c4f54776
supabase-vector | 2025-08-19T09:47:22.817068Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=cb9868383233653ff56ca81e17994604364b9e6e8fe3d4886ee390a8fcdf42bb
supabase-vector | 2025-08-19T09:47:22.817115Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=cb9868383233653ff56ca81e17994604364b9e6e8fe3d4886ee390a8fcdf42bb
supabase-vector | 2025-08-19T09:47:22.818003Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=9e60a1603d7ce5c0d85c49fd0d9f12fced70d3d7253265da6e47ba0a817e8549
supabase-vector | 2025-08-19T09:47:22.818046Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=9e60a1603d7ce5c0d85c49fd0d9f12fced70d3d7253265da6e47ba0a817e8549
supabase-vector | 2025-08-19T09:47:22.819826Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=d464458034ce6764994e2b13b89087f484546d456778b9fc925194ebeae08629
supabase-vector | 2025-08-19T09:47:22.819871Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=d464458034ce6764994e2b13b89087f484546d456778b9fc925194ebeae08629
supabase-vector | 2025-08-19T09:47:22.824431Z INFO source{component_kind="sour^C
inder (5d ago)
It's hard to read this output. Send me the output of this command:
docker compose ps --format='table{{.Names}}\t{{.Status}}'
Ninjonik (OP, 5d ago)
<no value> STATUS
realtime-dev.supabase-realtime Up 5 minutes (unhealthy)
supabase-analytics Up 6 minutes (healthy)
supabase-auth Up 5 minutes (healthy)
supabase-db Up 6 minutes (healthy)
supabase-edge-functions Up 5 minutes
supabase-imgproxy Up 6 minutes (healthy)
supabase-kong Up 5 minutes (healthy)
supabase-meta Up 5 minutes (healthy)
supabase-pooler Up 5 minutes (healthy)
supabase-rest Up 5 minutes
supabase-storage Up 5 minutes (healthy)
supabase-studio Up 5 minutes (healthy)
supabase-vector Up 6 minutes (healthy)
inder (5d ago)
The issue is in realtime; analytics is healthy.
Ninjonik (OP, 5d ago)
These are the logs:
realtime-dev.supabase-realtime | 09:53:07.881 request_id=GF0iV0Bekjfw5iUAAAEj [info] HEAD /api/tenants/realtime-dev/health
realtime-dev.supabase-realtime | 09:53:07.882 request_id=GF0iV0Bekjfw5iUAAAEj [info] Sent 403 in 298µs
realtime-dev.supabase-realtime | 09:53:13.169 request_id=GF0iWHuJHB_gfXkAAAWE [info] HEAD /api/tenants/realtime-dev/health
realtime-dev.supabase-realtime | 09:53:13.169 request_id=GF0iWHuJHB_gfXkAAAWE [info] Sent 403 in 453µs
realtime-dev.supabase-realtime | 09:53:18.405 request_id=GF0iWbOnyFmGmTEAAAWk [info] HEAD /api/tenants/realtime-dev/health
realtime-dev.supabase-realtime | 09:53:18.406 request_id=GF0iWbOnyFmGmTEAAAWk [info] Sent 403 in 897µs
realtime-dev.supabase-realtime | 09:53:23.651 request_id=GF0iWuxQG7nUCQkAAAXE [info] HEAD /api/tenants/realtime-dev/health
realtime-dev.supabase-realtime | 09:53:23.651 request_id=GF0iWuxQG7nUCQkAAAXE [info] Sent 403 in 344µs
realtime-dev.supabase-realtime | 09:53:28.869 request_id=GF0iXCNNKaIYpgAAAAXk [info] HEAD /api/tenants/realtime-dev/health
realtime-dev.supabase-realtime | 09:53:28.869 request_id=GF0iXCNNKaIYpgAAAAXk [info] Sent 403 in 806µs
realtime-dev.supabase-realtime | 09:53:34.107 request_id=GF0iXVuKeEkLomkAAAYE [info] HEAD /api/tenants/realtime-dev/health
realtime-dev.supabase-realtime | 09:53:34.108 request_id=GF0iXVuKeEkLomkAAAYE [info] Sent 403 in 1ms
inder (5d ago)
After cloning the repo, which env variables did you update?
Ninjonik (OP, 5d ago)
Most of the ports. I also had to change the analytics port in the docker-compose, as it's already used on my machine.
inder (5d ago)
No, the env variables in the .env file.
Ninjonik (OP, 5d ago)
I think I might have changed some internal ports; that might be the problem.
inder (5d ago)
Are you using the default ones, i.e. simply copying .env.example?
Ninjonik (OP, 5d ago)
No, I'm not, as some of them are already used by other processes.
inder (5d ago)
How did you generate the JWT_SECRET, ANON_KEY and SERVICE_ROLE_KEY?
Ninjonik (OP, 5d ago)
I generated the VAULT_ENC_KEY using the command you sent me, and the rest just with a random string generator.
inder (5d ago)
That command was for the db password. What about these?
Ninjonik (OP, 5d ago)
Just random strings, no specific command.
inder (5d ago)
This must be causing issues. These have to be JWTs: you create a random JWT secret and use that secret to mint the JWTs, because ANON_KEY and SERVICE_ROLE_KEY are JWTs. Try this first: take down all containers and use the default .env.example file to start the stack. Then check if all containers are healthy and see your CPU usage.
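To illustrate what "minting" means here, a rough shell sketch of producing an HS256 token from JWT_SECRET (the values below are placeholders; the role would be "service_role" for SERVICE_ROLE_KEY, and the generator in the self-hosting docs is the safer route):
JWT_SECRET="your-super-secret-jwt-token-with-at-least-32-characters"   # placeholder secret
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }
header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '{"role":"anon","iss":"supabase","iat":%s,"exp":%s}' "$(date +%s)" "$(( $(date +%s) + 5*365*24*3600 ))" | b64url)
sig=$(printf '%s.%s' "$header" "$payload" | openssl dgst -sha256 -hmac "$JWT_SECRET" -binary | b64url)
echo "$header.$payload.$sig"                                           # use the output as ANON_KEY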
Ninjonik (OP, 5d ago)
Damn, I wish this was documented, because I thought you should be able to run Supabase without touching .env at first.
inder (5d ago)
You can, with the default envs. Also, in the guide there is a generator for the secret and keys: https://supabase.com/docs/guides/self-hosting/docker#securing-your-services
Ninjonik (OP, 5d ago)
I see
inder (5d ago)
First try with the default ones: cp .env.example .env. Make sure to run the rm -r volumes/db/data command before recreating the stack; this is important.
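Put together, the clean reset being suggested is roughly this (a sketch, again assuming the data in this instance can be thrown away):
docker compose down
cp .env.example .env                    # start again from the default env values
rm -r volumes/db/data                   # wipe the old database volume
docker compose up -d --force-recreate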
Ninjonik (OP, 5d ago)
seems to be working 🎉
root@igportals  /srv/docker/backend-other-services/supabase  docker compose ps --format='table{{.Names}}\t{{.Status}}'
<no value> STATUS
realtime-dev.supabase-realtime Up 37 seconds (healthy)
supabase-analytics Up 49 seconds (healthy)
supabase-auth Up 36 seconds (healthy)
supabase-db Up 55 seconds (healthy)
supabase-edge-functions Up 37 seconds
supabase-imgproxy Up About a minute (healthy)
supabase-kong Up 36 seconds (healthy)
supabase-meta Up 37 seconds (healthy)
supabase-pooler Up 37 seconds (healthy)
supabase-rest Up 37 seconds
supabase-storage Up 35 seconds (healthy)
supabase-studio Up 37 seconds (healthy)
supabase-vector Up About a minute (healthy)
however the CPU issue still persists, unfortunately
Ninjonik (OP, 5d ago)
[screenshot attached]
inder (5d ago)
Wait for some minutes, it's expected at the beginning.
Ninjonik (OP, 5d ago)
It still persists. Those two just keep spamming log messages:
vents::docker_logs: Stopped watching for container logs. container_id=055d96a29917dd5cfb47eabad517106f4e7f996a1ee607e335d6bf32c79ab012
supabase-vector | 2025-08-19T10:10:29.172371Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=055d96a29917dd5cfb47eabad517106f4e7f996a1ee607e335d6bf32c79ab012
supabase-vector | 2025-08-19T10:10:29.176164Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=d464458034ce6764994e2b13b89087f484546d456778b9fc925194ebeae08629
supabase-vector | 2025-08-19T10:10:29.176225Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=d464458034ce6764994e2b13b89087f484546d456778b9fc925194ebeae08629
supabase-vector | 2025-08-19T10:10:29.176949Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=f9638f7d9cdbf9d0c628c17fffea7533a5aef792b8017b7d0b1511605fe2f603
supabase-vector | 2025-08-19T10:10:29.176986Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=f9638f7d9cdbf9d0c628c17fffea7533a5aef792b8017b7d0b1511605fe2f603
supabase-vector | 2025-08-19T10:09:26.229479Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=9e60a1603d7ce5c0d85c49fd0d9f12fced70d3d7253265da6e47ba0a817e8549
supabase-vector | 2025-08-19T10:09:26.232728Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=6375ebd188378e50cadca1af60167898a4be5e680dea3f1f8dd1486513914dac
supabase-vector | 2025-08-19T10:09:26.232818Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=6375ebd188378e50cadca1af60167898a4be5e680dea3f1f8dd1486513914dac
supabase-vector | 2025-08-19T10:09:26.233670Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=293c1415f89d171d4529736e8cfc7e1cdd0d14f423f8d1832af43295456ce741
supabase-vector | 2025-08-19T10:09:26.233964Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Started watching for container logs. container_id=293c1415f89d171d4529736e8cfc7e1cdd0d14f423f8d1832af43295456ce741
supabase-vector | 2025-08-19T10:09:26.234429Z INFO source{component_kind="source" component_id=docker_host component_type=docker_logs component_name=docker_host}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=b7a99e568eac0d26cf48ec26e0e7e6f71494f75e49c727fd5a134aa4f59b2a6e
supabase-vector | 2025-08-19T10:09:26.234517Z INFO source{component_kind="source" component_id=docker_host
(still the same)
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O
e70d94a59457 supabase-storage 0.62% 71.71MiB / 23.47GiB 0.30% 193kB / 79.8kB
4b15ec15b012 realtime-dev.supabase-realtime 0.16% 183.1MiB / 23.47GiB 0.76% 1.85MB / 1.73MB
418d969d4291 supabase-studio 7.21% 144.8MiB / 23.47GiB 0.60% 824kB / 3.77MB
fcd94384eede supabase-meta 13.69% 101MiB / 23.47GiB 0.42% 656kB / 530kB
cc82e56a7f4d supabase-pooler 0.31% 185.4MiB / 23.47GiB 0.77% 567kB / 598kB
06221dd47a88 supabase-rest 0.10% 29.61MiB / 23.47GiB 0.12% 781kB / 371kB
588aa00abeb1 supabase-edge-functions 0.00% 22.85MiB / 23.47GiB 0.10% 412kB / 10.5kB
900d32f40dc2 supabase-kong 0.04% 702.7MiB / 23.47GiB 2.92% 4.04MB / 3.92MB
bddf94760a96 supabase-auth 0.04% 8.188MiB / 23.47GiB 0.03% 196kB / 61.2kB
f029d1bf3dc9 supabase-analytics 1.53% 563.6MiB / 23.47GiB 2.35% 8.54MB / 3.74MB
ad863a1deeff supabase-db 2.39% 143MiB / 23.47GiB 0.60% 6.37MB / 8.72MB
bc431ccb1b0d supabase-vector 21.31% 35.41MiB / 23.47GiB 0.15% 187kB / 226kB
7162239e3c86 supabase-imgproxy 0.00% 19.53MiB / 23.47GiB 0.08% 168kB / 0B
I honestly have no idea how this alone can add up to 100% on all cores, but it somehow does, and when I shut these containers down it's back to normal levels, about ~10% per core again.
inder (5d ago)
This is on an EC2 instance:
[screenshot attached]
Ninjonik (OP, 5d ago)
Before:
[htop screenshot attached]
Ninjonik (OP, 5d ago)
After:
[htop screenshot attached]
Ninjonik (OP, 5d ago)
I'll check it out anyway, thank you so much for your help, I appreciate what you're doing here
inder (5d ago)
No problem. I'll be out for about 2 hours; will get back to you later. If the solution mentioned in the forum doesn't work, install the Supabase CLI on your server and run supabase start, and see if that solves the issue. The CLI shouldn't be used for production environments; this is just for testing. Also make sure you're running the latest version of Docker on your server.
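One way to run that CLI test (a sketch assuming Node.js is available; the CLI is also distributed via Homebrew and as prebuilt binaries on its GitHub releases page):
npx supabase init      # creates a supabase/ project directory with a default config
npx supabase start     # pulls and starts the local stack in Docker
npx supabase stop      # tear it down again after checking CPU usage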
Ninjonik (OP, 5d ago)
So, it helped a bit (quite a bit): we're down to an average of about 85% per CPU. But I've noticed this:
docker stats --no-stream --format "table {{.Container}}\t{{.Name}}\t{{.CPUPerc}}" | sort -k3 -r -n
gives:
fc9fb094b653 supabase-vector 183.42%
56f5cca25329 supabase-meta 17.31%
and I've also noticed the following by running the top command:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1311 root 20 0 10.6g 4.5g 58880 S 296.1 19.3 8,57 dockerd
3304411 root 20 0 222300 70460 31744 S 209.5 0.3 10:24.28 vector
After shutting down the Supabase containers (top 4 processes in top):
2999787 www-data 20 0 40552 25144 7296 S 14.9 0.1 0:25.59 nginx
1374 sinusbot 20 0 2980076 81740 14592 S 7.3 0.3 71:07.42 sinusbot
3357426 root 20 0 1628152 17616 14080 S 6.3 0.1 0:07.60 seaf-server
3357390 www-data 20 0 17548 8732 2816 S 3.0 0.0 0:00.53 nginx
(top 4)
root@igportals  ~  docker stats --no-stream --format "table {{.Container}}\t{{.Name}}\t{{.CPUPerc}}" | sort -k3 -r -n
5e6fe7a6fadd obsidian 25.03%
fb15a03b8368 seafile 8.28%
5223f67c6cb0 seafile-mysql 2.92%
ea0135eff693 bitwarden-mssql 1.32%
Ninjonik (OP, 5d ago)
[link to a related post from the Supabase community on Reddit]
Ninjonik (OP, 5d ago)
I would personally just remove the container, as I don't plan on using any AI-oriented stuff or vectors in the database, but I'm not sure whether it would break any other functionality.
inder (5d ago)
The name vector can be misleading: it's actually a log aggregator and forwards logs to the analytics service. You can disable it if you like, but I'd only recommend doing that if you're not going to expose the Supabase stack to the internet and have your own servers in front of Supabase.
Ninjonik (OP, 5d ago)
Well, I already have an nginx reverse proxy set up in front of Supabase, and since Supabase has its client SDK, restricting access to localhost-only servers wouldn't really work. I just don't understand why it consumes so much CPU, especially since people run it on machines with fewer CPUs; there's gotta be some bug.
inder (5d ago)
Can you try it on a new server for testing purposes? I've deployed the stack in about 15 production environments so far and have never faced this issue.
Ninjonik (OP, 5d ago)
Interesting, but why did other people have the same issue as well? I've also seen some other posts here on the Discord talking about this, but they never received an answer.
inder (5d ago)
There was one, I believe, last month. They were deploying on a DigitalOcean droplet, but the issue got resolved after they went with the 1-click setup provided by DO. This could be a CPU-specific issue. The self-hosted vector service is still at 0.28.1, while the Vector GitHub repo is at 0.49.0.
Ninjonik (OP, 5d ago)
So what do you suggest I do?
inder (5d ago)
Try on a new server. In the forum link I sent, there was also a comment about logs being too large and setting up rotation. I notice that you have other containers running as well; did you try setting up a log rotation config in the daemon.json file?
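For reference, a minimal sketch of host-wide json-file log rotation (this overwrites /etc/docker/daemon.json if one already exists, and only applies to containers created after the daemon restart):
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
sudo systemctl restart docker
docker compose up -d --force-recreate   # recreate containers so the new defaults apply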
Ninjonik (OP, 5d ago)
Nope, but I will try it out now.
It didn't help; I guess I might try another BaaS then, unfortunately.
inder (5d ago)
Can you test on another server? I just want to rule out any instance issues
Ninjonik (OP, 5d ago)
I'll see what I can do
