🚨 Realtime Broadcast Issue: Works Locally, Buffered in Production
Setup:
- Frontend: Next.js on Vercel
- Backend: Node.js API on Railway
- Supabase: Realtime broadcasts for progress updates
- Use case: Broadcasting compliance check progress (10% → 30% → 60% → 90% → 100%)
Problem:
Broadcasts are being buffered and delivered all at once in staging (which will be the same issue on prod once we release), but they work perfectly in local development.
What Works ✅:
- Local development: Progress updates arrive in real-time (10%, 30%, 60%, etc.)
- WebSocket connection establishes successfully in both environments
- All broadcasts eventually arrive (just buffered in production)
What Doesn't Work ❌:
- Staging: all broadcasts arrive at the same time at the end
- Progress bar jumps from 0% to 100% instantly
- WebSocket frames show all updates arriving within 42ms of each other
What I Tried:
- Ran the staging environment locally (both backend and frontend) and it works as expected; however, on the deployed staging frontend and backend it doesn't.
- Added all the necessary config: enabled Realtime broadcast and added an RLS policy on realtime.messages (sketch below)
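For reference, the policy on realtime.messages is along these lines (a rough sketch, not my exact policy; the policy name is a placeholder):

```sql
-- Sketch of the RLS policy on realtime.messages (policy name is a placeholder).
-- Lets authenticated users receive broadcast messages on private channels.
create policy "authenticated users can receive broadcasts"
on realtime.messages
for select
to authenticated
using (realtime.messages.extension = 'broadcast');
```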
10 Replies
Over what time frame are the changes made that then are "buffered"?
Also what is being done to make the changes to the table?
I don't know the inner workings of broadcast from the DB, or of postgres_changes, that well.
You may need to open a question/issue on the supabase/realtime GitHub for a Supabase dev to tell you whether that's normal or not.
You can also check the Realtime monitor in the dashboard to see if you see the same thing there.

I tried both realtime.broadcast_changes() and channel.send() to see if there's a difference, but it's still the same issue in staging while it works locally, whether I use staging keys or dev keys.
Not enough info to know exactly what you are seeing.
Certainly timing will be different on hosted as your realtime data from the db is going to a realtime server on the web and then to your client, versus everything local being in the same place.
Plus your changes are going out on the web to your supabase instance. Or at least they could be. You have not said what you are changing and how.
Here's what I see in local:

Listening to the broadcast on the frontend:
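It's roughly this (simplified sketch; the channel and event names are placeholders for my real ones):

```ts
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

// Subscribe to the job's channel and update the progress bar on each broadcast.
// 'compliance:job-123' and 'progress' are illustrative names.
// The channel is private since I'm using RLS on realtime.messages.
const channel = supabase
  .channel('compliance:job-123', { config: { private: true } })
  .on('broadcast', { event: 'progress' }, ({ payload }) => {
    console.log('progress', payload.percent)
    // setProgress(payload.percent) in the React component
  })
  .subscribe()
```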

This is also how I do it in the backend:
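On the backend it's essentially this (again a sketch; the env var, channel, and payload names are placeholders):

```ts
import { createClient } from '@supabase/supabase-js'

// Service-role client on the Railway server (env var names are illustrative).
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
)

// Send one progress update. Since the server never subscribes to the channel,
// channel.send() delivers the broadcast over Realtime's HTTP endpoint.
export async function sendProgress(jobId: string, percent: number) {
  const channel = supabase.channel(`compliance:${jobId}`, {
    config: { private: true },
  })
  await channel.send({
    type: 'broadcast',
    event: 'progress',
    payload: { percent },
  })
}
```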
OK so you are monitoring something and sending a database update(?) I guess. I assume you await the supabase call to do the update.
Then you are also using the REST API send.
That is about all I can tell.
There will be no guarantee of the timing of those two events as to which shows up first on the realtime channel. Although I would typically expect the update db operation to send its event to the realtime server before you get the await return in your client code. But that is not guaranteed.
Two totally different paths are going on here. One to DB and back to your code for the update. And independently from the DB to realtime and then to your handler. Then another from your code directly to realtime server and then back to your handler from realtime.
To clarify my setup and what I've tried:
What I'm Building:
A compliance check scanner with a real-time progress bar (like a file upload progress bar). The scanning process takes ~30 seconds and I need to show progress updates as they happen.
What I've Tried:
Approach 1 - Database Trigger using realtime.broadcast_changes():
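The trigger looks roughly like this (sketch; the table, column, and topic names are placeholders for my real schema):

```sql
-- Sketch of the trigger approach (table/column/topic names are placeholders).
-- Broadcasts each progress row change to the job's topic.
create or replace function public.broadcast_progress()
returns trigger
security definer
language plpgsql
as $$
begin
  perform realtime.broadcast_changes(
    'compliance:' || new.job_id::text,  -- topic the client listens on
    tg_op,                              -- event name
    tg_op,                              -- operation
    tg_table_name,
    tg_table_schema,
    new,
    old
  );
  return null;
end;
$$;

create trigger broadcast_progress_trigger
after insert or update on public.compliance_progress
for each row
execute function public.broadcast_progress();
```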
Approach 2 - Direct broadcasts from Railway server:
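This is basically a loop on the Railway server that awaits each send as the scan advances (sketch; the step breakdown is made up, and it reuses the sendProgress helper sketched above):

```ts
// Sketch of the Railway-side scan loop (steps are illustrative).
// Each broadcast is awaited as its step finishes, so in theory the
// updates should go out ~6 seconds apart, not all at the end.
async function runComplianceScan(jobId: string) {
  const steps: Array<{ percent: number; work: () => Promise<void> }> = [
    { percent: 10, work: async () => { /* parse uploaded documents */ } },
    { percent: 30, work: async () => { /* run rule checks */ } },
    { percent: 60, work: async () => { /* cross-reference findings */ } },
    { percent: 90, work: async () => { /* build the report */ } },
  ]

  for (const { percent, work } of steps) {
    await work()
    await sendProgress(jobId, percent) // helper from the earlier backend sketch
  }

  await sendProgress(jobId, 100)
}
```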
The Issue:
- Local: both approaches deliver updates in real-time
- Production: both approaches buffer ALL updates until the end
Since even the database-level realtime.broadcast_changes() function (which runs directly in PostgreSQL) exhibits this buffering behavior, this seems to be a Realtime infrastructure issue rather than a client implementation problem
What happens:
- I send 5 separate broadcasts over 30 seconds
- Each should arrive immediately after being sent
- Instead, all 5 arrive simultaneously after 30 seconds
Visual timeline:
SENDING (from server):
- 00:00 - Send broadcast #1
- 00:06 - Send broadcast #2
- 00:12 - Send broadcast #3
- 00:18 - Send broadcast #4
- 00:30 - Send broadcast #5
RECEIVING (in browser):
- 00:00-00:29 - Nothing received
- 00:30 - Broadcasts #1, #2, #3, #4, #5 all arrive together
The messages are being held somewhere and then released all at once when my process completes. Is this:
- Messages being queued until a transaction commits?
- Messages being held until a connection closes?
- Messages being batched for efficiency?
- Something else?
This happens with both realtime.broadcast_changes() from database triggers AND direct channel.send() from my server. Should I add delays between my updates to make sure each message gets sent right away?
You will need to ask on the supabase/realtime GitHub or support. I don't know if this is a bug or not; no one here knows the internals of Realtime. I've not seen or heard of buffering. Over 30 seconds, and 6 seconds apart, I would expect them to flow through individually. I most likely can't test this setup until Monday myself, but if I do get a chance before then I will.