🚨 Realtime Broadcast Issue: Works Locally, Buffered in Production

Setup:
- Frontend: Next.js on Vercel
- Backend: Node.js API on Railway
- Supabase: Realtime broadcasts for progress updates
- Use case: broadcasting compliance check progress (10% → 30% → 60% → 90% → 100%)

Problem: Broadcasts are being buffered and delivered all at once in staging (which will presumably be the same issue on prod once released), but work perfectly in local development.

What Works ✅:
- Local development: progress updates arrive in real time (10%, 30%, 60%, etc.)
- WebSocket connection establishes successfully in both environments
- All broadcasts eventually arrive (just buffered in staging)

What Doesn't Work ❌:
- Staging: all broadcasts arrive at the same time at the end
- Progress bar jumps from 0% to 100% instantly
- WebSocket frames show all updates arriving within 42ms of each other

What I Tried:
- Ran the staging environment locally (both backend and frontend) and it works as expected; however, on the deployed staging frontend and backend it doesn't.
- Added all the necessary configs, enabled broadcast realtime, and added RLS policies on realtime.messages
10 Replies
garyaustin · 3w ago
Over what time frame are the changes made that are then "buffered"? Also, what is being done to make the changes to the table? I don't know the inner workings of broadcast from the DB as well as postgres_changes. You may need to open a question/issue in the supabase/realtime GitHub repo for a Supabase dev to tell you whether that is normal or not. You can also check the Realtime monitor in the dashboard to see if you see the same thing.
rainrain (OP) · 3w ago
[screenshot attachment]
rainrain (OP) · 3w ago
I tried both realtime.broadcast_changes and channel.send() to see if there's a difference, but it's still the same issue in staging while it works locally, whether with staging keys or dev keys.
garyaustin · 3w ago
Not enough info to know exactly what you are seeing. Timing will certainly be different on hosted: your realtime data from the DB goes to a realtime server on the web and then to your client, versus everything being in the same place locally. Plus your changes are going out over the web to your Supabase instance. Or at least they could be; you have not said what you are changing and how.
rainrain (OP) · 3w ago
here's what it looks like in local
[screenshot attachment]
rainrain (OP) · 3w ago
listening to the broadcast
[screenshot attachment]
rainrain (OP) · 3w ago
this is also how I do it in the backend:
// Update BOTH database and broadcast together
const updateProgress = async (
dbClient: DbClient,
checkId: string,
fullData: UpdateComplianceCheck
) => {
// Update database
const updatedData = await updateComplianceCheckData({
dbClient,
id: checkId,
values: fullData,
});

// Send broadcast with the same data
await broadcastChannelManager.send({
channelId: `compliance-check:${checkId}`,
event: 'compliance_check_update',
payload: updatedData,
});
};

export const processSomethingService = async () => {
// some service call here

await updateProgress(dbClient, checkId, {
metadata: {
stage: 'extracting_content',
progress: 10,
timestamp: Date.now(),
},
});

// some services
await updateProgress(dbClient, checkId, {
metadata: {
stage: 'extracting_content',
progress: 40,
timestamp: Date.now(),
},
});

// some services
// until it gets to 100
}
garyaustin · 3w ago
OK, so you are monitoring something and sending a database update, I guess. I assume you await the Supabase call to do the update. Then you are also using the REST API send. That is about all I can tell. There is no guarantee of the timing of those two events as to which shows up first on the realtime channel, although I would typically expect the DB update operation to send its event to the realtime server before the await returns in your client code. But that is not guaranteed.
Two totally different paths are going on here. One goes to the DB and back to your code for the update, and, independently, from the DB to Realtime and then to your handler. The other goes from your code directly to the Realtime server and then back to your handler.
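The two-paths point above can be sketched with a toy model (plain TypeScript, no supabase-js; the names and latencies are invented for illustration): a single update fans out once via the Postgres-trigger path and once via a direct send, and nothing guarantees which copy reaches the browser first.

```typescript
// Toy model of the two independent delivery paths for a single update.
// Sources and latencies are invented; this is not Supabase internals.
type Delivery = { source: string; sentAtMs: number; latencyMs: number };

const deliveries: Delivery[] = [
  // Path 1: UPDATE -> trigger -> realtime server -> browser (slower here)
  { source: "postgres_trigger_broadcast", sentAtMs: 0, latencyMs: 120 },
  // Path 2: backend channel.send() -> realtime server -> browser
  { source: "direct_channel_send", sentAtMs: 5, latencyMs: 20 },
];

const arrivalOrder = deliveries
  .map((d) => ({ source: d.source, arrivesAtMs: d.sentAtMs + d.latencyMs }))
  .sort((a, b) => a.arrivesAtMs - b.arrivesAtMs)
  .map((d) => d.source);

// With these latencies, the direct send overtakes the trigger path
// even though it was issued later.
console.log(arrivalOrder);
```

Under these assumed latencies the client sees the direct send first, which is why relying on arrival order across the two paths is fragile.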
rainrain (OP) · 3w ago
To clarify my setup and what I've tried:

What I'm Building: a compliance check scanner with a real-time progress bar (like a file upload progress bar). The scanning process takes ~30 seconds and I need to show progress updates as they happen.

What I've tried:

Approach 1 - Database trigger using realtime.broadcast_changes():
CREATE OR REPLACE FUNCTION compliance_checks_changes()
RETURNS trigger AS $$
BEGIN
PERFORM realtime.broadcast_changes(
'compliance-check:' || NEW.id::text,
'UPDATE',
TG_OP,
TG_TABLE_NAME,
TG_TABLE_SCHEMA,
NEW,
OLD
);
RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER broadcast_compliance_updates
AFTER UPDATE ON compliance_checks
FOR EACH ROW EXECUTE FUNCTION compliance_checks_changes();
Approach 2 - Direct broadcasts from Railway server:
const channel = supabase.channel(`compliance-check:${checkId}`);
await channel.send({
type: 'broadcast',
event: 'UPDATE',
payload: { record: updatedData }
});
Frontend listens the same way for both (event is 'UPDATE' for realtime.broadcast_changes, or the custom event name for direct broadcasts from the backend):
channel.on('broadcast', { event: 'UPDATE' }, (payload) => {
// Update progress bar
});
The Issue:
- Local: both approaches deliver updates in real time
- Production: both approaches buffer ALL updates until the end

Since even the database-level realtime.broadcast_changes() function (which runs directly in PostgreSQL) exhibits this buffering behavior, this seems to be a Realtime infrastructure issue rather than a client implementation problem.

What happens: I send 5 separate broadcasts over 30 seconds. Each should arrive immediately after being sent. Instead, all 5 arrive simultaneously after 30 seconds.

Visual timeline:

SENDING (from server):
- 00:00 - Send broadcast #1
- 00:06 - Send broadcast #2
- 00:12 - Send broadcast #3
- 00:18 - Send broadcast #4
- 00:30 - Send broadcast #5

RECEIVING (in browser):
- 00:00-00:29 - Nothing received
- 00:30 - Broadcasts #1, #2, #3, #4, #5 all arrive together

The messages are being held somewhere and then released all at once when my process completes. Is this:
- Messages being queued until a transaction commits?
- Messages being held until a connection closes?
- Messages being batched for efficiency?
- Something else?

This happens with both realtime.broadcast_changes() from database triggers AND direct channel.send() from my server. Should I add delays between my updates to make sure each message gets sent right away?
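If the first hypothesis were right — the whole ~30-second process running inside one long database transaction, so trigger-fired broadcasts are only published at COMMIT — it would produce exactly this timeline. A toy model of that symptom (plain TypeScript, invented names; NOT Supabase internals):

```typescript
// Toy model of hypothesis #1: trigger-fired broadcasts held until COMMIT.
// All names are invented; this only illustrates the observed symptom.
type Queued = { progress: number; queuedAtMs: number };
type Delivered = { progress: number; deliveredAtMs: number };

class FakeTransaction {
  private clockMs = 0;
  private pending: Queued[] = [];

  // Each UPDATE fires the trigger, but while the transaction is open
  // the broadcast is only queued, never published.
  update(progress: number, elapsedMs: number): void {
    this.clockMs += elapsedMs;
    this.pending.push({ progress, queuedAtMs: this.clockMs });
  }

  // On COMMIT, everything is released at once, at commit time.
  commit(): Delivered[] {
    return this.pending.map((m) => ({
      progress: m.progress,
      deliveredAtMs: this.clockMs,
    }));
  }
}

const tx = new FakeTransaction();
// Five progress updates, ~6 seconds apart, all inside one transaction.
for (const p of [10, 30, 60, 90, 100]) tx.update(p, 6_000);
const delivered = tx.commit();

// All five updates share one delivery timestamp: the commit at ~30s.
console.log(delivered.map((d) => d.deliveredAtMs));
```

Under this hypothesis, the corresponding fix would be making sure each progress update commits in its own transaction rather than inside one long-running one — though only a Supabase dev can confirm whether this is what's happening.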
garyaustin · 3w ago
You will need to ask in the supabase/realtime GitHub repo or support. I don't know if this is a bug or not; no one here knows the internals of Realtime, and I've not seen or heard of buffering. With sends 6 seconds apart over 30 seconds, though, I would expect them to flow through individually. I most likely can't test this setup until Monday myself, but if I do get a chance before then, I will.