Best way to prevent supabase real-time from dying?
Do I have to send a ping every 25 seconds or something to keep the socket alive? Or is there a better way of ensuring it doesn't disconnect from inactivity?
53 Replies
If your tab goes into the background, or the device puts the browser/app into low power mode or the background, then the timers slow down and it will disconnect.
Towards the end of this issue is a newer feature that uses a web worker to keep it alive. I've not seen it documented yet. https://github.com/supabase/realtime-js/issues/121
But in reality you will likely need to deal with connection failure and use visibility to decide whether to bother reconnecting on error, or wait and reconnect when the tab is visible again. If you need all changes/messages then you have to deal with errors, restart, and collect missed data anyway, as there is no queue.
lmao this seems to be a common issue
I'm not really familiar with web workers but I think you're right about only reconnecting when the tab is active
It also saves an expensive connection, which is a billing metric.
true
was this the webworker code you were referring to?
Could be. I've not tried it, as I went with polling for some stuff, and will use broadcast from the DB if I get working on it again. Then I'll use either the message table as the queue to fetch missed messages, or my own intermediate table if the original table does not have a date to fetch from. I'll keep a local storage time of the last realtime event, and on any error or a few minutes of background I'll shut down until visible again, reconnect, and then fetch what I missed.
Even with the web worker, if you lose connection (wifi/phone network drops) and realtime reconnects even a few seconds later, you could have missed data.
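(The "local storage time of last realtime event" part would be something like this; just a sketch with a made-up storage key, not his actual code:)
```ts
// Sketch: remember how far realtime got, so you know what to refetch later.
const LAST_EVENT_KEY = 'last-realtime-event'

// Call this from every realtime handler with the row's timestamp.
function recordLastEvent(createdAt: string) {
  localStorage.setItem(LAST_EVENT_KEY, createdAt)
}

// On error, or when the tab becomes visible again, read it back and fetch
// everything with a later timestamp from the table you're watching.
function lastEventTime(): string | null {
  return localStorage.getItem(LAST_EVENT_KEY)
}
```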
yeah I saw you mention that in one of the github issues
that's not a problem because I already have TanStack Query set up to fetch the initial messages
Realtime works better for state monitoring, where if you miss a state, you pick it back up on the next one.
I'm just wondering about the best way of writing the logic for re-establishing the websocket connection
filipecabaco closed the issue and says it's released in 2.10.7, so I assumed this was no longer a problem
or that there was a built-in solution
There is no built-in solution. I believe you have to know to use that flag, unless I've missed something. No docs on it that I found.
And it does not deal with reconnect, just tries to avoid disconnecting.
so what exactly did he mean by "This has been released in 2.10.7"?
yeah well that doesn't seem to work either
https://github.com/GaryAustin1/Realtime2 and the linked discussion in it talk about/show what I looked at. But those are more just experiments to find the issues.
I assume he means the webworker flag is now available.
I just added it to my realtime test code that I always have running in Chrome, which fails in a background tab after several minutes. I'll let you know if it keeps it alive.
alright
I added this to supabase-js createClient and have had no errors in background in about 40 minutes. Usually there are errors in about 3 or 4 minutes.
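(Presumably the flag in question is the realtime `worker` option from the linked issue; a minimal sketch of passing it through createClient, assuming supabase-js forwards realtime-js options under the `realtime` key, with placeholder URL/key:)
```ts
import { createClient } from '@supabase/supabase-js'

// Sketch only: assumes the undocumented `worker` flag is accepted via the
// `realtime` options and runs the heartbeat timer in a Web Worker, so
// background-tab timer throttling doesn't starve the heartbeat.
const supabase = createClient('https://your-project.supabase.co', 'your-anon-key', {
  realtime: {
    worker: true,
  },
})
```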

Ok that's promising
this code is only responsible for keeping the socket open, right?
I assume that if my connection dropped the websocket itself would fail and reconnect but I don't know that for sure. Normally the heartbeat gets skipped and that is the first the server seems to know of a bad connection.
Well it avoids using the slowed down timers to keep the heartbeat alive. The socket does not drop in the background... the heartbeat slows down to the point it misses.
I was using one of the code blocks provided that reconnects when the connection is lost based on tab visibility, but now sending a message takes an extra ~3 seconds:
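(Roughly this pattern; a sketch with hypothetical channel, table, and handler names, not the exact block from the repo:)
```tsx
// Sketch of a visibility-based resubscribe. `messages`, `global-chat`, and
// `handleNewMessage` are made-up names; `supabase` is your existing client.
import { useEffect } from 'react'
import { supabase } from './supabaseClient' // wherever your client lives

export function useGlobalChatRealtime(handleNewMessage: (row: unknown) => void) {
  useEffect(() => {
    const subscribe = () =>
      supabase
        .channel('global-chat')
        .on(
          'postgres_changes',
          { event: 'INSERT', schema: 'public', table: 'messages' },
          (payload) => handleNewMessage(payload.new)
        )
        .subscribe()

    let channel = subscribe()

    const onVisibility = () => {
      if (document.visibilityState === 'visible') {
        // Don't trust a socket that sat in a throttled background tab:
        // tear it down and resubscribe when the user comes back.
        supabase.removeChannel(channel)
        channel = subscribe()
      }
    }

    document.addEventListener('visibilitychange', onVisibility)
    return () => {
      document.removeEventListener('visibilitychange', onVisibility)
      supabase.removeChannel(channel)
    }
  }, [handleNewMessage])
}
```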
oh i see
Not sure what sending message means in this context.
It's what I'm using realtime for, sending a message in a global chat channel

With postgres_changes there is a delay of several hundred milliseconds to several seconds from SUBSCRIBED to the point where it connects to the DB and would notice any changes. Any changes between starting the subscription code and a status coming back (not SUBSCRIBED, but the one saying postgres_changes is connected) would be missed.
Not sure why there would be a delay once connected, though.

well the delay's only occurring now after using this useEffect code snippet
This is my test subscription. The .on('system') listener detects when you really start getting data.
The initial connection delay, where it is not really connected, is from .subscribe() until the .on('system') payload with the 'postgres_changes' message in it. That is the DB connecting to the Realtime server. SUBSCRIBED is just your code connecting to the Realtime server.
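(Roughly what that test subscription looks like; a sketch with hypothetical channel and table names, not his exact code:)
```ts
// Sketch of watching for the point where postgres_changes is actually attached.
// SUBSCRIBED only means your client joined the Realtime channel; the 'system'
// payload mentioning postgres_changes is when the DB side is really listening.
const channel = supabase
  .channel('db-watch')
  .on('system', {}, (payload) => {
    console.log('system event (postgres_changes hooked up here):', payload)
  })
  .on(
    'postgres_changes',
    { event: '*', schema: 'public', table: 'messages' },
    (payload) => console.log('change:', payload)
  )
  .subscribe((status) => console.log('channel status:', status))
```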
Another reason to go to broadcast changes is to avoid all of this.
Oh I see what you mean, I think. I sent two messages back to back and the second one was a lot quicker than the first.
what do you mean
But it does not "delay" messages. They just won't be received until the 2nd event.
Supabase recommends going to Broadcast from the server instead of postgres_changes now. They have a shell built around that, called broadcast_changes, to simulate this, but it is just using private-channel Broadcast messages from a trigger function on the table you want to monitor.
ok hold on let me get the payload from system instead and see what happens
oh
This has nothing to do with the heartbeats and connection time, but it is much cleaner and faster.
https://supabase.com/docs/guides/realtime/subscribing-to-database-changes
https://supabase.com/docs/guides/realtime/broadcast?queryGroups=language&language=swift#broadcast-from-the-database
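(Client side, listening for those database-sent broadcasts looks roughly like this; a sketch with a made-up 'room:global' topic, and the trigger/broadcast_changes setup is covered in the second link:)
```ts
// Sketch of listening to Broadcast sent from the database via a trigger.
// Private channels need Realtime Authorization and an auth token on the client.
await supabase.realtime.setAuth() // pass the current session's JWT to Realtime

const channel = supabase
  .channel('room:global', { config: { private: true } })
  .on('broadcast', { event: 'INSERT' }, (payload) => console.log('inserted:', payload))
  .on('broadcast', { event: 'UPDATE' }, (payload) => console.log('updated:', payload))
  .on('broadcast', { event: 'DELETE' }, (payload) => console.log('deleted:', payload))
  .subscribe()
```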
this whole thing is just about the most complex thing I've encountered
I thought this solution would work but now it's telling me I have duplicate keys

Very much.
Is that a Supabase error or a next.js error? I don't use next.js.
it's a React error
it's referring to the .map of the messages
Do you have the same message twice in the array?
I'm assuming so because it appears twice in the chatbox
it disappears when I reload the page
I think it's because I'm manually appending the payload to the data fetched with React Query
So you might be fetching your past data and then when realtime connects you get the last piece of data again or the opposite. A race condition.
that makes sense
If you have a key you should probably be upserting the realtime row by that key and not appending, so it just replaces the record if it already exists.
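(Something like this if the messages live in the TanStack Query cache; the ['messages'] query key and Message shape here are made up:)
```ts
// Sketch: merge a realtime payload into the cached list by primary key instead
// of blindly appending, so a row you already fetched just gets replaced.
import { QueryClient } from '@tanstack/react-query'

type Message = { id: string; content: string; created_at: string }

function upsertMessage(queryClient: QueryClient, incoming: Message) {
  queryClient.setQueryData<Message[]>(['messages'], (old = []) => {
    const i = old.findIndex((m) => m.id === incoming.id)
    if (i === -1) return [...old, incoming] // genuinely new row: append
    const next = [...old]
    next[i] = incoming // already there (race with the initial fetch): replace
    return next
  })
}
```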
Ok so I replaced
with
but now it's consistently slow
because invalidateQueries is refetching the data from the database whenever there's a new payload
All of the data?
yes
At least use a last key so you only fetch rows newer than that.
I've not seen any good examples of dealing with this stuff, and you are into the realm where there are many different approaches and opinions on how to deal with it. It gets more complex if there are deletes involved.
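(i.e. something along these lines, a sketch with hypothetical table and column names:)
```ts
// Sketch: on a new payload (or on reconnect), fetch only rows newer than the
// newest message already in the cache instead of refetching the whole table.
async function fetchMissed(lastCreatedAt: string | undefined) {
  const { data, error } = await supabase
    .from('messages')
    .select('*')
    .gt('created_at', lastCreatedAt ?? '1970-01-01')
    .order('created_at', { ascending: true })
  if (error) throw error
  return data // merge these into the cache with the upsert-by-key approach above
}
```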
In my repository I have code like this to keep an in-memory array of records updated.

But that dealt with all three types of changes to a table.
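(Not his actual repo code, but a minimal sketch of that idea, covering all three event types with a hypothetical Row type keyed on id:)
```ts
// Sketch: keep an in-memory array in sync with INSERT / UPDATE / DELETE events.
type Row = { id: string; [key: string]: unknown }

let memoryTable: Row[] = []

supabase
  .channel('table-sync')
  .on(
    'postgres_changes',
    { event: '*', schema: 'public', table: 'messages' },
    (payload) => {
      if (payload.eventType === 'DELETE') {
        // payload.old carries the primary key of the deleted row
        const oldId = (payload.old as Partial<Row>).id
        memoryTable = memoryTable.filter((r) => r.id !== oldId)
        return
      }
      const row = payload.new as Row
      const i = memoryTable.findIndex((r) => r.id === row.id)
      if (i === -1) memoryTable.push(row) // new row
      else memoryTable[i] = row // overwrite in case the initial fetch already had it
    }
  )
  .subscribe()
```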
and you initially populated this table by fetching the data first?
It is in the repository but I fetch the data AFTER postgres changes is connected to init the table.
There could still be data coming through realtime that is already in the initial data, though, so I make sure to overwrite versus just add.
yeah I saw you saying that doing it in that order prevents the user from missing data
yeah exactly
URGHH think ima just work on this again tomorrow
I might do it your way with the memoryTable because I do plan on adding edits and deletes later
what is the memory table anyway? Is it a table in Redis, local storage?
Is the data something used all the time by the user or only on demand? Like they go to see it in a tab?
on demand I'm guessing, they're only going to need the global chat when it's in view or they're typing in it
That one is just memory. For testing.
My real app used indexedDB to store records and would get updated.
I'm assuming that IndexedDB isn't remote
But I only kept the past 100 records for my purposes. If you searched the data it went to the DB.
Right, it is part of browsers.
oh I didn't know that
so would it be better to use that or Redis in my use case
I'm not familiar with either
https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API
Lots of shims for it.
I don't really know about Redis. There are many methods out there to handle it. But for my case it had to have your last batch of records for offline use, and you could change them on another device, so they needed to sync. I was also using realtime for a chat feature with rooms, but quickly went to only turning on realtime when you were in the chatroom, and polling to detect any change, to avoid connections if someone just had the app open. This was all before broadcast messages with security, which is the way I'll go now if I get back to it.
Why there doesn't seem to be a guide for this whole thing is beyond me