Reconcile query invalidation with fresh data, selectively overwrite cache.
We are looking at a long-overdue rewrite of our app, but in the interim I'm trying to make the app friendlier for updates from multiple users. The quick way, which works great, is to invalidate the cache on every update with optimistic updates (and refetch on window refocus). The problem is that we return a lot of data every time we do this. What I was hoping to do instead is an initial load of all data, and then on subsequent invalidations only send the data that has been altered since the most recent invalidation (maybe a list of ids as well, to check for deletions). Then I would only update the rows that have changed (and remove the ones that have been deleted). Do I want to setQueryData on a successful update, instead of invalidateQueries? This is in lieu of pagination or infinite scroll, which will be features of the rewrite, along with more focused data fetching in general. So for example, with a list of todos, changing one todo's title would only return todos whose updated_at property is after a certain date (the initial data load), while keeping the other todos, which haven't been updated and so don't need to change. I'm thinking of the case where one user changes a todo title and another user changes a different title: the second user should see the first user's update.
I just need a quick QOL improvement that would reduce a user's chances of overwriting other users' data.
6 Replies
eastern-cyan•12mo ago
You can provide a queryKey to filter on invalidateQueries. Is this a case where your queryKeys aren't already reasonably granular?
And is there a lot of query data on a page at a given time?
crude-lavender•12mo ago
Seems like quite a few ways to implement something here.
What if...
onSuccess of your mutation calls a fetch that returns all todos where updated_at > dataUpdatedAt, and you use the callback version of setQueryData to merge the results in with a find/loop? Instead of invalidating the entire todos query, you're just updating the "old" ones.
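A sketch of that merge (the Todo shape, the ['todos'] key, and fetchTodosUpdatedAfter are assumptions for illustration, not the actual schema or API):

```typescript
// Hypothetical row shape; substitute your actual schema.
interface Todo {
  id: number;
  title: string;
  updated_at: string;
}

// Overwrite cached rows that came back from the delta fetch,
// keep everything else untouched.
function mergeUpdated(cached: Todo[], updated: Todo[]): Todo[] {
  const byId = new Map(updated.map((t) => [t.id, t]));
  return cached.map((t) => byId.get(t.id) ?? t);
}

// Inside the mutation (illustrative; fetchTodosUpdatedAfter is your API):
//
// onSuccess: async () => {
//   const since = queryClient.getQueryState(['todos'])?.dataUpdatedAt ?? 0;
//   const fresh = await fetchTodosUpdatedAfter(since);
//   queryClient.setQueryData<Todo[]>(['todos'], (old = []) =>
//     mergeUpdated(old, fresh),
//   );
// },
```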
After rereading what you said: this only helps with mutations that are plain UPDATEs; it wouldn't handle removing DELETE-d todos.
I guess this would need a fancier API and probably user discretion (probably 100 different ways to do this).
Just spit ballin': you provide all ids from your current todos plus dataUpdatedAt, and the result would be a "trimmed" response.
Only return:
- todos with type: 'updated' and the entity, because updated_at > dataUpdatedAt
- an id and type: 'deleted', because the row no longer exists (allowing you to pull it from your list)
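A sketch of applying that trimmed response on the client (the Todo shape and the tagged-union encoding are assumptions based on the bullets above):

```typescript
interface Todo {
  id: number;
  title: string;
  updated_at: string;
}

// One entry per changed row, as described above.
type DeltaEntry =
  | { type: 'updated'; entity: Todo }
  | { type: 'deleted'; id: number };

// Drop deleted ids, overwrite updated rows, keep the rest,
// and append entities we've never seen (created since last fetch).
function applyDelta(cached: Todo[], delta: DeltaEntry[]): Todo[] {
  const updated = new Map<number, Todo>();
  const deleted = new Set<number>();
  for (const d of delta) {
    if (d.type === 'updated') updated.set(d.entity.id, d.entity);
    else deleted.add(d.id);
  }
  const merged = cached
    .filter((t) => !deleted.has(t.id))
    .map((t) => {
      const fresh = updated.get(t.id);
      if (fresh) updated.delete(t.id); // consumed; leftovers are new rows
      return fresh ?? t;
    });
  return [...merged, ...updated.values()];
}
```

This would be the callback you pass to setQueryData in place of a full invalidation.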
Seems like I reiterated your thoughts: partial updates with setQueryData.
rising-crimsonOP•11mo ago
We are dealing with a rickety old version of react data grid, with fairly brittle architecture around it. At the moment we are fetching the entire set of rows and filtering on the front end. Which works great... until we get up to around a thousand rows, with complicated SQL join queries, etc. I'm just trying to get a temporary solution in place before we rethink the entire architecture and can implement more intelligent data fetching.
I settled on this... in case the code is not self-explanatory: we have a lastFetched state which initializes to null (full data fetch) and then updates on each successive fetch. We merge the fetched data with the existing data. It seems to work great right now, but can anyone tell me whether this is dangerous or an antipattern?
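Roughly, the pattern described can be sketched like this (all names hypothetical; fetchTodos is a stub for the real endpoint, which applies the WHERE updated_at > lastFetchedAt clause server-side):

```typescript
interface Todo {
  id: number;
  title: string;
  updated_at: string;
}

// Stub for the real API call: null means an initial full load,
// otherwise the server returns only rows changed since that time.
async function fetchTodos(lastFetchedAt: string | null): Promise<Todo[]> {
  const all: Todo[] = [
    { id: 1, title: 'first', updated_at: '2024-01-01' },
    { id: 2, title: 'second (edited)', updated_at: '2024-02-01' },
  ];
  return lastFetchedAt === null
    ? all
    : all.filter((t) => t.updated_at > lastFetchedAt);
}

// Merge a delta fetch into the previously held rows: overwrite
// changed rows, keep unchanged ones, append rows we've never seen.
function mergeRows(previous: Todo[], fetched: Todo[]): Todo[] {
  const byId = new Map(fetched.map((t) => [t.id, t]));
  const kept = previous.map((t) => {
    const fresh = byId.get(t.id);
    if (fresh) byId.delete(t.id); // consumed; leftovers are new rows
    return fresh ?? t;
  });
  return [...kept, ...byId.values()];
}
```

The merged result then goes back into the cache (e.g. via setQueryData or the queryFn's return value), and lastFetched advances to the time of the fetch.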
lastFetchedAt adds a WHERE clause to our db query, meaning user 1 sees user 2's updates when user 1 changes a value. So in a 1000-item data set, we're only ever really fetching a few rows.
crude-lavender•11mo ago
Wouldn't using dataUpdatedAt save you from syncing that extra state?
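dataUpdatedAt (on the useQuery result, or via queryClient.getQueryState(queryKey)?.dataUpdatedAt) is a millisecond timestamp that stays 0 until the query first succeeds, so it maps onto the null-means-full-load convention in one line (a sketch):

```typescript
// dataUpdatedAt is 0 before the first successful fetch, then a
// ms timestamp of the last success. Convert it to the
// "null means full load" convention used above:
function toLastFetched(dataUpdatedAt: number): string | null {
  return dataUpdatedAt === 0 ? null : new Date(dataUpdatedAt).toISOString();
}
```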
rising-crimsonOP•11mo ago
oh @troywoy that's awesome, I didn't see that in the docs!
crude-lavender•11mo ago
It's returned from useQuery too, but I don't know if there'd be any issue using that within its own queryFn.