How do I work with very large collections?
Many of the datasets I want to use this for are so large that they could not (and should not) be loaded entirely on the frontend.
I already have server-side filtering and am wondering how best to convert this to TanStack DB.
I don't really have any idea how to start on that.
- Do I load only a subset of the data, do client-side filtering, and add a "load more" button? That seems awful because it likely hides data from the user
- Do I do a hybrid where I do client-side filtering but also load more data matching the filter asynchronously?
- Do I just load gigabytes worth of data?
I guess these questions are not necessarily TanStack-specific, but I'm new to local-first / sync engines etc.
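For context, my current server-side filtering is roughly this shape (the endpoint and field names here are illustrative, not my real API):

```ts
// Illustrative only: a plain fetch against a server-side filter endpoint.
// The server applies the filter and returns only the matching rows, so the
// frontend never holds the full dataset.
type Item = { id: string; status: string; title: string }

async function fetchItems(filter: { status?: string; search?: string }): Promise<Item[]> {
  const params = new URLSearchParams()
  if (filter.status) params.set('status', filter.status)
  if (filter.search) params.set('search', filter.search)

  const res = await fetch(`/api/items?${params.toString()}`)
  if (!res.ok) throw new Error(`Failed to load items: ${res.status}`)
  return (await res.json()) as Item[]
}
```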
3 Replies
stormy-gold•2mo ago
These two can be helpful:
https://discord.com/channels/719702312431386674/1408402028245422131
https://github.com/TanStack/db/issues/343
GitHub
Paginated / Infinite Collections · Issue #343 · TanStack/db
A common request we are receiving is to lazily load data into a "query collection" using the infinite query pattern. We need to consider how to support this in a way that is then useable ...
stormy-gold•2mo ago
And this one - https://github.com/TanStack/db/issues/315
GitHub
Partitioned collections · Issue #315 · TanStack/db
A very common use case, and question, is how to handle collections where you don't want to download all of it. Such as issues in an issue tracker, downloading by project/status/createdData etc....
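A rough sketch of the partitioned idea from that issue: create one collection per server-side filter, so only the matching rows are synced down and live queries run over that subset. This assumes the `queryCollectionOptions` helper from `@tanstack/query-db-collection`; the exact options may differ, so check the issue and docs for what's actually supported.

```ts
import { QueryClient } from '@tanstack/query-core'
import { createCollection } from '@tanstack/db'
import { queryCollectionOptions } from '@tanstack/query-db-collection'

type Issue = { id: string; projectId: string; status: string; title: string }

const queryClient = new QueryClient()

// One collection per (project, status) partition: the server does the heavy
// filtering, and the client only syncs the rows for that partition.
// Endpoint and types are hypothetical.
function issuesForProject(projectId: string, status: string) {
  return createCollection(
    queryCollectionOptions<Issue>({
      queryKey: ['issues', projectId, status],
      queryFn: async () => {
        const res = await fetch(
          `/api/projects/${projectId}/issues?status=${encodeURIComponent(status)}`
        )
        if (!res.ok) throw new Error(`Failed to load issues: ${res.status}`)
        return (await res.json()) as Issue[]
      },
      queryClient,
      getKey: (issue) => issue.id,
    })
  )
}

// Usage: only one project's "open" issues ever reach the client.
const openIssues = issuesForProject('project-123', 'open')
```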
plain-purple•2mo ago
@devjmetivier fyi