Extending a spark extension?
I'm working with ash_graphql. The codebase I'm working in has a bunch of old non-Ash JSON:API routes. There's a part of the application that decides whether to send requests to the DB read replica or the writer, which for the old JSON:API stuff was wired up to go to the writer if it was a POST request, or if the route was in a list of ignored routes.

For GraphQL this breaks down, since everything comes through as a POST, so I'm reworking how it decides to use the reader when it's an ash_graphql request. In general I can anchor on whether it's a query or mutation, but some queries will end up doing writes as well. So I'm hoping for a simple way to just tack an extra config onto any query in the `graphql` > `queries` block that allows marking that query as intended to use the writer.
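Something like this is what I'm imagining (hypothetical — the `use_writer?` option doesn't exist, it's the thing I want to add):

```elixir
graphql do
  type :post

  queries do
    get :get_post, :read
    list :list_posts, :read

    # what I'd like to be able to write:
    list :search_posts, :search do
      use_writer? true
    end
  end
end
```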
tl;dr: I'd like a way to add a new piece of config to the various `queries` config objects that some custom plug would be able to hook into. I swear I've seen some way to do this in docs somewhere but it's eluding me now 😅
you can't 😄
Extensions can add entities to other DSL sections
but cannot add options to entities
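roughly, it looks like this (sketch with made-up entity/struct names — `dsl_patches` with `Spark.Dsl.Patch.AddEntity` is the mechanism):

```elixir
defmodule MyApp.WriterQuery do
  # target struct the new entity is built into
  defstruct [:name]
end

defmodule MyApp.WriterQueriesExtension do
  @writer_query %Spark.Dsl.Entity{
    name: :writer_query,
    target: MyApp.WriterQuery,
    args: [:name],
    schema: [name: [type: :atom, required: true]]
  }

  # an extension can patch a *new* entity into ash_graphql's queries
  # section, but can't add options to its existing get/list/etc. entities
  use Spark.Dsl.Extension,
    dsl_patches: [
      %Spark.Dsl.Patch.AddEntity{
        section_path: [:graphql, :queries],
        entity: @writer_query
      }
    ]
end
```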
Actually now that you say that, that makes sense, since all the info modules expect structs of a certain type
(I think)
exactly
In terms of hacky workarounds, what's the `meta` field for on the various query functions? 😅 it just says "A keyword list of metadata for the query." haha
I suspect not what I'm looking for lol
Can you not just switch the repo with the function version of the `repo` DSL in the `postgres` section?
per gql query though?
or I guess could even be per action
https://hexdocs.pm/ash_postgres/dsl-ashpostgres-datalayer.html#postgres-repo
it'll be per database query
whoa, interesting
otherwise you can set repo options in the context, so you can set various actions to use specific repos
You can literally do
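something in this shape (sketch — repo module names are placeholders):

```elixir
postgres do
  # decides per database query: reads hit the replica,
  # anything mutating hits the writer
  repo fn _resource, type ->
    case type do
      :read -> MyApp.ReplicaRepo
      :mutate -> MyApp.Repo
    end
  end
end
```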
No hacky shit w/ methods or API stuff 😆
https://hexdocs.pm/ash_postgres/using-multiple-repos.html
Fascinating, never knew that was a thing. What happens if, say, a request does a mutation and then immediately a read? I know replication is generally quite fast, but so far we've been very cautious to keep the writer "sticky" per API request, which is part of why our current setup is so complicated
heads up, we fixed a bug there in `main` of ash_postgres, which is not released yet
Yeah right now it would just switch repos
But you can do this
Since it's always one action getting called by the API
And then as long as you're passing context along properly, all nested action calls will use that repo
Using the new `scope` option it's easier than ever, because all context arguments passed into callbacks are valid scopes
https://hexdocs.pm/ash/Ash.Scope.html#module-passing-scope-and-options
that plus shared context and you're winning
https://hexdocs.pm/ash/actions.html#shared
I have some reading to do haha
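e.g. a change whose after-action hook passes its context along as a scope, so nested calls inherit actor/tenant/shared context (sketch — module names are made up):

```elixir
defmodule MyApp.Changes.NotifyFollowers do
  use Ash.Resource.Change

  @impl true
  def change(changeset, _opts, context) do
    Ash.Changeset.after_action(changeset, fn _changeset, result ->
      # `context` here is a valid scope, so passing it as `scope:` carries
      # actor, tenant, and shared context into the nested read
      _followers = Ash.read!(MyApp.Accounts.Follower, scope: context)
      # ... send notifications here ...

      {:ok, result}
    end)
  end
end
```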
sidenote: I think the thing I was thinking of was the ability to have your own `Resource` module and `use MyApp.Resource` instead of `use Ash.Resource`, which is definitely not the same as what I was trying to do haha. But your tips seem much better than my idea haha
some people do make their own base resource with a using macro, but it's rare
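something like this, for reference (sketch):

```elixir
defmodule MyApp.Resource do
  # a base resource: every resource does `use MyApp.Resource`
  # instead of `use Ash.Resource`, with shared defaults baked in
  defmacro __using__(opts) do
    quote do
      use Ash.Resource, unquote(opts)
    end
  end
end
```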
When did scope happen? Must have missed that announcement
looks cool
most of the time people use snippets for shared DSL behaviour.
Oh, scope was added in response to the changes in LiveView
Yep.
And we got a few nice things (like all contexts being scopes) that fell out of it and are actually really nice QOL improvements for Ash
unrelated to actual phoenix scopes
ah nice. I am out of the loop on LV stuff since the stuff I work on is all API driven
So, to summarize/make sure I'm following: what you're saying is basically, use the function version of `repo` in all resources, and then in the special cases where some action that writes also does a read, e.g. in an after_action or something, put in a change that manually sets the repo to the writer? Whether that's via scope or directly via `Ash.Changeset.set_context` is more of a detail?
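i.e. something like this? (guessing at the context shape for repo selection from the multiple-repos guide):

```elixir
defmodule MyApp.Changes.UseWriter do
  use Ash.Resource.Change

  @impl true
  def change(changeset, _opts, _context) do
    # pin this action's queries to the writer
    # (the data_layer/repo key shape is my assumption)
    Ash.Changeset.set_context(changeset, %{data_layer: %{repo: MyApp.Repo}})
  end
end
```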
if I followed that correctly, my concern would be that it'd be easy to miss some of these cases since you'd have to handle them explicitly every time. But I may have misunderstood
yup
Won't after actions, etc. happen in the same repo because they're run in the same transaction @Zach?
🤔 that's probably a good point actually
I may be overthinking this 😅
you can verify it manually by just changing the `:read` repo to raise 🙂
Although the same issue could come up if I, for example, wrote something like this
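(hypothetical snippet, using `MyApp.Blog.Post` as a stand-in resource:)

```elixir
# the update goes through the :mutate repo (the writer)...
post =
  post
  |> Ash.Changeset.for_update(:update, %{title: "New title"})
  |> Ash.update!()

# ...but this immediate follow-up read selects the :read repo,
# which may not have replicated the write yet
post = Ash.get!(MyApp.Blog.Post, post.id)
```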
but at that point it feels like it goes beyond what I'd expect ash to have a built-in solution for
You can use `change get_and_lock_for_update()` 🙂 which at least means that when you run the update (if it can't be done atomically) it will have the latest version of the data from the db
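e.g. (sketch — I'm assuming `require_atomic? false` is needed here, since the lock forces a non-atomic update):

```elixir
update :update do
  # locking means this update can't run atomically
  require_atomic? false

  accept [:title]

  # builtin change: re-fetches the record with a row lock inside
  # the transaction before applying the update
  change get_and_lock_for_update()
end
```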
Well yeah, but the point was less that exact code and more that if I were to do an immediate read after a write, it's potentially dangerous. But point taken, you can always just not read from the reader immediately after a write haha
Right, but the only reason you'd do that is if you wanted to load extra data rather than using the result of the update itself, and you can just pass `load: ...` to the update options if you want that.
fair enough. I think my brain is thinking about this in terms of the old janky proto-Ash framework our old solution is built for haha. Ash definitely simplifies it a lot
your only other option is some sort of distributed locking solution
like not committing the transaction until a majority of read replicas are in sync or something
but if it's that big of a problem, maybe you should use a sharding system rather than read replicas? something like Citus maybe
It's not that big of a problem, really
I think what I'm grappling with is that setting reader vs writer at the action level feels inherently more fragile than setting it at e.g. the request and/or process level. For requests via the various API extensions like gql it's pretty much fine because 1 request = 1 action, but for like a custom controller, or a liveview running arbitrary ash actions, etc. it feels like it would be incredibly easy to accidentally "unstick" from the writer.
It's possible I'm overthinking it though, I'm not sure how much "unsticking" from the writer within a request would actually cause real world issues, because replication can be so fast. But the previous solution the codebase I'm working in had, was to set the ecto dynamic repo at the request level, which means once you touch the writer once, you stay there for the duration of that request.
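for reference, the old approach was roughly this shape (heavily simplified — module names and paths are made up):

```elixir
defmodule MyAppWeb.Plugs.PinRepo do
  @behaviour Plug

  # hypothetical list of routes that must always hit the writer
  @always_write_paths ["/api/v1/legacy_import"]

  @impl true
  def init(opts), do: opts

  @impl true
  def call(conn, _opts) do
    # the dynamic repo lives in the process dictionary, so whatever
    # we pick here sticks for the rest of the request
    if conn.method == "POST" or conn.request_path in @always_write_paths do
      MyApp.Repo.put_dynamic_repo(MyApp.WriterRepo)
    else
      MyApp.Repo.put_dynamic_repo(MyApp.ReaderRepo)
    end

    conn
  end
end
```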
> it feels like it would be incredibly easy to accidentally "unstick" from the writer.
in those cases you can set the context on all action calls if you wanted
I get that you can always override, but my point is more that something like
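(hypothetical example:)

```elixir
# a made-up code-interface call that looks like a plain read
posts = MyApp.Blog.list_posts!()
```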
which looks totally innocuous is actually hiding some gnarly reader/writer semantics when the reader/writer is selected at the action level. vs. at the request/process level, you can be confident it all goes to the same place
I guess my point is "those cases" are not always clear
therefore, you get fragile behavior
Yeah I think the answer is "don't do that"
In my case I can live with that since I'm mostly API-centric, but if I was doing a ton of LV stuff, that would get real dicey real fast, I feel
Even with LV it would make less sense to have some kind of sticky repo
Sometimes you're reading, sometimes you're writing
You always have to scope it somehow
And that scope should likely be an action
I agree it has to be scoped somehow, just not sure I agree the action is the right unit of scoping 😅
But, I can live with it
But like imagine a custom controller
Why would you not call an action to do the work it does?
You can use a generic action to do arbitrary logic
And set the repo context there
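e.g. (sketch — names are made up, and the data_layer/repo context shape is my assumption from the multiple-repos guide):

```elixir
# a generic action standing in for what a custom controller would do
action :import_post, :struct do
  constraints instance_of: __MODULE__
  argument :payload, :map, allow_nil?: false

  run fn input, context ->
    __MODULE__
    |> Ash.Changeset.for_create(:create, input.arguments.payload)
    # pin the nested write to the writer (assumed context shape)
    |> Ash.Changeset.set_context(%{data_layer: %{repo: MyApp.Repo}})
    # pass our own context along as scope so actor/tenant/shared flow
    |> Ash.create(scope: context)
  end
end
```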
that's fair
You might be convincing me. I need to go tinker and get a better feel for how this stuff plays out in practice haha