Policy engine duplicates/optimizations
To what degree does the policy engine internally prevent reexecution of identical policy checks? Will it evaluate a given MFA only once per request?
If so, is there a way to indicate that a policy check is, e.g., not dependent on the underlying record?
Facts are stored in a map keyed by the check module/opts (and some other context),
which can be a bit complex,
but yes, it will not evaluate the same check multiple times (or if it does, that is a bug).
i.e.
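(The original snippet wasn't preserved, so here is a rough sketch of the kind of policy section being described — condition() stands in for whatever check guards the policies, and Foo is a placeholder check module:)

```elixir
policies do
  # condition() guards both policies and {Foo, bar: :baz} appears in both,
  # but each distinct check is evaluated at most once per request.
  policy condition() do
    authorize_if {Foo, bar: :baz}
  end

  policy condition() do
    forbid_unless {Foo, bar: :baz}
    authorize_if always()
  end
end
```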
That will evaluate condition() once and {Foo, bar: :baz} once.
We also prune branches that we can determine to be statically unnecessary.
For example:
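(Again a hypothetical excerpt — OtherCheck is a placeholder:)

```elixir
policy action_type(:read) do
  # If the first check resolves to true, the policy is already satisfied,
  # so the second check is pruned and never evaluated.
  authorize_if {Foo, bar: :baz}
  authorize_if {OtherCheck, option: :value}
end
```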
if {Foo, bar: :baz} is true, we won't bother with the next one.

Is there any form of parallelism in the existing check evaluation engine?
Not currently, no.
There probably will never be.
Well... maybe
There are transactional concerns with parallelizing, and we don't necessarily know what all resources could be used within a check.
Why do you ask?
What you might see is that multiple requests to load related data will run the same check, i.e.
users |> load(:posts)
when they both contain {Foo, bar: :baz}, they will both run it. We could potentially resolve that in the future.

Yep, there are a lot of pitfalls.
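(A sketch of the situation just described, with the resources trimmed to just the relevant policy — MyApp.User, MyApp.Post, and Foo are placeholders, and attributes/relationships are omitted:)

```elixir
defmodule MyApp.User do
  use Ash.Resource, authorizers: [Ash.Policy.Authorizer]

  policies do
    policy always() do
      authorize_if {Foo, bar: :baz}
    end
  end
end

defmodule MyApp.Post do
  use Ash.Resource, authorizers: [Ash.Policy.Authorizer]

  policies do
    # Same check as on MyApp.User; users |> load(:posts) authorizes the users
    # read and the posts load separately, so {Foo, bar: :baz} runs once for each.
    policy always() do
      authorize_if {Foo, bar: :baz}
    end
  end
end
```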
I think we could add a flag on simple check policies that says they only use the actor, not the changeset/query, and then cache those for all requests that we do in one call to Ash, i.e.
Resource |> load(:relationship)
we could cache it for those two things.

That would definitely make sense. Same perhaps for context, or even keys/paths in context, at some point. Depends on how much complexity that would introduce.
I don't suppose this is caching between action calls though, so context as a whole is fine.
Yep
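(That actor-only flag doesn't exist today, but the kind of simple check it would apply to might look like this — the module name is hypothetical:)

```elixir
defmodule MyApp.Checks.ActorIsAdmin do
  # A check that depends only on the actor, never on the query/changeset,
  # so its result could in principle be cached for every request made in
  # a single call to Ash.
  use Ash.Policy.SimpleCheck

  @impl true
  def describe(_opts), do: "actor is an admin"

  @impl true
  def match?(%{admin: true}, _context, _opts), do: true
  def match?(_actor, _context, _opts), do: false
end
```

A policy would then reference it with authorize_if MyApp.Checks.ActorIsAdmin.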
The same logic could be applied to parallel execution. A keyed dependency graph would take care of any possibility, but just a simple flag for "don't worry about duplicate work/transactional concerns" would be a start.
Yep! We actually already have that infrastructure underpinning all Ash operations (the Ash Engine)
We use a graph resolver internally to do things like load relationships/calculations/calculation dependencies
and that same engine runs Ash.Flow as well.
So we should be able to leverage it here 😄

Fantastic. Is there a way to tune the pool for this engine currently?
Oh, no — its operation is isolated to individual calls to Ash still.
So it's not like the same engine is used by successive requests or anything;
it's just a tool for managing complex control flow of actions.
Gotcha, but it already incorporates parallelism for blocking operations?
Yep! But it's not currently tunable. It uses a concurrency limit of
System.schedulers_online() * 2
However, this usually only comes into play on read actions, as most mutations will be done in a transaction and so must be serialized.

Right on