Policy engine duplicates/optimizations

To what degree does the policy engine internally prevent re-execution of identical policy checks? Will it evaluate a given MFA only once per request? If so, is there a way to indicate that a policy check is, e.g., not dependent on the underlying record?
ZachDaniel · 3y ago
Facts are stored in a map keyed by the check module/opts (and some other context), which can be a bit complex, but yes, it will not evaluate the same check multiple times (or if it does, that is a bug). i.e.
policy condition() do
  authorize_if {Foo, bar: :baz}
end

policy condition() do
  authorize_if {Foo, bar: :baz}
end

policy condition() do
  authorize_if {Foo, bar: :baz}
end

policy condition() do
  authorize_if {Foo, bar: :baz}
end
That will evaluate condition() once and {Foo, bar: :baz} once. We also prune branches that we can determine to be statically unnecessary. For example:
policy condition() do
  authorize_if {Foo, bar: :baz}
  authorize_if {Other, bar: :baz}
end

policy condition() do
  authorize_if {Foo, bar: :baz}
  authorize_if {Other, bar: :baz}
end
If {Foo, bar: :baz} is true, we won't bother with the next one.
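For reference, a check like {Foo, bar: :baz} refers to a check module plus its options. A minimal sketch of what Foo could look like, assuming it is built on the Ash.Policy.SimpleCheck behaviour (the body of match?/3 and the :role comparison are invented for illustration):

defmodule Foo do
  use Ash.Policy.SimpleCheck

  # Human-readable description shown in policy breakdowns
  def describe(opts), do: "bar is #{inspect(opts[:bar])}"

  # Purely actor-based logic; the engine stores the result as a fact
  # (roughly keyed by {Foo, [bar: :baz]}) and reuses it rather than
  # evaluating the same check again within the same authorization.
  def match?(actor, _context, opts) do
    actor && Map.get(actor, :role) == opts[:bar]
  end
end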
OP · 3y ago
Is there any form of parallelism in the existing check evaluation engine?
ZachDaniel · 3y ago
Not currently, no, and there probably never will be. Well... maybe. There are transactional concerns with parallelizing, and we don't necessarily know which resources could be used within a check. Why do you ask? What you might see is that multiple requests to load related data will run the same check, i.e. if two users |> load(:posts) requests both contain {Foo, bar: :baz}, they will both run it. We could potentially resolve that in the future.
OP · 3y ago
Yep, there are a lot of pitfalls.
ZachDaniel · 3y ago
I think we could add a flag on simple check policies that says they only use the actor, not the changeset/query, and then cache those results for all requests we do in one call to Ash. i.e. for Resource |> load(:relationship), we could cache it across those two things.
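For illustration only, such a flag might be expressed as an extra check option; the actor_only? option below is hypothetical and not a real Ash option:

policy condition() do
  # hypothetical flag: declares that Foo reads only the actor, never the
  # query/changeset, so its result could be cached across every request
  # made within a single call to Ash
  authorize_if {Foo, bar: :baz, actor_only?: true}
end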
OP · 3y ago
That would definitely make sense. Same perhaps for context or even keys/paths in context at some point. Depends on how much complexity that would introduce. I don't suppose this is caching between action calls though, so context as a whole is fine.
ZachDaniel · 3y ago
Yep
OP · 3y ago
The same logic could be applied to parallel execution. A keyed dependency graph would take care of any possibility, but just a simple flag for "don't worry about duplicate work/transactional concerns" would be a start.
ZachDaniel · 3y ago
Yep! We actually already have that infrastructure underpinning all Ash operations (the Ash Engine). We use a graph resolver internally to do things like loading relationships/calculations/calculation dependencies, and that same engine runs Ash.Flow as well. So we should be able to leverage it here 😄
OP · 3y ago
Fantastic. Is there a way to tune the pool for this engine currently?
ZachDaniel · 3y ago
Oh, no, its operation is still isolated to individual calls to Ash. So it's not like the same engine is used by successive requests or anything; it's just a tool for managing the complex control flow of actions.
OP · 3y ago
Gotcha, but it already incorporates parallelism for blocking operations?
ZachDaniel · 3y ago
Yep! But it's not currently tunable. It uses a concurrency limit of System.schedulers_online() * 2. However, this usually only comes into play on read actions, as most mutations are done in a transaction and so must be serialized.
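As a rough illustration of that kind of cap (not the engine's actual implementation), the same limit can be applied to any batch of work with Task.async_stream:

# Illustrative only: cap concurrent work at twice the number of online
# schedulers, the same limit described for the engine above.
max_concurrency = System.schedulers_online() * 2

1..10
|> Task.async_stream(fn n -> n * n end, max_concurrency: max_concurrency)
|> Enum.map(fn {:ok, result} -> result end)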
OP · 3y ago
Right on
