Is there a way to define a DSL for a new type of attribute?
I've implemented a ULID type, which is one of the alternatives to UUID. I can use it without a problem right now by providing `type` and `default` to `uuid_primary_key`. But I wanted to venture into making a small Spark extension that adds `ulid_primary_key`.
With my naive approach I wrote something like this:
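(The snippet itself didn't survive in this thread. A rough sketch of the kind of attempt described, a Spark extension using `dsl_patches`, where `MyApp.Ulid` and `MyApp.Ulid.generate/0` are assumed placeholder names:)

```elixir
# Sketch of a Spark extension that patches a ulid_primary_key entity into
# the attributes section. MyApp.Ulid is an assumed Ash.Type implementation.
defmodule MyApp.UlidPrimaryKey do
  @ulid_primary_key %Spark.Dsl.Entity{
    name: :ulid_primary_key,
    describe: "Declares a non-writable ULID primary key",
    args: [:name],
    target: Ash.Resource.Attribute,
    schema: Ash.Resource.Attribute.attribute_schema(),
    auto_set_fields: [
      type: MyApp.Ulid,
      primary_key?: true,
      allow_nil?: false,
      writable?: false,
      default: &MyApp.Ulid.generate/0
    ]
  }

  use Spark.Dsl.Extension,
    dsl_patches: [
      %Spark.Dsl.Patch.AddEntity{
        section_path: [:attributes],
        entity: @ulid_primary_key
      }
    ]
end
```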
And then I wanted to apply it to a resource with `extensions: [AshAuthentication]`. But I got an error about the `attributes` section not being patchable. I see that unless a section explicitly allows patching, it is not allowed. Is it because of some security/performance considerations? So currently it is impossible to add new attribute shortcuts, right?
Hi @Vonagam, take a look at our extension here: https://github.com/zoonect-oss/ash_uuid
I think it's very similar and can help 🙂
Oh, I saw your thing but thought that it reuses and changes the behaviour of `uuid_primary_key`.
But I see that you have your own `uuid_attribute`. And the solution is quite simple: just a macro that produces an `attribute` call, cool.
Thanks.
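For anyone reading along, the macro route is roughly this (a sketch, assuming a `MyApp.Ulid` type with a `generate/0` function; all names here are illustrative):

```elixir
defmodule MyApp.UlidAttribute do
  # Just sugar: expands into a regular `attribute` call inside the
  # attributes section, with ULID-appropriate defaults that the caller
  # can still override through `opts`.
  defmacro ulid_primary_key(name, opts \\ []) do
    quote do
      attribute unquote(name),
                MyApp.Ulid,
                Keyword.merge(
                  [
                    primary_key?: true,
                    allow_nil?: false,
                    writable?: false,
                    default: &MyApp.Ulid.generate/0
                  ],
                  unquote(opts)
                )
    end
  end
end

# Usage in a resource: `import MyApp.UlidAttribute`, then inside
# `attributes do ... end` write `ulid_primary_key :id`.
```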
Not that it is super important, but does such a macro play well with Ash's autocomplete feature (where it provides available DSL sections/entries/options), or does it not?
And looking at the code, I assume such a macro will not allow `do end` syntax for specifying options, only a keyword list (again, not that important, just assessing downsides, but it's good to have a working option).

Unfortunately, in the current version neither Ash's autocomplete nor the `do end` syntax works, but we can ask @Zach Daniel how I can enable these features: I'm happy to improve the extension if I can 🙂
Using a macro for `uuid_attribute` is a tip that Zach gave me here: https://discord.com/channels/711271361523351632/1134499826579546122/1134675700125810780.
I also tried using `dsl_patches` initially and got stuck on the same error.

@moissela we can just make the attributes section patchable 🙂
Just need to add `patchable?: true` to this struct: https://github.com/ash-project/ash/blob/main/lib/ash/resource/dsl.ex#L127
PR is welcome 🙂
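For reference, the change being discussed is just a flag on the section definition in `lib/ash/resource/dsl.ex` (a fragment; the section's other fields stay as they are):

```elixir
@attributes %Spark.Dsl.Section{
  name: :attributes,
  # ...existing describe/entities/schema fields unchanged...
  patchable?: true
}
```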
Oh, this would be great! I think that having the ability to patch the main DSL from a project's or an extension's code will make the framework more extensible and powerful.
Can an extension like AshUUID be required project-wide, patching the DSL for all resources, instead of requiring that per resource?
I can prepare the PR tomorrow
It can't patch it across the whole project, no. I'd be hesitant to add that kind of thing also.
There are ways for users to accomplish it using the "base resource" pattern.
Ok, I understand the hesitation.
Maybe I could provide a base resource with the extension that users can decide whether to use? Would that make sense?
I wouldn't suggest providing a base resource, but I would give them an example of how they can add extensions everywhere using a base resource.
Since you can only have one base resource in this context, libraries shouldn't try to provide one, I think.
Ok!
Make resource's DSL attributes section patchable by moissela · Pull Request: sets `patchable?: true` on the attributes section of the resource's DSL.
Hm... making attributes patchable might work in that scenario (without side effects, I hope).
But they were not patchable before, and sections are not patchable by default. I assume that there is a reason for that. I have not looked deep into Spark, but maybe validators/transformers will not know how to handle fields that are unexpected to them, or something like that. If that is not the reason, then why are sections not patchable?
As for the macro approach, I would assume that autocomplete is not fixable, but supporting `do end` style can be done (it is a macro, after all). But requiring an author of a library to implement such behaviour does not seem user-friendly of Spark. Maybe there should be an official util that helps with collecting Spark-style options. Or maybe there is already a suitable method somewhere for that?

They are patchable, but it's still expected that what is returned is a valid entity for that section.
So as long as the entity that is patched in still targets `Ash.Resource.Attribute`.
We made them not patchable by default I guess kind of "just in case", honestly.
Like "I expect other extensions to provide custom constructors".
So how do you feel about allowing `Spark.Dsl.Patch.AddEntity` even for `patchable?: false` sections (meaning by default), as long as the addition's target matches one of the preexisting entities' targets?
I mean, I can open a PR for that. I think there is nothing specific about `attributes` that lets it be patchable in that sense while others can't.
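A hypothetical sketch of that check, as it might sit wherever Spark validates a patch before applying it (the function name and placement are my guesses, not Spark's actual code):

```elixir
# Allow an AddEntity patch when the section opts in via patchable?: true,
# or when the new entity builds the same target struct as an entity the
# section already declares.
defp add_entity_allowed?(
       %Spark.Dsl.Section{} = section,
       %Spark.Dsl.Patch.AddEntity{entity: entity}
     ) do
  section.patchable? or
    Enum.any?(section.entities, fn existing -> existing.target == entity.target end)
end
```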
Did open the PR: https://github.com/ash-project/spark/pull/53