`Spark` templates
I have a number of resource templates that are defined as `__using__` macros, primarily for the sake of ergonomics and readability. As some of these have begun to solidify, I have converted them to Spark extensions.

What I'm wondering is if we can have the best of both worlds. For example, by allowing extensions to provide a Spark options schema and selecting a syntax to walk in `Spark.Dsl.Fragment` (or a new `Spark.Dsl.Template`) in order to inject options into the fragment/template.
This would be even more interesting if extensions were able to inject other extensions with options taken from their own DSL, but this may be a harder hill to climb.
Would the compile-time checks and error reporting be worth the effort? Are there any big impediments to allowing extensions to take options, even without requiring that they be specified in the DSL? Is this already possible?
A very boring example:
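Something in this direction, where all of the names (`Spark.Dsl.Template`, `template_option/1`, the option keys) are hypothetical and only sketch the shape:

```elixir
# Hypothetical: a template extension exposing a Spark options schema and
# interpolating the validated values into its DSL. None of this API exists.
defmodule MyApp.Templates.SoftDelete do
  use Spark.Dsl.Template,
    options: [
      attribute_name: [type: :atom, default: :deleted_at]
    ]

  template do
    attributes do
      # the validated option value would be interpolated here
      attribute template_option(:attribute_name), :utc_datetime_usec
    end
  end
end

# Consumed with options supplied at `use` time:
defmodule MyApp.Ticket do
  use Ash.Resource,
    templates: [{MyApp.Templates.SoftDelete, attribute_name: :archived_at}]
end
```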
Related: https://discord.com/channels/711271361523351632/1019647368196534283/threads/1110677874198986852 (re: a callback to allow extensions to add other extensions)
I'm interested in ways to solve this problem, but it's also a very hard problem (one that multiple people have looked at tackling here). The basic issue with templating is that certain kinds of things (value substitution) are relatively easy, but there are also questions like how we merge values that conflict, that kind of thing.

While we don't currently have a way to configure things without a DSL, I'd argue that configuring extensions with a DSL is essentially the most idiomatic way to do things, and in your example you can do what you want with a transformer that adds an attribute.
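Roughly like this, as a sketch (check the exact `Ash.Resource.Builder` function names and options against the Ash version you're using):

```elixir
defmodule MyApp.Transformers.AddDeletedAt do
  # A transformer that injects an attribute into the resource's DSL state.
  # Sketch only: assumes Ash.Resource.Builder.add_new_attribute/4 is available.
  use Spark.Dsl.Transformer

  @impl true
  def transform(dsl_state) do
    Ash.Resource.Builder.add_new_attribute(
      dsl_state,
      :deleted_at,
      :utc_datetime_usec,
      allow_nil?: true
    )
  end
end
```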
I'm not saying we shouldn't do it, just that I haven't yet seen/heard of a proposal that would solve these issues. What I might suggest instead is to investigate the idea of a "builder DSL": essentially allowing any extension to have a declarative DSL for modifying different targets in a way that is not a declarative merge, but rather a set of instructions.
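To sketch the shape of that idea (imagined syntax, none of it exists today):

```elixir
# An imagined "builder DSL": a list of instructions applied to the target
# resource, rather than a declarative structure that has to be merged.
defmodule MyApp.Builders.SoftDelete do
  use Spark.Dsl.BuilderExtension # hypothetical module

  build do
    add_attribute :deleted_at, :utc_datetime_usec, allow_nil?: true
    remove_action :destroy
  end
end
```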
The above was just to exemplify the syntax that I was envisioning, but several of the resource templates that I'd want to use this for are hundreds of lines long. I'm not opposed to such a builder DSL, although I'd be keen to keep an eye on the potential for integrating it with `Spark.Dsl.Builder` in order to automatically tailor the DSL to the extension(s) for which it is building.
That said, although a builder DSL might be a boon for the legibility of things that would otherwise be written directly with `Spark.Dsl.Transformer`, `Ash.Resource.Builder`, etc., both methods remain quite far from the legibility of a template with interpolation. Rather than being an alternative, I'd see such a thing as complementary to templates.
In addition to what has been mentioned already, the gains in directness of workflow for converting an existing `Ash.Resource` (along with any extensions that it uses) to a `Spark.Dsl.Template` lend it significant draw.
Adding to the complexity issue, an ideal solution would go beyond simple value interpolation and involve some selection of conditionals. Assuming the conditionals can only be based on provided options, we could probably figure out a way to walk these as well. I think we can most likely draw the line there, though, and anything more complex than that can be represented as some combination of template and transformer (as a builder DSL or otherwise). Interpolation alone would already hit a lot of cases, but it's something to keep in mind.
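Something like this imagined syntax is the kind of thing I have in mind (purely hypothetical, same caveats as above):

```elixir
template do
  # conditionals keyed only off the validated template options
  if template_option(:timestamps?) do
    attributes do
      create_timestamp :inserted_at
      update_timestamp :updated_at
    end
  end
end
```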
I'd be interested to hear more about what you're describing re: merging conflicting values etc., and of course any of the prior art/proposals that you mentioned, if you happen to have links to them.

Templates are hard for strings of text, let alone structured macros. Merging is an issue if you have something like:
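Say, an AshJsonApi-style routes section (the details here are a guess at the kind of thing meant):

```elixir
json_api do
  routes do
    base "/tickets"

    # defaults to route: "/"
    index :read
  end
end
```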
and you include a template that has a different type and an index with a matching route but a different configuration. The DSL isn't aware that `index :read` conflicts with `index :foo`, even though under the hood they will both get `route: "/"`, and so they don't really make sense to put together. The DSL can't properly merge or warn about those.
Conditionals in the templates could potentially solve for that, but that is much more complicated than it sounds. Templating text is easy by comparison, because all of our option builders and what-not in the DSL would need to be adjusted to handle lazily-computed values, or we'd need to copy essentially all of the DSL code into a new thing that stores its structure unvalidated and then fills it in with template variables. That would take a huge amount of work.

So my thought was that the templates could be filled in prior to DSL validation, or even before merging fragments etc. Essentially, the template is populated first based on the provided options, and then goes into the usual chain of processing. Could this approach solve some of the issues?
Not really, the essential issue is that we define macros for all of the DSL items, and we'd need to define a non-validating version of each
And then a merger, and then a validator
We do validation live as the macros are called
So it's not like a multiphase build/validate process
Gotcha, I was hoping we could do a prepass to evaluate conditionals and perform interpolation.
We could potentially if we put it all inside of a macro block, but honestly it's just a macro at that point
`defmacro` is all you'd need.

Yeah, agreed; the earlier it happens, the more this just becomes sugar for defmacro and unquote with Spark.Options.validate, which is what I'm already doing.
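For reference, the shape of that pattern (illustrative names; assumes the options are literal keywords so they can be validated at expansion time):

```elixir
defmodule MyApp.Templates.Timestamps do
  @schema [
    created_at: [type: :atom, default: :inserted_at],
    updated_at: [type: :atom, default: :updated_at]
  ]

  defmacro __using__(opts) do
    # validate the template's options at compile time
    opts = Spark.Options.validate!(opts, @schema)

    quote do
      attributes do
        create_timestamp unquote(opts[:created_at])
        update_timestamp unquote(opts[:updated_at])
      end
    end
  end
end
```

Invoked from a resource (after `use Ash.Resource`) as `use MyApp.Templates.Timestamps, created_at: :created_at`.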