AshCsv calculations update
I am drafting a short beginner tutorial for Ash, and figure that AshCsv + Livebook might be a good way to get minimal setup plus transparency into what has changed in "the database".
I have set up a simple calculation (of a weighed average), and was expecting that once it's been loaded (`Ash.Query.load([:weighed_average])`) the entries would be written to the CSV file. That is not happening.
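For context, a minimal sketch of the kind of setup being described. All module, attribute, and file names here are my own illustration, not from the thread; the DSL shape follows the Ash and AshCsv docs:

```elixir
# Hypothetical resource; names and weights are illustrative only.
defmodule Tutorial.Student do
  use Ash.Resource,
    domain: Tutorial,
    data_layer: AshCsv.DataLayer

  csv do
    file "students.csv"
    create? true
    header? true
    columns [:id, :score_a, :score_b]
  end

  attributes do
    uuid_primary_key :id
    attribute :score_a, :float, allow_nil?: false
    attribute :score_b, :float, allow_nil?: false
  end

  calculations do
    # Computed on every load; never persisted to the CSV.
    calculate :weighed_average, :float, expr(score_a * 0.7 + score_b * 0.3)
  end
end

# Loading the calculation computes it in memory only:
Tutorial.Student
|> Ash.Query.load([:weighed_average])
|> Ash.read!()
```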
My question is: what is the right way of getting calculated values back into the file? (Or is this bad practice?)
Calculations are specifically designed to be computed each time, not persisted
If you want to persist them, the idea is that you'd do something like this in your action:
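The code example wasn't captured in the log; a sketch of what "something like this" could mean, assuming a persisted `:stored_average` attribute and a functional change that computes the value at write time (names are my assumption, not from the thread):

```elixir
# Hypothetical action: compute the value in a change and write it
# into a real (persisted) attribute, instead of a calculation.
actions do
  create :create do
    accept [:score_a, :score_b]

    change fn changeset, _context ->
      a = Ash.Changeset.get_attribute(changeset, :score_a)
      b = Ash.Changeset.get_attribute(changeset, :score_b)

      # Persisted alongside the other columns in the CSV.
      Ash.Changeset.change_attribute(changeset, :stored_average, a * 0.7 + b * 0.3)
    end
  end
end
```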
We also have a thing called "atomic updates", which is not yet supported by `ash_csv` but could be supported in the same way we support it for ETS
then you could persist the value as the result of a calculation

Thank you Zach. I can see that calculating every time makes sense; I'll probably just show the "write to CSV" with a `for_create` + `create!`.
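For illustration, that `for_create` + `create!` write could look like this (resource and attribute names are hypothetical, carried over from the sketches above only as an assumption):

```elixir
# Hypothetical usage: build a create changeset and write the row to the CSV.
Tutorial.Student
|> Ash.Changeset.for_create(:create, %{score_a: 90.0, score_b: 70.0})
|> Ash.create!()
```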
It's odd that there's no AshCsv tutorial out there. After using it the first time, and seeing it do away with the boilerplate (open... atomize keys... convert to the right type... save) I had in every project, with every declaration cleanly in the same place, your whole manifesto finally hits 😛

You should write one! :p
I'm glad you're getting mileage out of it
Doing just that; will prob come back to #documentation for suggestions once the first public draft is ready.
Just FYI: we're switching everything very soon to be driven by hexdocs instead of ash_hq
Guides will still be mirrored to the site, and the global search will be available but it will link to hex
So if you want to PR your guide you can iterate on it in hex and see how it will look w/ `mix docs`
and you can link to DSL options with `d:AshCsv.DataLayer.csv.file`, for example
which will link to the appropriate place in the dsl file

Two maybe-not-intended behaviours with AshCsv. I'm not so sure, since AshCsv isn't so well documented. (The attached files, when placed in the same folder, should just run.)
1. Constraints are not validated when they are loaded from the CSV. Severus Snape's house in the CSV is "lionel messi", which does not match the `~r/lion|orca|ox|eagle/` constraint, but AshCsv happily accepts it.
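A sketch of the kind of constraint being discussed (the attribute name is my assumption). A `match` constraint on a `:string` attribute is checked when Ash casts input, but, as noted below, not by default when rows are read back from the CSV:

```elixir
attributes do
  # Validated when writing through Ash actions, but (by default)
  # not when loading existing rows from the CSV file.
  attribute :house, :string do
    constraints match: ~r/lion|orca|ox|eagle/
  end
end
```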
2. In AshCsv, `update!()` does create columns with the calculations, but only for the entries that did the update. This (single entry) then breaks the `read!()` (see last cell). I'm not sure if this is intended behaviour.

By default constraints are not applied on loading, because we assume the stored data is valid (i.e. written by us)
but especially for CSVs I could see a case where maybe that wouldn't make sense?
but what would you want it to do in invalid cases?
I'm not sure. I thought it would showcase AshCsv rejecting the input.
As for creating columns for calculations, that should definitely not happen and I'm not sure how it is happening
We could add an option that does that
but the entire read would fail
"invalid data in csv"
probably needs more to help debug the csv. I think most people's CSVs are of unknown provenance.
Yeah, I mean it could show the invalid row and all that
just pointing out that the entire read would likely have to break
Provided there is some guidance to how the csv should be sanitized, I think that should be the expected behaviour?
Unlike a Postgres database that you choose to load (and probably crafted in the first place), CSVs could come from... just about anywhere
Yeah, that makes sense to me
While I catch you :). I'm aware that the sqlite adapter is in the works. I think it might be a cool showcase to bring, say, a CSV into ETS using Ash (suggesting the same can happen for other data layers). Is "multiple data layers" something that is possible?
(and if so, what is a place where I can find the doc for the correct way to do that?)
`Ash.DataLayer.Multi` will come someday
probably not very soon
but it will allow you to stack and configure multiple data layers

Let me chip away at what is currently possible.