Having some trouble with a bulk update
I have something like:
For some context, I'm generating 4-character codes in `generate_unique_code/2`, and `existing_codes` is a list of all existing codes, so the function can retry and avoid hitting a unique constraint error on save.
This works, but only when I run it one by one. When I try it as a bulk update, the query for existing codes runs before the earlier records have been updated, giving a bit of a race condition and hitting that unique constraint error.
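For illustration, a rough sketch of the kind of update action described above, assuming Ash 3, a hypothetical `MyApp.Coupon` resource with a `:code` attribute, and an assumed argument order for `generate_unique_code/2`. This is not the original code:

```elixir
# Hypothetical sketch; resource, attribute, and argument order are assumptions.
update :assign_code do
  change fn changeset, _context ->
    # Read every existing code so the generator can avoid collisions.
    # In a bulk update this read runs before earlier rows in the batch
    # are saved, which is the race condition described above.
    existing_codes =
      MyApp.Coupon
      |> Ash.Query.select([:code])
      |> Ash.read!()
      |> Enum.map(& &1.code)

    code = generate_unique_code(4, existing_codes)
    Ash.Changeset.force_change_attribute(changeset, :code, code)
  end
end
```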
I was thinking of setting `batch_size` to 1, but that's getting me:
** (Spark.Options.ValidationError) invalid value for :batch_size option: expected integer, got: nil
Any ideas on how I can make this work? Or maybe my approach here is fundamentally flawed?
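A sketch of where the `:batch_size` option would be passed, assuming Ash 3's `Ash.bulk_update!/4` and the hypothetical resource and action names from the sketch above (the validation error quoted above turned out to be a bug in accepting this option, addressed later in the thread):

```elixir
# Hypothetical call site; MyApp.Coupon, :assign_code, and the filter are assumed.
require Ash.Query

MyApp.Coupon
|> Ash.Query.filter(is_nil(code))
|> Ash.bulk_update!(:assign_code, %{},
  # One record per batch means each change sees the codes saved by the
  # previous record, at the cost of one "existing codes" query per row.
  batch_size: 1,
  return_errors?: true
)
```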
the batch size issue definitely sounds like a bug
let me see if i still get that on the latest versions
Something you can also do is define a custom change module and define `batch_change`
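A minimal sketch of such a change module using `Ash.Resource.Change`'s `batch_change/3` callback. The resource, attribute, and code generator below are assumptions, not code from this thread:

```elixir
defmodule MyApp.Changes.GenerateUniqueCode do
  # Hypothetical module: MyApp.Coupon, :code, and the generator are assumed names.
  use Ash.Resource.Change

  @impl true
  def change(changeset, opts, context) do
    # A single changeset just reuses the batch logic.
    [changeset] = batch_change([changeset], opts, context)
    changeset
  end

  @impl true
  def batch_change(changesets, _opts, _context) do
    # Read the existing codes once per batch, then thread the set of taken
    # codes through the batch so records in the same batch can't collide
    # with each other or with rows already in the database.
    existing =
      MyApp.Coupon
      |> Ash.Query.select([:code])
      |> Ash.read!()
      |> MapSet.new(& &1.code)

    {changesets, _taken} =
      Enum.map_reduce(changesets, existing, fn changeset, taken ->
        code = generate_unique_code(4, taken)

        {Ash.Changeset.force_change_attribute(changeset, :code, code),
         MapSet.put(taken, code)}
      end)

    changesets
  end

  # Stand-in for the poster's generate_unique_code/2: retry until the
  # 4-character code is unused.
  defp generate_unique_code(length, taken) do
    code = for _ <- 1..length, into: "", do: <<Enum.random(?A..?Z)>>

    if MapSet.member?(taken, code) do
      generate_unique_code(length, taken)
    else
      code
    end
  end
end
```

It would be wired in with `change MyApp.Changes.GenerateUniqueCode` on the action; the batch-level read plus the threaded `taken` set is what avoids the intra-batch race.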
ah that could work
With that said, it actually should do that action one-by-one currently
can confirm, with the latest released versions i still get:
** (Spark.Options.ValidationError) invalid value for :batch_size option: expected integer, got: nil
what's the stacktrace?
the function change should force it to happen one-by-one
hm
ohhh
that's when i try to set it to 1
when running normally, like in my first post, the bulk update completes but gives me a partial success: some records hit the database unique constraint and errored, since the change queried existing codes before the previous ones were saved and got [nil] and similar
Pushed to `main`
yeah, okay you're right
I just pushed a fix for setting `batch_size` to `main`
let me give it a try
can confirm, with `batch_size: 1` my tests are now passing, no more validation error
should they still be failing without `batch_size: 1` though?
Honestly it's not something I had considered originally, but any self-referential update action would have to be done 1-by-1 as a batch
for updates
and it doesn't work that way currently
but not sure it's something I'd "fix"
makes sense, manually setting batch_size to 1 works for me here
thanks!