Having some trouble with a bulk update

I have something like:
update :backfill_code do
  change fn changeset, context ->
    Ash.Changeset.before_action(changeset, fn changeset ->
      id = Ash.Changeset.get_attribute(changeset, :id)
      guide_id = Ash.Changeset.get_attribute(changeset, :guide_id)

      existing_codes =
        Foobar.Baz
        |> Ash.Query.filter(guide_id == ^guide_id and id != ^id)
        |> Ash.Query.select([:code])
        |> Ash.read!(actor: context.actor)
        |> Enum.map(& &1.code)

      Ash.Changeset.change_attribute(
        changeset,
        :code,
        Foobar.Utils.CodeGenerator.generate_unique_code(id, existing_codes)
      )
    end)
  end

  require_atomic? false
end
Ash.bulk_update(Foobar.Baz, :backfill_code, %{},
  actor: actor,
  resource: Foobar.Baz,
  strategy: [:stream],
  return_errors?: true
)
For some context, I'm generating 4-character codes in generate_unique_code/2, and existing_codes is a list of all existing ones so the function can retry and won't hit a unique constraint error on save. This works, but only when I run it one record at a time. In a bulk update, the query for existing codes runs before earlier records in the batch have been saved, giving a bit of a race condition and hitting that unique constraint error.

I was thinking maybe setting batch_size to 1, but that's getting me:

** (Spark.Options.ValidationError) invalid value for :batch_size option: expected integer, got: nil

Any ideas on how I can make this work? Or maybe my approach here is fundamentally flawed?
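(The internals of generate_unique_code/2 aren't shown in the thread; for reference, a minimal sketch of the "retry against existing codes" idea could look like the following. The base-36 hashing scheme is a hypothetical stand-in, not the actual Foobar.Utils.CodeGenerator.)

```elixir
defmodule CodeGenerator do
  # Hypothetical sketch: derive a 4-character base-36 code from the
  # record id, bumping an attempt counter until the generated code is
  # not in existing_codes.
  def generate_unique_code(id, existing_codes, attempt \\ 0) do
    code =
      {id, attempt}
      |> :erlang.phash2(36 * 36 * 36 * 36)
      |> Integer.to_string(36)
      |> String.pad_leading(4, "0")

    if code in existing_codes do
      generate_unique_code(id, existing_codes, attempt + 1)
    else
      code
    end
  end
end
```

Note that any scheme of this shape only avoids codes that are actually passed in via existing_codes, which is exactly why unsaved rows in the same batch can still collide.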
Solution:
I just pushed a fix to setting batch_size to main
ZachDaniel•4mo ago
šŸ¤” the batch size issue definitely sounds like a bug šŸ¤”
pikdumOP•4mo ago
let me see if i still get that on the latest versions
ZachDaniel•4mo ago
Something you can also do is define a custom change module and define batch_change
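(A sketch of what that suggestion could look like, assuming the optional batch_change/3 callback on Ash.Resource.Change; the module name and the per-guide scoping are hypothetical, and this is untested. Because batch_change receives every changeset in the batch at once, codes generated for earlier changesets can be excluded for later ones, closing the race within a batch.)

```elixir
defmodule Foobar.Changes.BackfillCode do
  use Ash.Resource.Change
  require Ash.Query

  @impl true
  def batch_change(changesets, _opts, context) do
    # One read for the whole batch instead of one per record.
    # (Simplified: the real version would scope by guide_id as in
    # the original before_action query.)
    existing =
      Foobar.Baz
      |> Ash.Query.select([:code])
      |> Ash.read!(actor: context.actor)
      |> MapSet.new(& &1.code)

    # Thread an accumulator through the batch so codes generated for
    # earlier changesets are visible to later ones.
    {changesets, _taken} =
      Enum.map_reduce(changesets, existing, fn changeset, taken ->
        id = Ash.Changeset.get_attribute(changeset, :id)

        code =
          Foobar.Utils.CodeGenerator.generate_unique_code(
            id,
            MapSet.to_list(taken)
          )

        {Ash.Changeset.change_attribute(changeset, :code, code),
         MapSet.put(taken, code)}
      end)

    changesets
  end
end
```

The action would then use `change Foobar.Changes.BackfillCode` instead of the anonymous function.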
pikdumOP•4mo ago
ah that could work
ZachDaniel•4mo ago
With that said, it actually should do that action one-by-one currently
pikdumOP•4mo ago
Ash.bulk_update(Foobar.Baz, :backfill_code, %{},
  actor: context.actor,
  resource: Foobar.Baz,
  strategy: [:stream],
  return_errors?: true,
  batch_size: 1
)
Can confirm, with the latest released versions I still get:
* ** (Spark.Options.ValidationError) invalid value for :batch_size option: expected integer, got: nil
ZachDaniel•4mo ago
What's the stacktrace? The function change should force it to happen one-by-one
pikdumOP•4mo ago
* ** (Spark.Options.ValidationError) invalid value for :batch_size option: expected integer, got: nil
(ash 3.5.15) lib/ash.ex:2517: anonymous fn/2 in Ash.StreamOpts.validate!/1
(elixir 1.17.1) lib/enum.ex:2531: Enum."-reduce/3-lists^foldl/2-0-"/3
(ash 3.5.15) lib/ash.ex:2517: Ash.StreamOpts.validate!/1
(ash 3.5.15) lib/ash.ex:2582: Ash.stream!/2
(ash 3.5.15) lib/ash/actions/update/bulk.ex:162: Ash.Actions.Update.Bulk.run/6
(foobar 0.1.0) lib/foobar/baz.ex:219: Foobar.Baz.run_0_generated_A59E54474A3889D70CD5386F328691B6/2
(ash 3.5.15) lib/ash/actions/action.ex:137: Ash.Actions.Action.run/3
(ash 3.5.15) lib/ash.ex:1894: Ash.run_action/2
(ash 3.5.15) lib/ash.ex:1848: Ash.run_action!/2
test/code_backfill_test.exs:81: Foobar.Baz.CodeBackfillTest."test :backfill_codes sets codes"/1
(ex_unit 1.17.1) lib/ex_unit/runner.ex:485: ExUnit.Runner.exec_test/2
(stdlib 6.0) timer.erl:590: :timer.tc/2
(ex_unit 1.17.1) lib/ex_unit/runner.ex:407: anonymous fn/6 in ExUnit.Runner.spawn_test_monitor/4
hm
ZachDaniel•4mo ago
ohhh
pikdumOP•4mo ago
That's when I try to set it to 1. When running normally, like in my first post, the bulk update completes but gives me a partial success: some rows hit the database unique constraint and errored, since the query for existing codes ran before previous ones were saved and got [nil] and similar
ZachDaniel•4mo ago
Pushed to main šŸ¤” yeah, okay you're right
Solution
ZachDaniel•4mo ago
I just pushed a fix to setting batch_size to main
pikdumOP•4mo ago
Let me give it a try. Can confirm with batch_size: 1 my tests are now passing, no more validation error. Should they still be failing without batch_size: 1, though?
ZachDaniel•4mo ago
Honestly, it's not something I had considered originally, but any self-referential update action would have to be done one-by-one as a batch for updates, and it doesn't work that way currently. Not sure it's something I'd "fix", though
pikdumOP•4mo ago
Makes sense, manually setting batch_size to 1 works for me here. Thanks!
