Ash Framework · 8mo ago · 26 replies
pikdum

Having some trouble with a bulk update

I have something like:

    update :backfill_code do
      change fn changeset, context ->
        Ash.Changeset.before_action(changeset, fn changeset ->
          id = Ash.Changeset.get_attribute(changeset, :id)
          guide_id = Ash.Changeset.get_attribute(changeset, :guide_id)

          # Collect the codes already used by the other records in the same guide,
          # so the generator can avoid collisions.
          # Ash.Query.filter/2 is a macro, so the enclosing module needs `require Ash.Query`.
          existing_codes =
            Foobar.Baz
            |> Ash.Query.filter(guide_id == ^guide_id and id != ^id)
            |> Ash.Query.select([:code])
            |> Ash.read!(actor: context.actor)
            |> Enum.map(& &1.code)

          Ash.Changeset.change_attribute(
            changeset,
            :code,
            Foobar.Utils.CodeGenerator.generate_unique_code(id, existing_codes)
          )
        end)
      end

      # The before_action hook can't be expressed as an atomic update.
      require_atomic? false
    end


    Ash.bulk_update(Foobar.Baz, :backfill_code, %{},
      actor: actor,
      resource: Foobar.Baz,
      strategy: [:stream],
      return_errors?: true
    )


For some context, generate_unique_code/2 generates 4-character codes, and existing_codes is the list of all codes already in use, so the function can retry until it finds a code that won't hit a unique constraint error on save.
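
The real Foobar.Utils.CodeGenerator isn't shown in the thread; a minimal sketch of what generate_unique_code/2 could look like under those assumptions (the alphabet and retry strategy here are made up for illustration):

    defmodule Foobar.Utils.CodeGenerator do
      # Hypothetical sketch of the generator referenced above; the real module isn't
      # shown in the thread, so the alphabet and retry strategy are assumptions.
      @alphabet String.graphemes("ABCDEFGHJKLMNPQRSTUVWXYZ23456789")

      def generate_unique_code(_id, existing_codes) do
        # Keep drawing random 4-character codes until one isn't already taken.
        # (As a sketch, this loops forever if every possible code is in use.)
        Stream.repeatedly(fn ->
          Enum.map_join(1..4, fn _ -> Enum.random(@alphabet) end)
        end)
        |> Enum.find(&(&1 not in existing_codes))
      end
    end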

This works, but only when I run it one record at a time. In a bulk update, each changeset queries for existing codes before the earlier records in the batch have been written, so there's a race condition and I hit the unique constraint error.

I was thinking of setting batch_size to 1, but that gives me:

    ** (Spark.Options.ValidationError) invalid value for :batch_size option: expected integer, got: nil

Any ideas on how I can make this work? Or maybe my approach here is fundamentally flawed?
Solution
I just pushed a fix for setting batch_size to main.
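
With that fix on main, the batch_size override the original poster was reaching for would presumably look something like this (batch_size: 1 forces each record through its own before_action, so every lookup sees the codes already written):

    Ash.bulk_update(Foobar.Baz, :backfill_code, %{},
      actor: actor,
      strategy: [:stream],
      batch_size: 1,
      return_errors?: true
    )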