editing it to make it resilient against the inevitable bug fix
const buffer = await new Response(resp).arrayBuffer();
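That line is a common way to buffer the ReadableStream coming back from the AI binding into an ArrayBuffer. A minimal sketch of where it usually sits, assuming resp is the stream returned by a text-to-image model such as @cf/stabilityai/stable-diffusion-xl-base-1.0 (the model name, prompt, and env shape here are only illustrative):

```ts
export default {
  async fetch(_request: Request, env: { AI: Ai }): Promise<Response> {
    // Illustrative model and prompt; any model that returns a ReadableStream
    // of image bytes should behave the same way here.
    const resp = await env.AI.run("@cf/stabilityai/stable-diffusion-xl-base-1.0", {
      prompt: "a watercolor fox",
    });

    // Wrapping the stream in a Response is a simple way to collect it
    // into a single ArrayBuffer before returning it.
    const buffer = await new Response(resp).arrayBuffer();

    return new Response(buffer, { headers: { "content-type": "image/png" } });
  },
};
```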


@cf/qwen/qwen1.5-14b-chat-awq still isn't working with functions, is there any ETA on this?
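For comparison, a minimal embedded function calling setup with runWithTools from @cloudflare/ai-utils looks roughly like the sketch below. The getWeather tool is just a placeholder, and @hf/nousresearch/hermes-2-pro-mistral-7b is one of the models documented to support function calling, so swapping the qwen model in and out of this can help tell whether the failure comes from the model or from the tool definition itself.

```ts
import { runWithTools } from "@cloudflare/ai-utils";

export default {
  async fetch(_request: Request, env: { AI: Ai }): Promise<Response> {
    const answer = await runWithTools(
      env.AI,
      // The qwen model from the question above can be swapped in here to test it.
      "@hf/nousresearch/hermes-2-pro-mistral-7b",
      {
        messages: [{ role: "user", content: "What is the weather in Lisbon?" }],
        tools: [
          {
            name: "getWeather",
            description: "Return the current weather for a city",
            parameters: {
              type: "object",
              properties: {
                city: { type: "string", description: "Name of the city" },
              },
              required: ["city"],
            },
            // Placeholder implementation; a real Worker would call a weather API here.
            function: async ({ city }) => `It is sunny in ${city}.`,
          },
        ],
      },
    );

    return Response.json(answer);
  },
};
```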

Error in runWithTools: Error in runAndProcessToolCall: Unknown internal error

Is there pricing information for llama-3.1-8b-instruct-awq? I could not find this information. I am particularly interested in knowing whether that one is significantly cheaper than llama-3.1-8b-instruct-fp8. It seems that overall the neuron cost table is not up to date with the model list. And do you know whether the pricing will stay different between the awq and the fp8 once they are out of beta? Just to know whether it is worth trying to stick with the smaller-weight one during development.

I am on 4.20240909.0, the current most recent version of workers-types. However, I only started getting the error when I migrated from using the now deprecated @cloudflare/ai package to using await env.AI.run(…) directly. It seems like the @cloudflare/ai types allow me to use one of the BaseAiTextGenerationModels, whereas the types for the direct AI binding (which use function overloading to cover all possible types of AI generation) are, in my case, forcing the model argument to be of type BaseAiImageToTextModels. I am using TypeScript v5.6.2. The error is:

Argument of type '"@cf/meta/llama-3-8b-instruct" | "@hf/thebloke/zephyr-7b-beta-awq" | "@hf/nousresearch/hermes-2-pro-mistral-7b" | "@hf/nexusflow/starling-lm-7b-beta" | "@cf/defog/sqlcoder-7b-2" | "@cf/openchat/openchat-3.5-0106"' is not assignable to parameter of type 'BaseAiImageToTextModels'.
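Not an official fix, but one workaround that may be worth trying while the types get sorted out: annotate (or cast) the model value with the full BaseAiTextGenerationModels union instead of a narrower string union. That steers TypeScript toward the text generation overload of env.AI.run, and if one of your model names has dropped out of that union in the newer workers-types, the annotation will flag it directly. A rough sketch, assuming BaseAiTextGenerationModels and Ai are the ambient types from @cloudflare/workers-types and using a model name taken from the error message above:

```ts
// Rough sketch of a possible workaround, not an official fix.
const model: BaseAiTextGenerationModels = "@cf/meta/llama-3-8b-instruct";

export default {
  async fetch(_request: Request, env: { AI: Ai }): Promise<Response> {
    // With the argument widened to the full text generation union, overload
    // resolution should pick the text generation signature of env.AI.run
    // rather than the image-to-text one.
    const result = await env.AI.run(model, {
      messages: [{ role: "user", content: "Say hello." }],
    });
    return Response.json(result);
  },
};
```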