Has anyone successfully setup tool call streaming with any of the function calling models? I can get tool calling to work when it's not streaming.
{" or <function (it was I think) it's not a function call, else it is one. In theory @cloudflare/ai-utils can do that, but it calls the model twice (once without streaming and in case it does not call a tool - again with streaming), so I didn't want to use itgemini-3n and chatterbox-tts? is there any place to request and vote?/ai/run/@cf/meta/llama-3.2-11b-vision-instruct with LORA > 



FAIL tests/index.test.ts [ tests/index.test.ts ]
SyntaxError: Unexpected token ':'
❯ Users/advany/Documents/GitHub/generator-agent/node_modules/ajv/lib/definition_schema.js?mf_vitest_no_cjs_esm_shim:3:18
❯ Users/advany/Documents/GitHub/generator-agent/node_modules/ajv/lib/keyword.js?mf_vitest_no_cjs_esm_shim:5:24
❯ Users/advany/Documents/GitHub/generator-agent/node_modules/ajv/lib/ajv.js?mf_vitest_no_cjs_esm_shim:29:21

Worked around it by aliasing ajv to its prebuilt bundle in the Vitest config:

resolve: {
  alias: {
    ajv: 'ajv/dist/ajv.min.js',
  },
},
optimizeDeps: {
  include: ['ajv'],
},

"errors": [{
"message": "AiError: AiError: Lora not compatible with model. (c8afa8e1-48d6-42d4-871d-b04bce9b4c67)",
"code": 3030
}]const aiResponse = await env.AI.run(
  '@cf/meta/llama-3.1-8b-instruct',
  {
    messages: [
      {
        role: 'system',
        content: 'You are a helpful assistant that transforms text into different tones and styles.'
      },
      {
        role: 'user',
        content: fullPrompt
      }
    ],
    max_tokens: 2048
  },
  {
    gateway: {
      skipCache: false, // Enable caching (default behavior)
      cacheTtl: 86400   // Cache for 24 hours (24 * 60 * 60 seconds)
    }
  }
);{"<function{"errors":[{"message":"Error: Network connection lost.","code":6001}],"success":false,"result":{},"messages":[]}curl https://api.cloudflare.com/client/v4/accounts/.../ai/run/@cf/openai/whisper \
-X POST \
-H "Authorization: Bearer ..." \
--data-binary "@media.opus"
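From inside a Worker, the same Whisper call goes through the AI binding instead of the REST endpoint. A minimal sketch; the `transcribe` wrapper and the stripped-down `Ai` interface are stand-ins for illustration, not the real binding types:

```typescript
// Minimal stand-in for the Workers AI binding shape used below.
interface Ai {
  run(model: string, inputs: Record<string, unknown>): Promise<{ text?: string }>;
}

// @cf/openai/whisper takes the raw audio bytes as an array of
// unsigned 8-bit integers (the equivalent of curl's --data-binary body).
async function transcribe(ai: Ai, audio: ArrayBuffer): Promise<string> {
  const result = await ai.run('@cf/openai/whisper', {
    audio: [...new Uint8Array(audio)],
  });
  return result.text ?? '';
}
```

In a real Worker you would pass `env.AI` as the first argument and the uploaded file's bytes as the second.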