With @cloudflare/ai-utils and llama-3.3-70b-instruct-fp8-fast, the tool functions are executed (the calls show up in the console), but their results are not included in the response. The same happens with @cf/meta/llama-4-scout-17b-16e-instruct; the consumption, however, is reported correctly. With streamFinalResponse = true, AI.run is first executed without streaming, and then executed a second time with streaming.
The relevant check in @cloudflare/ai-utils appears to be `buffer.startsWith("<tool_call")` on the streamed output.
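The double-execution behavior described above can be sketched with a small mock. This is an assumption-laden simplification, not the actual @cloudflare/ai-utils implementation: `MockAi` and `runWithToolsSketch` are hypothetical names, and the point is only to show the reported flow where `streamFinalResponse = true` triggers a non-streaming `AI.run` first and then a second, streamed run.

```typescript
// Simplified sketch of the reported behavior (assumption: not the real library code).
type RunArgs = { messages: { role: string; content: string }[]; stream?: boolean };

class MockAi {
  calls: RunArgs[] = [];
  async run(_model: string, args: RunArgs): Promise<unknown> {
    this.calls.push(args);
    // Streamed calls return a stream; non-streamed calls return a plain result.
    return args.stream ? new ReadableStream() : { response: "tool pass result" };
  }
}

// Hypothetical helper mirroring the observed streamFinalResponse flow:
// first pass without streaming (where the tools run), then a second
// streamed pass for the final response.
async function runWithToolsSketch(
  ai: MockAi,
  model: string,
  messages: RunArgs["messages"],
  streamFinalResponse: boolean,
): Promise<unknown> {
  const first = await ai.run(model, { messages }); // non-streaming pass
  if (!streamFinalResponse) return first;
  return ai.run(model, { messages, stream: true }); // second pass, streamed
}
```

If the sketch matches what the library does, it would also explain the consumption being reported correctly while the response content is lost: the model genuinely runs (twice, with streaming enabled), so usage is billed even though the tool results never make it into the streamed output.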