Hi @samjs any updates on this? Sorry to mention you directly, but saw that you created threads for other questions.
Hey @Ana! Thanks for the ping, I've been catching up on threads. Let me see if I can repro this on my side.
So I see you found the fix for the incorrect schema (function tools need to use the nested `{ "type": "function", "function": { ... } }` form).
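For reference, a complete tool definition in that shape looks roughly like this (the `get_weather` tool here is just a made-up example, not one from your setup):

```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the current weather for a given city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": { "type": "string", "description": "Name of the city" }
      },
      "required": ["city"]
    }
  }
}
```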
For the other issue, are you using it with one or multiple tool calls? It seems like some of the models might struggle with too many tools: https://discord.com/channels/595317990191398933/1414659501029855303/1415639390071554191
For models that should support function calling we generally test it with a payload like:
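(Something along these lines; the user prompt and the `get_weather` tool are placeholders rather than the exact payload we use.)

```json
{
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "What is the weather like in London right now?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": { "type": "string", "description": "Name of the city" }
          },
          "required": ["city"]
        }
      }
    }
  ]
}
```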
and verify that we get a tool call back. If you have any examples of model + prompt pairs that fail, I can investigate those for you.
It may also be worth trying the OpenAI models, e.g. `gpt-oss-20b/120b`? Those tend to do pretty well with tool calling.

Thank you for your reply! I was able to test with your payload and function calling is working properly with other models! I realized the models have been sending my tool calls as raw JSON in the `response` field instead of the `tool_calls` field. Most likely because my system-role prompt is quite big. Still funny that `llama-4-scout-17b-16e-instruct` fails at this while `hermes-2-pro-mistral-7b` works super well...
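In other words, instead of getting a structured entry in `tool_calls`, the call comes back serialized into the text output, roughly like this (shape simplified for illustration, reusing the placeholder `get_weather` tool):

```json
{
  "response": "{ \"name\": \"get_weather\", \"arguments\": { \"city\": \"London\" } }",
  "tool_calls": []
}
```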
Thanks for the OpenAI models suggestion! `gpt-oss-20b/120b` does not have function calling.

I believe that it does 🙂