
llama-3.3-70b-instruct-fp8-fast is reliable about 70% of the time, but will occasionally throw in an invalid bracket here and there. Any other models people are using for this? (Yes, I know there's function calling as well.)
Try response_format, since it is possible to access Cloudflare's models via the OpenAI-compatible API.
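For reference, a minimal sketch of that approach, hitting the Workers AI OpenAI-compatible chat-completions endpoint with plain fetch. The account ID, API token, model name, and schema are all placeholders or assumptions; check the Workers AI docs for the exact response_format shape your model supports.

```typescript
// Build the chat-completions payload with a JSON-schema response_format.
function buildJsonModeRequest(prompt: string, schema: object) {
  return {
    model: "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
    messages: [{ role: "user", content: prompt }],
    response_format: { type: "json_schema", json_schema: schema },
  };
}

// Example schema: the model must reply with a { "city": string } object.
const citySchema = {
  type: "object",
  properties: { city: { type: "string" } },
  required: ["city"],
};

// Hypothetical helper; accountId and apiToken come from your Cloudflare dashboard.
async function askStructured(accountId: string, apiToken: string, prompt: string) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/ai/v1/chat/completions`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(buildJsonModeRequest(prompt, citySchema)),
    },
  );
  const data: any = await res.json();
  // With response_format set, the content should parse as schema-conforming JSON.
  return JSON.parse(data.choices[0].message.content);
}
```

Since the endpoint speaks the OpenAI wire format, the official openai SDK with a custom baseURL works here too.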

import { ChatMistralAI } from "@langchain/mistralai";
router.get('/mistral', async req => {
  const model = new ChatMistralAI({
    model: "mistral-large-latest",
    temperature: 0,
    apiKey: <MISTRAL-API-KEY>
  });
  // Bind the JSON schema so the model is forced to return structured output.
  const structuredLlm = model.withStructuredOutput(<JSON-SCHEMA-HERE>);
  return await structuredLlm.invoke(<YOUR-PROMPT-HERE>);
});