Is there a list of text models on Workers AI that support JSON mode, if any?
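In case it helps frame the question, this is roughly the request shape I'd expect for JSON mode, going by the JSON Mode docs. The response_format layout and whether @cf/meta/llama-3-8b-instruct actually supports it are assumptions on my part, so treat this as a sketch rather than a confirmed example:

import got from 'got';

// Sketch of a JSON-mode request to the REST API. The response_format shape and
// the model id are assumptions; check the model's listed capabilities first.
const result = await got.post(
  `https://api.cloudflare.com/client/v4/accounts/${Env.CLOUDFLARE_ACCOUNT_ID}/ai/run/@cf/meta/llama-3-8b-instruct`,
  {
    headers: { Authorization: `Bearer ${Env.CLOUDFLARE_WORKERS_AI_KEY}` },
    json: {
      messages: [
        { role: 'system', content: 'Answer only with JSON matching the schema.' },
        { role: 'user', content: 'Order #123: 2 widgets, ship to Berlin.' }
      ],
      // Constrains the output to a JSON Schema instead of free-form text.
      response_format: {
        type: 'json_schema',
        json_schema: {
          type: 'object',
          properties: {
            order_id: { type: 'string' },
            quantity: { type: 'number' },
            destination: { type: 'string' }
          },
          required: ['order_id', 'quantity', 'destination']
        }
      }
    }
  }
).json();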

I'm calling llama-3-8b-instruct with the REST API. It works somewhat, but it sometimes feels like it's trying to chat, remembering context from previous HTTP calls. Has anyone run into similar issues? I've been refining the prompts using system, user and assistant roles.

On the finetuning side, I can't get a gemma-7b-it finetune accepted at all ('file' should be of valid safetensors type), and while I can successfully upload and run inference with mistral/mistral-7b-instruct-v0.2-lora, the results are worse than the vanilla model. The adapter targets q_proj and v_proj; I finetuned mistralai/mistral-7b-instruct-v0.2 with the AutoTrain_LLM notebook, then call it with cURL at https://api.cloudflare.com/client/v4/accounts/${account_id}/ai/run/@cf/mistral/mistral-7b-instruct-v0.2-lora.

In TypeScript I also hit: Argument of type '"@hf/thebloke/deepseek-coder-6.7b-instruct-awq"' is not assignable to parameter of type 'BaseAiImageToTextModels'. typescript (2769) [84, 37]. For now I'm suppressing it with // @ts-expect-error.

Here is the prompt and request I'm sending:

const systemContent = `You are a knowledgeable employee familiar with the company ${companyName}, responding to customer inquiries. Follow these guidelines:
- Answer in the same language as the question.
- Do not reveal your identity.
- If you don't know the answer, admit it without making anything up.
- Maintain a neutral tone.
- Do not provide opinions or personal views.
- Avoid asking for feedback.
- Keep the conversation strictly to the point; do not engage in small talk or recommendations.
- Do not apologize.
- Do not initiate or continue small talk.
- Do not use phrases like "I'm sorry" or "I apologize."`;
import got from 'got';

// Single chat-completion style request to the Workers AI REST API.
const response = await got.post(
  `https://api.cloudflare.com/client/v4/accounts/${Env.CLOUDFLARE_ACCOUNT_ID}/ai/run/${model}`,
  {
    headers: { Authorization: `Bearer ${Env.CLOUDFLARE_WORKERS_AI_KEY}` },
    json: {
      max_tokens: 350,
      messages: [
        { role: 'system', content: systemContent },
        { role: 'user', content: `Question: ${question}` },
        { role: 'assistant', content: context }
      ],
      temperature: 0.5
    }
  }
).json();
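On the "remembering context" point: as far as I can tell, each /ai/run call is stateless, so the model only sees whatever is in messages for that request. If it feels chatty between calls, it may help to replay the earlier turns explicitly. A rough sketch under that assumption; the history array, buildMessages, and ask are my own names, not part of any SDK, and result.response is the field I see in text-generation responses (worth verifying):

// Each request must carry the full conversation; nothing is stored server-side.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

const history: ChatMessage[] = [];

function buildMessages(question: string): ChatMessage[] {
  return [
    { role: 'system', content: systemContent },
    ...history, // earlier turns, replayed verbatim
    { role: 'user', content: `Question: ${question}` }
  ];
}

async function ask(question: string): Promise<string> {
  const body = await got.post(
    `https://api.cloudflare.com/client/v4/accounts/${Env.CLOUDFLARE_ACCOUNT_ID}/ai/run/${model}`,
    {
      headers: { Authorization: `Bearer ${Env.CLOUDFLARE_WORKERS_AI_KEY}` },
      json: { messages: buildMessages(question), max_tokens: 350, temperature: 0.5 }
    }
  ).json<{ result: { response: string } }>();

  // Keep the transcript locally so the next call can include it.
  history.push({ role: 'user', content: `Question: ${question}` });
  history.push({ role: 'assistant', content: body.result.response });
  return body.result.response;
}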
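And for the TypeScript 2769 error mentioned above, the suppression I'm referring to looks like this when calling the Workers AI binding inside a Worker (a workaround, not a fix; the generated types just don't list that model id under the text-generation overload):

// Workaround: the model works at runtime, but the generated Ai types don't
// include it for text generation, so the overload check is suppressed.
// @ts-expect-error -- model id not present in the expected model union
const answer = await env.AI.run('@hf/thebloke/deepseek-coder-6.7b-instruct-awq', {
  messages: [{ role: 'user', content: 'Write a function that reverses a string.' }]
});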