Mixtral uses the Mistral tokenizer.
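So if you need to count tokens for Mixtral prompts, you can reuse a Mistral tokenizer. A minimal sketch, assuming the `@xenova/transformers` package is available; the model ID below is illustrative (the official `mistralai` repos are gated, so you may need a mirror):

```ts
import { AutoTokenizer } from "@xenova/transformers";

// Mixtral shares Mistral's tokenizer, so token counts computed with a
// Mistral tokenizer also apply to Mixtral prompts.
const tokenizer = await AutoTokenizer.from_pretrained(
  "mistralai/Mistral-7B-Instruct-v0.1" // illustrative model ID, not verified
);
const ids = tokenizer.encode("How many tokens is this prompt?");
console.log(ids.length);
```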


AiError: AiError: Invalid or incomplete input for the model: model returned: Invalid input

Related: 3028 when using LoRA.

```ts
app.get("/transcribe", async (c: Context) => {
  // Fetch a sample WAV file and base64-encode it for the model input.
  const res = await fetch(
    "https://github.com/Azure-Samples/cognitive-services-speech-sdk/raw/master/samples/cpp/windows/console/samples/enrollment_audio_katie.wav"
  );
  const blob = await res.arrayBuffer();
  const input = {
    audio: Buffer.from(blob).toString("base64"),
  };
  // Debug logging to confirm the AI binding is present.
  console.log("AI");
  console.log(c.env);
  console.log(c.env.AI);
  const response = await c.env.AI.run("@cf/openai/whisper", input);
  return Response.json({ input: { audio: [] }, response });
});
```
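Possibly unrelated to the error itself, but the Workers AI examples I've seen for `@cf/openai/whisper` pass the audio as an array of 8-bit integers rather than a base64 string. A sketch of the same route with that input shape (the `/transcribe-bytes` path is just for illustration):

```ts
app.get("/transcribe-bytes", async (c: Context) => {
  const res = await fetch(
    "https://github.com/Azure-Samples/cognitive-services-speech-sdk/raw/master/samples/cpp/windows/console/samples/enrollment_audio_katie.wav"
  );
  const blob = await res.arrayBuffer();
  // Byte array instead of a base64 string.
  const input = { audio: [...new Uint8Array(blob)] };
  const response = await c.env.AI.run("@cf/openai/whisper", input);
  return Response.json({ response });
});
```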
Try changing `image: imageBytes` to `image: [...imageBytes]` in the handler below, so the `Uint8Array` is spread into a plain array of numbers before it reaches the model.
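In isolation the change looks like this (a fragment of the handler below, where `imageData` is the fetched `ArrayBuffer`):

```ts
const imageBytes = new Uint8Array(imageData);
const response = await c.env.AI.run('@cf/meta/llama-3.2-11b-vision-instruct', {
  prompt: 'Create a concise alt text for this image. Use simple language and keep it under 160 characters.',
  // Spread the Uint8Array into a plain number[].
  image: [...imageBytes],
});
```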
I also tried `npx wrangler dev --remote`, where I thus far haven't been able to reproduce it. Digging into the unsanitized logs (run with `WRANGLER_LOG_SANITIZE=false`), I noticed that the error coincides with a 401 Unauthorized response from upstream. `Ai._parseError` is unable to interpret the format of the error response, leading to "unknown: unknown". (Looks like there's a recent commit that would fix this specific part.)

The handler in question:
```ts
app.post(
  '/get-alt',
  zValidator('form', getAltSchema),
  async (c) => {
    const { image } = c.req.valid('form')
    try {
      let imageData: ArrayBuffer;
      if (image instanceof File) {
        // Uploaded file: read its bytes directly.
        imageData = await image.arrayBuffer();
      } else {
        // Otherwise treat the form field as a URL and fetch it.
        const response = await fetch(image.toString());
        if (!response.ok) {
          return c.json({ error: 'Failed to fetch image from URL' }, 400);
        }
        imageData = await response.arrayBuffer();
      }
      const imageBytes = new Uint8Array(imageData);
      try {
        const response = await c.env.AI.run('@cf/meta/llama-3.2-11b-vision-instruct', {
          prompt: 'Create a concise alt text for this image. Use simple language and keep it under 160 characters.',
          image: imageBytes
        });
        return c.json(response);
      } catch (error) {
        console.error('AI processing error:', error);
        return c.json({ error: 'Failed to process image with AI model' }, 500);
      }
    } catch (error) {
      console.error('Image processing error:', error);
      return c.json({ error: 'Failed to process image data' }, 500);
    }
  }
)
```

```
AI processing error: InferenceUpstreamError: undefined: undefined
    at Ai._parseError (cloudflare-internal:ai-api:81:20)
    at async Ai.run (cloudflare-internal:ai-api:61:23)
    at null.<
```
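I don't have the `cloudflare-internal:ai-api` source in front of me, so purely as a hypothetical illustration of how a message like this could arise: if the upstream 401 body isn't the JSON shape the parser expects, the code/message fields come back `undefined` and the formatted message degrades to "undefined: undefined". All names and shapes below are assumptions, not the real internals:

```ts
// Hypothetical illustration only, NOT the actual cloudflare-internal code.
interface UpstreamErrorBody {
  errors?: { code?: number; message?: string }[];
}

async function parseErrorLike(res: Response): Promise<Error> {
  let body: UpstreamErrorBody | undefined;
  try {
    body = await res.json();
  } catch {
    // Non-JSON body (e.g. an HTML 401 page): leave `body` undefined.
  }
  const first = body?.errors?.[0];
  // When `first` is undefined, this renders literally as "undefined: undefined".
  return new Error(`${first?.code}: ${first?.message}`);
}
```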