
Has anyone seen `InferenceUpstreamError: ERROR 3001: Unknown internal error` when calling `@cf/baai/bge-large-en-v1.5`?

After upgrading `@cloudflare/ai` from 1.0.53 to 1.1.0, using the `stream: true` option when invoking `ai.run(…)` no longer streams in the response as would be expected. Instead, there are no messages for the most part until the very end, when suddenly the entirety of the response streams in at once (still broken into chunks, but in extremely rapid succession).

I'm also seeing this error:

```
Error: A Node.js API is used (DecompressionStream) which is not supported in the Edge Runtime. Learn more: https://nextjs.org/docs/api-reference/edge-runtime
```

I'm guessing there is some decompression going on under the hood with the AI interface?

If you pass `stream: true` to the `ai.run(…)` function, you would want to cast the result to `ReadableStream<any>`. It would be great for the `@cloudflare/ai` types to narrow the return type based on the received `stream` variable (`stream: true` means `ReadableStream<any>`, `stream: false` means `AiTextToImageOutput` or the like), which would remove the need for this workaround.

With Ollama, you can pass `format: "json"` in your request and it handles applying the grammar for you. See here for an example: https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion

Is the `strength` parameter for `@cf/runwayml/stable-diffusion-v1-5-img2img` clamped and/or quantized in some weird way? Strength 0.095 generates an output that's close to identical to the source (albeit degraded), while 0.095001 generates a noisy version of the source. Strength 0.0 seems to produce output as wildly different as if it were set to 1.0. (Values outside the range 0.0 to 1.0 throw `InferenceUpstreamError`.)

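Until the clamping behavior is clarified, a defensive option is to keep `strength` inside the range the API is reported to accept before the value ever reaches the model. A minimal sketch (the `clampStrength` helper is hypothetical, and the 0.05 floor comes from the inpainting report below, not from any documented limit):

```typescript
// Hypothetical helper, not part of @cloudflare/ai. Reported constraints:
// values outside 0.0 to 1.0 throw InferenceUpstreamError, and the inpainting
// model rejects strength values below 0.05 (except exactly 0).
function clampStrength(value: number, min = 0.05, max = 1.0): number {
  if (value === 0) return 0; // 0 is reportedly accepted as a special case
  return Math.min(max, Math.max(min, value));
}
```

With this, `clampStrength(1.5)` yields 1.0 and `clampStrength(0.01)` yields 0.05 before the request is sent.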
`@cf/runwayml/stable-diffusion-v1-5-inpainting` throws `InferenceUpstreamError` on `strength` values below 0.05 (except 0). I haven't tested this on version 1.1.0, though.

Even with the `format: 'json'` option, model responses aren't always valid JSON, and I couldn't find a satisfactory solution out there, so I made a `parseAsJSON` util that takes a string and parses it as JSON even if it has syntax errors, or too many closing curly braces, or not enough curly braces, or overly escaped characters, or unescaped characters, or pre- and post-ambles, or… It's published as `@acusti/parsing`. Here's the readme: https://github.com/acusti/uikit/tree/main/packages/parsing
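The package handles many more cases, but the core technique can be illustrated with a toy sketch of my own (this is not `@acusti/parsing`'s implementation): trim any pre/post-amble around the outermost curly braces, then add or drop closing braces until the string parses.

```typescript
// Toy lenient JSON parser for LLM output: NOT @acusti/parsing, just an
// illustration of the technique. Brace counting ignores braces inside
// strings, so this is far less robust than the real package.
function parseAsJSONish(text: string): unknown | null {
  const start = text.indexOf("{");
  if (start === -1) return null;
  const end = text.lastIndexOf("}");
  // Drop the preamble; keep up to the last "}" if one exists.
  let candidate = end > start ? text.slice(start, end + 1) : text.slice(start);
  for (let attempt = 0; attempt < 10; attempt++) {
    try {
      return JSON.parse(candidate);
    } catch {
      const opens = (candidate.match(/\{/g) ?? []).length;
      const closes = (candidate.match(/\}/g) ?? []).length;
      if (closes < opens) candidate += "}"; // not enough closing braces
      else if (closes > opens && candidate.endsWith("}")) candidate = candidate.slice(0, -1); // too many
      else return null; // balanced but still invalid: give up
    }
  }
  return null;
}
```

For example, it recovers `{"a": {"b": 1}}` from the truncated string `'{"a": {"b": 1'` and strips a "Sure! Here it is:" preamble before parsing.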
I'm running `wrangler dev`, and I get the following error when I try using the `@cf/facebook/bart-large-cnn` model:

```
Uncaught (async) Error: InferenceUpstreamError: {"success":false,"errors":[{"code":10000,"message":"Authentication error"}]}
```

If I run `wrangler dev --remote`, it does work, but I cannot use remote mode, as I also use Cloudflare Queues in my app together with Workers AI, and Queues do not work in remote mode. I'm on `"wrangler": "^3.36.0"` and `"@cloudflare/ai": "^1.1.0"`.

Whenever I hit an `Authentication error`, I run `wrangler login`. It surprises me how often I have to re-authenticate when doing local dev (once every few days). I've speculated that it might have to do with how often I'm connecting to a different wifi network and changing my IP address, but that might be totally wrong. Regardless, re-authing has always resolved it for me.

Calling the REST API directly from the browser is blocked by CORS:

```
Access to fetch at 'https://api.cloudflare.com/client/v4/accounts/.../ai/run/@cf/meta/llama-2-7b-chat-int8' from origin 'http://localhost:3100' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
```

Saving a generated image with axios:

```js
const url = `https://api.cloudflare.com/client/v4/accounts/d1a..../ai/run/@cf/bytedance/stable-diffusion-xl-lightning`;
const headers = {
  Authorization: `Bearer ${CFAI_TOKEN}`,
};
const response = await axios.post(url, {prompt}, {headers});
const fs = require("fs");
fs.writeFileSync("image.png", response.data);
```

The equivalent curl command:

```sh
curl https://api.cloudflare.com/client/v4/accounts/d1a65..../ai/run/@cf/bytedance/stable-diffusion-xl-lightning \
  -X POST \
  -H "Authorization: Bearer ****" \
  -d '{ "prompt": "cyberpunk cat" }' --output "cli.png"
```

With axios, the response needs to be requested as an arraybuffer for the written file to be valid:

```js
const response = await axios({
  method: "post",
  url,
  data: {prompt},
  headers,
  responseType: "arraybuffer",
});
```

Here's how I'm invoking `ai.run(…)` with `stream: true`:

```js
const ai = new Ai(env.AI);
const model = '@cf/openchat/openchat-3.5-0106';
const stream = await ai.run(model, {
  messages: getMessagesFromPrompt({ model, prompt }),
  stream: true,
});
return new Response(stream, {
  headers: { 'content-type': 'text/event-stream' },
});
```

And here's the Next.js server action that triggers the `DecompressionStream` error:

```ts
"use server";
import { getRequestContext } from "@cloudflare/next-on-pages";
import { Ai } from "@cloudflare/ai";

export async function createThread(prevState: any, formData: FormData) {
  const { env } = getRequestContext();
  const ai = new Ai(env.AI);
  const title = formData.get("title");
  const response = await ai.run(
    "@cf/stabilityai/stable-diffusion-xl-base-1.0",
    {
      prompt: title as string,
    }
  );
  // …
}
```

Running it throws:

```
Error: A Node.js API is used (DecompressionStream) which is not supported in the Edge Runtime. Learn more: https://nextjs.org/docs/api-reference/edge-runtime
```

The casting workaround looks like this. Without streaming:

```ts
const response = (await ai.run("@cf/stabilityai/stable-diffusion-xl-base-1.0", input)) as AiTextToImageOutput;
```

With `stream: true`:

```ts
const response = (await ai.run("@cf/stabilityai/stable-diffusion-xl-base-1.0", {...input, stream: true})) as ReadableStream<any>;
```

Passing an uploaded image to a model via the Workers binding:

```js
const formData = await request.formData();
const inputImg = formData.get("image");
const inputs = {
  image: [...new Uint8Array(await inputImg.arrayBuffer())],
};
const response = await ai.run(model, inputs);
```

The same input shape sent to the REST API as JSON:

```js
const inputs = {
  image: [...new Uint8Array(await inputImg.arrayBuffer())],
};
const fetchOptions = {
  method: "POST",
  body: JSON.stringify(inputs),
  headers: { authorization: `Bearer ${apiToken}`, "content-type": "application/json" }
};
const response = await fetch(`https://api.cloudflare.com/client/v4/accounts/${apiAccount}/ai/run/${model}`, fetchOptions);
```

Or sending the raw image bytes as the request body:

```js
const fetchOptions = {
  method: "POST",
  body: inputImg,
  headers: { authorization: `Bearer ${apiToken}`, "content-type": "image/png" }
};
const response = await fetch(`https://api.cloudflare.com/client/v4/accounts/${apiAccount}/ai/run/${model}`, fetchOptions);
```

Typing the response and storing it in R2:

```ts
const response: AiTextToImageOutput = await ai.run("@cf/stabilityai/stable-diffusion-xl-base-1.0", input);
await env.MY_BUCKET.put(newImageName, response);
```
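On the typing point raised earlier: the suggested narrowing could be expressed with function overloads. This is a hypothetical sketch, not `@cloudflare/ai`'s actual types: `run` stands in for `ai.run`, and `AiTextToImageOutput` is simplified to a `Uint8Array` alias.

```typescript
// Sketch of overloads that narrow the return type on the `stream` option.
// Everything here is illustrative; the real library's types and async
// behavior differ. Only the signatures matter for the idea.
type AiTextToImageOutput = Uint8Array;

interface RunOptions {
  prompt?: string;
  stream?: boolean;
}

function run(model: string, options: RunOptions & { stream: true }): ReadableStream<any>;
function run(model: string, options?: RunOptions & { stream?: false }): AiTextToImageOutput;
function run(model: string, options: RunOptions = {}): ReadableStream<any> | AiTextToImageOutput {
  // Placeholder implementation so the sketch is runnable.
  return options.stream ? new ReadableStream() : new Uint8Array();
}

// Inferred as AiTextToImageOutput, no `as` cast needed:
const image = run("@cf/stabilityai/stable-diffusion-xl-base-1.0", { prompt: "a cat" });
// Inferred as ReadableStream<any>:
const stream = run("@cf/stabilityai/stable-diffusion-xl-base-1.0", { prompt: "a cat", stream: true });
```

With overloads like these, the compiler picks the return type from the literal `stream` value at the call site, which is exactly what the casting workaround papers over.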