@cf/meta/llama-3.1-8b-instruct-fp8 has a 32k token limit.

llama-2 and CF AI seem to be useless to me at the moment. I tried prompts like "create me an image that is 512x512, with text that says 'my text here' and a red background, with yellow text" and "red background, text saying 'my text here', arial font, yellow color", and neither came out right.
new Blob([res]) returns a blob with raw content of [object Object]? When I JSON.stringify the response from ai.run it's just {}, whereas a real Uint8Array would stringify to something like {"0":0}, and console.log(res instanceof Uint8Array) logs "false". So const resp = await this.ai.run(model, input); is returning a ReadableStream<Uint8Array> rather than a Uint8Array like the types say, and I'm having to coerce it into the correct type before being able to manipulate it. (fromUint8Array(buffer) is from js-base64.) A shorter coercion: const buffer = await new Response(resp).arrayBuffer();
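A minimal sketch of that Response-based coercion, assuming resp may come back as either a ReadableStream or raw bytes (model and input are placeholders here):

const resp = await env.AI.run(model, input);
// Wrapping the value in a Response collects a ReadableStream (or bytes) into an ArrayBuffer.
const arrayBuffer = await new Response(resp as BodyInit).arrayBuffer();
const bytes = new Uint8Array(arrayBuffer); // now a real Uint8Array you can manipulate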
I'm getting VECTOR_UPSERT_ERROR (code = 40012): invalid vector for id="sdfksd", expected 768 dimensions, and got 1024 dimensions from this code:

const embeddingResult = await env.AI.run('@cf/baai/bge-large-en-v1.5', {
  text: value,
});
const embeddingBatch: number[][] = embeddingResult.data;
await env.VECTORIZE.upsert(
  embeddingBatch.map((embedding, index) => ({
    id: sourceId,
    values: embedding,
    namespace: 'default',
    metadata: {
      id: sessionId,
    },
  }))
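The mismatch: @cf/baai/bge-large-en-v1.5 emits 1024-dimensional vectors, while the Vectorize index was evidently created with 768 dimensions. A sketch of one fix, assuming you want to keep the 768-dimension index: switch to @cf/baai/bge-base-en-v1.5, which produces 768-dimensional embeddings (the alternative is recreating the index with dimensions = 1024). The per-vector id below is also my assumption, since upserting every element under the same sourceId would collide:

const embeddingResult = await env.AI.run('@cf/baai/bge-base-en-v1.5', {
  text: value,
});
const embeddingBatch: number[][] = embeddingResult.data;
await env.VECTORIZE.upsert(
  embeddingBatch.map((embedding, index) => ({
    id: `${sourceId}-${index}`, // assumed: each vector needs a unique id
    values: embedding,
    namespace: 'default',
    metadata: { id: sessionId },
  }))
);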
);

Back to the image generation: this is the Worker I was testing:

export default {
  async fetch(request, env) {
    const inputs = {
      prompt: "create an image that is 512x512. the background should be a solid, plain, yellow color. text over the background should say 'Learn How to Pronounce MySQL' in English. Text should be red and use an Arial font.",
      negative_prompt: "There should not be any other effects or images.",
      height: 512,
      width: 1024,
    };
    const response = await env.AI.run(
      "@cf/bytedance/stable-diffusion-xl-lightning",
      inputs
    );
    return new Response(response, {
      headers: {
        "content-type": "image/png",
      },
    });
  },
};
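One thing to check in that snippet: the prompt asks for 512x512 but the request sets width: 1024, and the model follows the numeric parameters, not the prose. A minimal correction, assuming 512x512 was the intent:

const inputs = {
  prompt: "create an image that is 512x512. ...", // same prompt as above
  negative_prompt: "There should not be any other effects or images.",
  height: 512,
  width: 512, // was 1024; now matches the size the prompt asks for
};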
And this is the snippet where new Blob([res]) came from:

const input: TTIInput = {
  prompt: prompt,
  strength: strength,
};
const res = await env.AI.run("@cf/stabilityai/stable-diffusion-xl-base-1.0", input);
const blob = new Blob([res]);
console.log(await blob.text());

The {"0":0} mentioned earlier is just how JSON.stringify renders a typed array, which is how you can tell res is not one:

let a = new Uint8Array(1);
console.log(JSON.stringify(a)); // prints {"0":0}

Here is the workaround I ended up with to coerce the stream:

import { fromUint8Array } from 'js-base64';

const resp = (await this.ai.run(model, input)) as unknown;
let buffer: Uint8Array;
if (resp instanceof ReadableStream) {
  // Collect every chunk the stream yields...
  const chunks: Uint8Array[] = [];
  // eslint-disable-next-line no-restricted-syntax
  for await (const chunk of resp as ReadableStream<Uint8Array>) {
    chunks.push(chunk);
  }
  // ...then copy them into one contiguous Uint8Array.
  buffer = new Uint8Array(chunks.reduce((acc, chunk) => acc + chunk.length, 0));
  let offset = 0;
  // eslint-disable-next-line no-restricted-syntax
  for (const chunk of chunks) {
    buffer.set(chunk, offset);
    offset += chunk.length;
  }
} else if (resp instanceof Uint8Array) {
  buffer = resp;
} else {
  throw new Error("Unknown return type for the ai run");
}
const base64Image = fromUint8Array(buffer);
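From there, one assumed usage (not from the original thread) is returning the encoded image as a data URL in a JSON response:

// Assumed usage: wrap the base64 string in a data URL for a JSON payload.
return new Response(
  JSON.stringify({ image: `data:image/png;base64,${base64Image}` }),
  { headers: { "content-type": "application/json" } }
);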