Have others been able to run text-to-image models in Next.js apps that run on Pages? I have a Next.js app set up via C3 and a server action performing the following:
"use server";import { getRequestContext } from "@cloudflare/next-on-pages";import { Ai } from "@cloudflare/ai";export async function createThread(prevState: any, formData: FormData) { const { env } = getRequestContext(); const ai = new Ai(env.AI); const title = formData.get("title"); const response = await ai.run( "@cf/stabilityai/stable-diffusion-xl-base-1.0", { prompt: title as string, } );
"use server";import { getRequestContext } from "@cloudflare/next-on-pages";import { Ai } from "@cloudflare/ai";export async function createThread(prevState: any, formData: FormData) { const { env } = getRequestContext(); const ai = new Ai(env.AI); const title = formData.get("title"); const response = await ai.run( "@cf/stabilityai/stable-diffusion-xl-base-1.0", { prompt: title as string, } );
However, I'm running into:
```
Error: A Node.js API is used (DecompressionStream) which is not supported in the Edge Runtime. Learn more: https://nextjs.org/docs/api-reference/edge-runtime
```
I'm guessing there's some decompression going on under the hood in the AI interface?
If it matters, my next step is to write the response as a file to R2, so I'm not sure that I actually need to decompress for my purposes...
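For context, here's a rough sketch of the R2 step I have in mind. The binding name `MY_BUCKET` and the key scheme are placeholders I made up, not anything from my actual config; the `put` call would go inside the server action once `ai.run` succeeds:

```typescript
// Pure helper: build a deterministic R2 object key for a thread image.
// (Key scheme is just an example.)
function imageKey(threadId: string): string {
  return `thread-images/${threadId}.png`;
}

// Inside createThread, after ai.run(...) returns the image bytes/stream,
// something like this (MY_BUCKET is a hypothetical R2 binding):
//
// await env.MY_BUCKET.put(imageKey(threadId), response, {
//   httpMetadata: { contentType: "image/png" },
// });
```

So if the `DecompressionStream` usage is only about unpacking the model response, I'd be happy to skip it and pass the raw bytes straight through to R2.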