
Hey all, I'm getting 400s when using `@cf/openai/gpt-oss-120b`. Is this a known thing? I'm calling it through the Chat Completions API:

import { createClient as createOpenAIClient } from "./client";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

// Thin wrapper around the OpenAI SDK client (built in ./client) that sends a
// chat.completions request and returns the text of the first choice.
export async function callCloudflareAI(
  model: string,
  messages: ChatCompletionMessageParam[],
  options: {
    temperature?: number;
    max_tokens?: number;
  } = {}
): Promise<string> {
  const client = createOpenAIClient();

  const response = await client.chat.completions.create({
    model,
    messages,
    temperature: options.temperature,
    max_tokens: options.max_tokens
  });

  // Return the first choice's content, or an empty string if nothing came back.
  return response.choices[0]?.message?.content || "";
}


Should I be using the Responses API instead?
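
For reference, this is roughly what I'd switch to if Responses is the way to go. Just a sketch, assuming the same client from ./client can be reused against that endpoint and that the installed openai SDK is recent enough to have responses.create / output_text; the helper name is made up:

import { createClient as createOpenAIClient } from "./client";

// Hypothetical counterpart to callCloudflareAI, using the Responses API instead
// of Chat Completions.
export async function callCloudflareAIResponses(
  model: string,
  input: string,
  options: {
    temperature?: number;
    max_output_tokens?: number;
  } = {}
): Promise<string> {
  const client = createOpenAIClient();

  // The Responses API takes `input` (a string or a list of input items) rather
  // than `messages`, and `max_output_tokens` rather than `max_tokens`.
  const response = await client.responses.create({
    model,
    input,
    temperature: options.temperature,
    max_output_tokens: options.max_output_tokens
  });

  // output_text is the SDK's convenience accessor that concatenates the
  // response's text output items.
  return response.output_text;
}

Thanks.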