You wouldn't want to use AI to compress, you'd want to use something like WebP, AVIF, etc
You wouldn't want to use AI to compress, you'd want to use something like WebP, AVIF, etc
hermes-2-pro-mistral-7b model. I have a very basic app based on CF's function calling tutorial. I made the app about a month ago and has been working fine since. But over the past few days it has stopped working and it appears that the model has changed the format of its response. Here is how it is responding now:<tool_call> but the response no longer returns a tool_calls prop, so my code isn't calling anything it only shows the malformed response.hermes-2-pro-mistral-7b with @cf/meta/llama-3.3-70b-instruct-fp8-fast then it works fine. Something must have changed with the Mistral model@cf/meta/llama-guard-3-8b. What's the proper way to tweak this to modify the UNSAFE CONTENT CATEGORIES? A lot of sexually explict stuff is getting through when the prompts aren't very long. I want a really robust NSFW content filter but asis this model isn't working wellimage, mask) for several models, but the only model that does inpainting is @cf/runwayml/stable-diffusion-v1-5-inpainting. image and mask when calling it. The format is mask: [...new Uint8Array(await maskImage.arrayBuffer())], where maskImage could be the result of e.g. await fetch(/* mask image url */).@cf/openai/whisper-large-v3-turbo to convert audio into text. It works fine when recording on Pixel 9 in AAC format, but fails when recording on an older Oneplus 7T with AAC (I'm using Flutter to build the app). Using an AMR WB format works fine, though. Also all the AAC formats work fine when given to Google Gemini Flash/Pro modelsMPEG-4 AAC Low complexity and MPEG-4 High Efficiency AAC formats with identical resultsffprobe says this about the one recorded on Oneplus,@cf/baai/bge-m3 model but I keep getting the error below. Previously I was using the @cf/baai/bge-base-en-v1.5 model and everything was just fine.@cf/deepseek-ai/deepseek-r1-distill-qwen-32b missing the opening <think> tagThis is due to a quirk in the prompt template, which forces the <think> tag in order to require the model to think through it's response, but which results in this behavior
User Prompt: show me the color of an avocado?
AI result is: {
response: '<tool_call>\n' +
"{'arguments': {'r': 69, 'g': 139, 'b': 69}, 'name': 'switchLightColor'}\n" +
'</tool_call>\n' +
'\n' +
'I have chosen a color that represents an avocado. The color of an avocado is typically a pale green with a hint of creaminess. To represent this, I have selected a color with a higher green value (139) and balanced amounts of red (69) and blue (69) values. This creates a color that is reminiscent of the natural hue of an avocado, bringing the essence of the fruit into the room.',
usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
}err : AiError: 5006: Error: required properties at '/' are 'contexts'data: {"response":""}
data: {"response":""}
data: {"response":"","usage":{"prompt_tokens":0,"completion_tokens":1,"total_tokens":1}}