Does Cloudflare run any kind of moderation check on input sent to models? I'm looking into using Workers AI to identify content that may be CSAM-related, since OpenAI's free moderation API wasn't sufficient for my use case and mostly gave me false negatives. Hopefully the user-generated input I pass to the models won't trip some kind of flag/check, so I just want to make sure my use case is alright.
Hello, I built an LLM/AI chat platform designed for Cloudflare Workers AI, with features like function calling, image-to-text, text-to-image, and text-to-speech, all built on Cloudflare Workers. https://github.com/akazwz/antonai
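For anyone curious what one of these features looks like on Workers AI, here is a minimal text-to-image sketch. It is not taken from the antonai repo; the model ID `@cf/stabilityai/stable-diffusion-xl-base-1.0` and the binary-response handling are assumptions based on the public Workers AI model catalog.

```ts
// Minimal text-to-image Worker sketch (assumptions noted above; not from antonai).
export interface Env {
  AI: { run(model: string, input: unknown): Promise<ReadableStream | Uint8Array> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const prompt =
      new URL(request.url).searchParams.get("prompt") ?? "a watercolor fox";
    const image = await env.AI.run(
      "@cf/stabilityai/stable-diffusion-xl-base-1.0",
      { prompt },
    );
    // The model returns PNG image bytes (as a stream or byte array).
    return new Response(image, { headers: { "content-type": "image/png" } });
  },
};
```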
From the Llama Guard paper's abstract: "We introduce Llama Guard, an LLM-based input-output safeguard model geared towards Human-AI conversation use cases. Our model incorporates a safety risk..."
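Since the question is about running your own moderation layer on Workers AI, here is a minimal sketch of calling Llama Guard as a pre-filter. It assumes the `@cf/meta/llama-guard-3-8b` model is available to your account and that an `AI` binding is configured in your wrangler config; the exact response shape is illustrative, so check the model docs.

```ts
// Moderation pre-filter sketch. Assumes an `AI` binding and the
// @cf/meta/llama-guard-3-8b model; response shape may differ in practice.
export interface Env {
  AI: { run(model: string, input: unknown): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { text } = (await request.json()) as { text: string };
    // Llama Guard classifies the conversation and reports a safety verdict
    // plus any violated hazard categories.
    const verdict = await env.AI.run("@cf/meta/llama-guard-3-8b", {
      messages: [{ role: "user", content: text }],
    });
    return Response.json(verdict);
  },
};
```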
I think a good idea would be either to have max_tokens apply as the limit for each call, or to let the user configure the limit for each call separately,
the second option being something like initial_max_tokens, tool_max_tokens, and possibly trim_max_tokens (rough option names, but you get the point); see the sketch below.
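For concreteness, here is a hypothetical TypeScript sketch of the second option. Every name in it (`TokenLimits`, `initialMaxTokens`, `toolMaxTokens`, `executeTool`, the model ID) is illustrative rather than an existing API; it just shows separate limits applying to the initial call versus the follow-up calls that consume tool results.

```ts
// Hypothetical per-call token limits for a tool-calling loop.
// None of these names exist in any current API; they illustrate the request.
interface TokenLimits {
  initialMaxTokens: number; // cap for the first model call
  toolMaxTokens: number;    // cap for each follow-up call after tool results
}

type ToolCall = { name: string; arguments: unknown };
type ChatMessage = { role: string; content: string };
type ChatResult = { response?: string; tool_calls?: ToolCall[] };

// Structural stand-in for an AI binding so the sketch is self-contained.
interface AiLike {
  run(model: string, input: unknown): Promise<ChatResult>;
}

// Stub dispatcher; a real implementation would call the matching function.
async function executeTool(name: string, args: unknown): Promise<unknown> {
  return { name, args };
}

async function runWithTools(
  ai: AiLike,
  messages: ChatMessage[],
  tools: unknown[],
  limits: TokenLimits,
): Promise<ChatResult> {
  // The initial call gets its own token limit.
  let result = await ai.run("@cf/meta/llama-3.1-8b-instruct", {
    messages,
    tools,
    max_tokens: limits.initialMaxTokens,
  });

  // Each follow-up call, after tool output is fed back, gets a separate limit.
  while (result.tool_calls?.length) {
    for (const call of result.tool_calls) {
      const output = await executeTool(call.name, call.arguments);
      messages.push({ role: "tool", content: JSON.stringify(output) });
    }
    result = await ai.run("@cf/meta/llama-3.1-8b-instruct", {
      messages,
      tools,
      max_tokens: limits.toolMaxTokens,
    });
  }
  return result;
}
```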