Cloudflare doesn't produce the AI models they host, so there isn't much they can do to fix this. You'd have to check whether the model makers themselves have a channel for reporting those issues. Honestly, though, it comes down entirely to how a model is prompted, and I doubt there's a formal way to report jailbreaks, because they're basically a given with LLMs right now: any LLM can be jailbroken with enough time and effort.