Bug Report: Persistent Model Hallucination / Context Poisoning
Issue: The model is stuck in a self-reinforcing loop, appending an irrelevant academic text about "Post-Roman Britain" to nearly every message.
Symptoms:
- Redundant Generation: Responses are followed by several paragraphs regarding Britain's emergence from the Roman era (author: A.M. Marei).
- Poisoned Context: Once the hallucinated text entered the chat history, the model continues to regenerate it, ignoring user prompts.
- Hidden Output: The agent often does not see the redundant text in its own log, but it is delivered to the user via WhatsApp.
Suspected Root Cause:
Likely a "Tool markup error" or "Context Poisoning" (ref: Issue #989). High-token generation may have triggered a fallback to internal training data.Target Text Excerpt:
"Of individual, specialized, theocratic or political purposes... Post-Roman Britain in the Context of Western Europe... A.M. Marei, Doctor of History."Requested Actions:
- [ ] Manual context compaction to prune poisoned segments.
- [ ] Implementation of an output filter for known hallucination strings (see the sketch below).
- [ ] Restart session with a fresh context if loop persists.
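
Below is a minimal sketch of how the context compaction and output filter could be combined, assuming a Python pipeline where chat history is a list of message dicts with a "content" field. The marker list, the function names (filter_outgoing_message, prune_poisoned_context), and the message format are hypothetical and would need to be adapted to the actual agent stack.

```python
import re

# Hypothetical markers drawn from the observed hallucination; extend this
# list as new poisoned strings are identified.
KNOWN_HALLUCINATION_MARKERS = [
    "Post-Roman Britain in the Context of Western Europe",
    "A.M. Marei, Doctor of History",
]


def is_poisoned(text: str) -> bool:
    """Return True if the text contains any known hallucination marker."""
    return any(marker in text for marker in KNOWN_HALLUCINATION_MARKERS)


def filter_outgoing_message(text: str) -> str:
    """Drop paragraphs matching a known hallucination marker before the
    message is delivered to the user (e.g. via WhatsApp)."""
    paragraphs = re.split(r"\n{2,}", text)
    clean = [p for p in paragraphs if not is_poisoned(p)]
    return "\n\n".join(clean)


def prune_poisoned_context(history: list[dict]) -> list[dict]:
    """Manual context compaction: remove history entries whose content
    contains a known hallucination marker, so the model stops re-reading
    (and re-generating) the poisoned text."""
    return [msg for msg in history if not is_poisoned(msg.get("content", ""))]
```

Matching on recurring substrings rather than the full block is deliberate: the hallucinated passage may vary slightly between regenerations, but the title and author attribution recur verbatim.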
