but it's not supported yet, I think. However, some providers, such as OpenAI, use a thinking model by default, and in that case this processor produces slow responses and consumes a lot of tokens.
If this processor could accept a parameter to control reasoning effort (or something similar), the user experience might improve. Do you have any plans for this?
I wrote a custom processor to work around this, but it feels a bit like reinventing the wheel...
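For reference, here is a minimal sketch of the kind of workaround I mean. All names here are hypothetical and illustrative, not the library's actual API; the idea is just a wrapper that merges a reasoning-effort setting into every model call.

```python
# Hypothetical sketch: wrap a model-call function so a reasoning-effort
# setting is forwarded with every request. Names are illustrative only.

def make_processor(model_call, reasoning_effort=None):
    """Return a processor that merges reasoning_effort into each call."""
    def processor(prompt, **kwargs):
        params = dict(kwargs)
        if reasoning_effort is not None:
            # e.g. OpenAI's reasoning models accept a reasoning_effort
            # parameter with values like "low" | "medium" | "high"
            params["reasoning_effort"] = reasoning_effort
        return model_call(prompt, **params)
    return processor

# Stub model call for demonstration: echoes the params it received.
def fake_model_call(prompt, **params):
    return {"prompt": prompt, "params": params}

fast = make_processor(fake_model_call, reasoning_effort="low")
result = fast("summarize this")
print(result["params"]["reasoning_effort"])  # low
```

If the processor itself exposed such a parameter, users could dial thinking down for latency-sensitive paths without writing wrappers like this.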