I get this error. How do I fix it and use vllm-flash-attn, which is faster?

"Current Qwen2-VL implementation has a bug with vllm-flash-attn inside vision module, so we use xformers backend instead. You can run `pip install flash-attn` to use flash-attention backend."
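For reference, a minimal sketch of the workaround the message suggests, assuming the warning comes from vLLM's Qwen2-VL vision module and that installing the standalone flash-attn package is what it asks for. The model name and parameters below are placeholders, not part of the original post:

```python
# A minimal sketch, assuming a CUDA build of PyTorch and a recent vLLM.
# The warning says the vision module falls back to xformers unless the
# standalone flash-attn package is installed, so install it first, e.g.:
#   pip install flash-attn --no-build-isolation
# (a prebuilt wheel matching your torch/CUDA version avoids a long compile)

from vllm import LLM, SamplingParams

# Placeholder model and settings; substitute your own checkpoint.
llm = LLM(model="Qwen/Qwen2-VL-7B-Instruct", max_model_len=4096)

outputs = llm.generate(
    ["Describe the image in one sentence."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```

After flash-attn is installed, restarting the vLLM server or script should make the vision module pick it up instead of xformers; if the warning persists, it is likely the known bug in that Qwen2-VL version rather than a configuration issue.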