Runpod · 15mo ago
jules.dix

vLLM error: flash-attn

I get this warning; how do I fix it so I can use vllm-flash-attn, which is faster?

Current Qwen2-VL implementation has a bug with vllm-flash-attn inside vision module, so we use xformers backend instead. You can run `pip install flash-attn` to use flash-attention backend.
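For reference, a minimal sketch of what the warning message suggests: install the upstream flash-attn package and rerun the model. The model id below is only an illustrative example, not taken from the error.

```python
# Sketch only: per the warning, installing upstream flash-attn lets vLLM use the
# flash-attention backend for the Qwen2-VL vision module instead of xformers.
# First, in the environment where vLLM runs:
#   pip install flash-attn
from vllm import LLM, SamplingParams

# "Qwen/Qwen2-VL-7B-Instruct" is an assumed example; use whichever
# Qwen2-VL checkpoint produced the warning.
llm = LLM(model="Qwen/Qwen2-VL-7B-Instruct")

outputs = llm.generate(["Describe this picture."], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```

If the warning still appears after installing flash-attn, the vision module may be pinned to xformers in that vLLM version, in which case upgrading vLLM is the likely fix.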