Will 2 GPUs fine-tune 2 times faster than 1 GPU on Axolotl? - Runpod
Runpod • 2y ago • 22 replies
Volko
Will 2 GPUs fine-tune 2 times faster than 1 GPU on Axolotl?
Solution
It seems
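For context on the question: Axolotl distributes training across GPUs through Hugging Face `accelerate`, so a two-GPU run is launched roughly as below. This is a hedged sketch, not the thread's answer: `config.yml` is a placeholder for your own Axolotl config, and a near-2x speedup is not guaranteed.

```shell
# Launch Axolotl data-parallel training on 2 GPUs via accelerate.
# Throughput approaches 2x only when per-GPU compute dominates the
# overhead of gradient synchronization and data loading.
CUDA_VISIBLE_DEVICES=0,1 accelerate launch --num_processes 2 \
    -m axolotl.cli.train config.yml
```

In plain data parallelism each GPU processes its own micro-batches and gradients are averaged every step, so the effective batch size doubles; wall-clock speedup depends on how much of each step is spent in that synchronization.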
Similar Threads
- Axolotl Fine Tune Error (flash_attn_2_cuda.cpython-311-x86_64-linux-gnu.so: undefined symbol) (Runpod / ⛅|pods, 12mo ago)
- The "Fine tune an LLM with Axolotl on RunPod" tutorial should mention uploading public key first (Runpod / ⛅|pods, 16mo ago)
- How to fetch more than 8 GPUs on RunPod (2 nodes) (Runpod / ⛅|pods, 2y ago)
- Is AWQ faster than GGUF? (Runpod / ⛅|pods, 2y ago)