Just curious whether we can manually assign more RAM to our endpoints. I want to use the 4090 for its high inference performance, but the RAM is just 24 GB, which could be a bit low for the video combine process.
Runpod