meta-llama/Meta-Llama-3-8B-Instruct serverless - Runpod
Runpod • 2y ago • 1 reply
james3000
meta-llama/Meta-Llama-3-8B-Instruct serverless
I am a bit confused. I am trying to get this tested using Python, but the tutorial at https://docs.runpod.io/serverless/workers/vllm/get-started seems to point me to using openai. Can we still use the openai Python library, or do we need to use another one to connect to the endpoint? Can anyone help me, please?
Get started | RunPod Documentation
Deploy a Serverless Endpoint for large language models (LLMs) with RunPod, a simple and efficient way to run vLLM Workers with minimal configuration.
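Per the linked guide, Runpod's vLLM worker exposes an OpenAI-compatible API, so the official openai client can usually be pointed at it by setting `base_url` to the endpoint's `/openai/v1` route. Below is a minimal, hedged sketch using only the standard library instead of the openai package, so the request shape is visible; `your_endpoint_id`, the API key, and the `build_chat_request` helper are placeholders/assumptions, not part of the thread.

```python
# Sketch: calling a Runpod serverless vLLM endpoint through its
# OpenAI-compatible /chat/completions route, stdlib only.
import json
import urllib.request

# Assumed route shape for OpenAI-compatible serverless endpoints.
RUNPOD_BASE = "https://api.runpod.ai/v2/{endpoint_id}/openai/v1"


def build_chat_request(endpoint_id, api_key, model, messages):
    """Build a POST request for the chat completions route."""
    url = RUNPOD_BASE.format(endpoint_id=endpoint_id) + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    req = build_chat_request(
        "your_endpoint_id",             # placeholder endpoint ID
        "YOUR_RUNPOD_API_KEY",          # placeholder API key
        "meta-llama/Meta-Llama-3-8B-Instruct",
        [{"role": "user", "content": "Hello!"}],
    )
    # Uncomment to actually send it (requires a live endpoint and key):
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp))
```

The same URL should work as `base_url` for the openai client itself (e.g. `OpenAI(base_url=..., api_key=...)`), which is what the tutorial's examples use.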
Similar Threads
Length of output of serverless meta-llama/Llama-3.1-8B-Instruct
Runpod / ⚡|serverless • 11mo ago
using meta-llama/Meta-Llama-3.1-8B-Instruct in servelss
Runpod / ⚡|serverless • 3d ago
I am trying to deploy a "meta-llama/Llama-3.1-8B-Instruct" model on Serverless vLLM
Runpod / ⚡|serverless • 11mo ago
Llama-3.1-Nemotron-70B-Instruct in Serverless
Runpod / ⚡|serverless • 16mo ago