Ollama on RunPod
Hey all,
I am attempting to set up Ollama on an Nvidia GeForce RTX 4090 pod. The commands for that are pretty straightforward (see this article: https://docs.runpod.io/tutorials/pods/run-ollama). All I have to do is run the following two commands in the pod's web terminal after it starts up, and I'm good to go:
1) (curl -fsSL https://ollama.com/install.sh | sh && ollama serve > ollama.log 2>&1) &
2) ollama run [model_name]
However, what I would like to do is have these commands run automatically when the pod starts. My initial thought was to enter the above two commands into the 'Container Start Command' field on the pod deployment page (as seen in the attached image), but I'm not sure how to write these start-up commands correctly and would be grateful for any assistance.
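My best guess so far is something along these lines, typed as a single line into that field (untested, so please correct me). I've swapped 'ollama run' for 'ollama pull' since there is no interactive terminal at container start, added a short sleep so the server has time to come up, and a trailing sleep so the container doesn't exit once the command finishes; [model_name] is still a placeholder:

# Rough guess for the 'Container Start Command' field (assumes it accepts a shell one-liner)
bash -c "curl -fsSL https://ollama.com/install.sh | sh && (ollama serve > ollama.log 2>&1 &) && sleep 5 && ollama pull [model_name] && sleep infinity"

If anyone knows whether this needs to be chained with the image's default start script (e.g. /start.sh) to keep SSH and the web terminal working, I'd appreciate pointers on that too.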

