
# Deploy Ollama to Kubernetes

## Prerequisites

- A Kubernetes cluster and `kubectl` configured to access it
- The `ollama` CLI installed locally (used in the Test section below)

## Steps

1. Create the Ollama namespace, deployment, and service (a quick way to verify the rollout is shown after this step):

   ```bash
   kubectl apply -f cpu.yaml
   ```
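
The exact object names depend on what `cpu.yaml` defines; assuming it creates a Deployment named `ollama` in the `ollama` namespace (the namespace the rest of this guide uses), a quick way to confirm everything came up:

```bash
# List the objects created in the ollama namespace and wait for the
# Deployment to finish rolling out (assumes it is named "ollama").
kubectl -n ollama get all
kubectl -n ollama rollout status deployment/ollama
```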

## (Optional) Hardware Acceleration

Hardware acceleration in Kubernetes requires NVIDIA's [`k8s-device-plugin`](https://github.com/NVIDIA/k8s-device-plugin), which is deployed in Kubernetes as a DaemonSet. Follow the link for more details.
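
Once the plugin's DaemonSet is running, GPU nodes should advertise the `nvidia.com/gpu` resource it registers. A quick sanity check:

```bash
# Each GPU node should list nvidia.com/gpu under Capacity and Allocatable.
kubectl describe nodes | grep -i "nvidia.com/gpu"
```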

Once configured, create a GPU-enabled Ollama deployment:

```bash
kubectl apply -f gpu.yaml
```
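
To confirm the pod was actually granted a GPU, the checks below help; they assume `gpu.yaml` requests GPUs through the standard `nvidia.com/gpu` resource exposed by the device plugin:

```bash
# The pod should be Running, and its resource limits should include nvidia.com/gpu.
kubectl -n ollama get pods
kubectl -n ollama describe pods | grep -i "nvidia.com/gpu"
```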

## Test

1. Port forward the Ollama service to connect and use it locally (the forwarded port can also be queried directly, as shown after this list):

   ```bash
   kubectl -n ollama port-forward service/ollama 11434:80
   ```

2. Pull and run a model, for example `orca-mini:3b`:

   ```bash
   ollama run orca-mini:3b
   ```
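
With the port forward from step 1 still running, the in-cluster server can also be exercised directly over the forwarded port via Ollama's HTTP API, for example:

```bash
# The root endpoint answers with "Ollama is running".
curl http://localhost:11434

# Ask the in-cluster server for a completion via the REST API,
# using the model pulled in step 2.
curl http://localhost:11434/api/generate -d '{
  "model": "orca-mini:3b",
  "prompt": "Why is the sky blue?"
}'
```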