Commit History

Author    SHA1    Message    Date
  Jeffrey Morgan efbf41ed81 llm: dont link cuda with compat libs (#5621) 5 months ago
  Jeffrey Morgan 4e262eb2a8 remove `GGML_CUDA_FORCE_MMQ=on` from build (#5588) 5 months ago
  Daniel Hiltgen 0bacb30007 Workaround broken ROCm p2p copy 5 months ago
  Jeffrey Morgan 4607c70641 llm: add `-DBUILD_SHARED_LIBS=off` to common cpu cmake flags (#5520) 5 months ago
  Jeffrey Morgan 2cc854f8cb llm: fix missing dylibs by restoring old build behavior on Linux and macOS (#5511) 5 months ago
  Jeffrey Morgan 8f8e736b13 update llama.cpp submodule to `d7fd29f` (#5475) 5 months ago
  Daniel Hiltgen b0930626c5 Add back lower level parallel flags 6 months ago
  Jeffrey Morgan 152fc202f5 llm: update llama.cpp commit to `7c26775` (#4896) 6 months ago
  Daniel Hiltgen ab8c929e20 Add ability to skip oneapi generate 6 months ago
  Daniel Hiltgen 646371f56d Merge pull request #3278 from zhewang1-intc/rebase_ollama_main 7 months ago
  Wang,Zhe fd5971be0b support ollama run on Intel GPUs 7 months ago
  Daniel Hiltgen c48c1d7c46 Port cuda/rocm skip build vars to linux 7 months ago
  Roy Yang 5f73c08729 Remove trailing spaces (#3889) 8 months ago
  Daniel Hiltgen cc5a71e0e3 Merge pull request #3709 from remy415/custom-gpu-defs 8 months ago
  Jeremy 440b7190ed Update gen_linux.sh 8 months ago
  Jeremy 52f5370c48 add support for custom gpu build flags for llama.cpp 8 months ago
  Jeremy 7c000ec3ed adds support for OLLAMA_CUSTOM_GPU_DEFS to customize GPU build flags 8 months ago
  Jeremy 8aec92fa6d rearranged conditional logic for static build, dockerfile updated 8 months ago
  Jeremy 70261b9bb6 move static build to its own flag 8 months ago
  Blake Mizerany 1524f323a3 Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564) 8 months ago
  Blake Mizerany fccf3eecaa build.go: introduce a friendlier way to build Ollama (#3548) 8 months ago
  Jeffrey Morgan 63efa075a0 update generate scripts with new `LLAMA_CUDA` variable, set `HIP_PLATFORM` to avoid compiler errors (#3528) 8 months ago
  Daniel Hiltgen 58d95cc9bd Switch back to subprocessing for llama.cpp 9 months ago
  Jeremy dfc6721b20 add support for libcudart.so for CUDA devices (adds Jetson support) 9 months ago
  Daniel Hiltgen d4c10df2b0 Add Radeon gfx940-942 GPU support 9 months ago
  Daniel Hiltgen bc13da2bfe Avoid rocm runner and dependency clash 9 months ago
  Daniel Hiltgen 3dc1bb6a35 Harden for deps file being empty (or short) 9 months ago
  Daniel Hiltgen 6c5ccb11f9 Revamp ROCm support 10 months ago
  Daniel Hiltgen 6d84f07505 Detect AMD GPU info via sysfs and block old cards 10 months ago
  mraiser 4c4c730a0a Merge branch 'ollama:main' into main 11 months ago