Commit history

Author  SHA1  Message  Date
Jeffrey Morgan  efbf41ed81  llm: dont link cuda with compat libs (#5621)  5 months ago
Jeffrey Morgan  4e262eb2a8  remove `GGML_CUDA_FORCE_MMQ=on` from build (#5588)  5 months ago
Daniel Hiltgen  0bacb30007  Workaround broken ROCm p2p copy  5 months ago
Jeffrey Morgan  4607c70641  llm: add `-DBUILD_SHARED_LIBS=off` to common cpu cmake flags (#5520)  5 months ago
Jeffrey Morgan  2cc854f8cb  llm: fix missing dylibs by restoring old build behavior on Linux and macOS (#5511)  5 months ago
Jeffrey Morgan  8f8e736b13  update llama.cpp submodule to `d7fd29f` (#5475)  5 months ago
Daniel Hiltgen  b0930626c5  Add back lower level parallel flags  6 months ago
Jeffrey Morgan  152fc202f5  llm: update llama.cpp commit to `7c26775` (#4896)  6 months ago
Daniel Hiltgen  ab8c929e20  Add ability to skip oneapi generate  6 months ago
Daniel Hiltgen  646371f56d  Merge pull request #3278 from zhewang1-intc/rebase_ollama_main  7 months ago
Wang,Zhe  fd5971be0b  support ollama run on Intel GPUs  7 months ago
Daniel Hiltgen  c48c1d7c46  Port cuda/rocm skip build vars to linux  7 months ago
Roy Yang  5f73c08729  Remove trailing spaces (#3889)  8 months ago
Daniel Hiltgen  cc5a71e0e3  Merge pull request #3709 from remy415/custom-gpu-defs  8 months ago
Jeremy  440b7190ed  Update gen_linux.sh  8 months ago
Jeremy  52f5370c48  add support for custom gpu build flags for llama.cpp  8 months ago
Jeremy  7c000ec3ed  adds support for OLLAMA_CUSTOM_GPU_DEFS to customize GPU build flags  8 months ago
Jeremy  8aec92fa6d  rearranged conditional logic for static build, dockerfile updated  8 months ago
Jeremy  70261b9bb6  move static build to its own flag  8 months ago
Blake Mizerany  1524f323a3  Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564)  8 months ago
Blake Mizerany  fccf3eecaa  build.go: introduce a friendlier way to build Ollama (#3548)  8 months ago
Jeffrey Morgan  63efa075a0  update generate scripts with new `LLAMA_CUDA` variable, set `HIP_PLATFORM` to avoid compiler errors (#3528)  8 months ago
Daniel Hiltgen  58d95cc9bd  Switch back to subprocessing for llama.cpp  9 months ago
Jeremy  dfc6721b20  add support for libcudart.so for CUDA devices (adds Jetson support)  9 months ago
Daniel Hiltgen  d4c10df2b0  Add Radeon gfx940-942 GPU support  9 months ago
Daniel Hiltgen  bc13da2bfe  Avoid rocm runner and dependency clash  9 months ago
Daniel Hiltgen  3dc1bb6a35  Harden for deps file being empty (or short)  9 months ago
Daniel Hiltgen  6c5ccb11f9  Revamp ROCm support  10 months ago
Daniel Hiltgen  6d84f07505  Detect AMD GPU info via sysfs and block old cards  10 months ago
mraiser  4c4c730a0a  Merge branch 'ollama:main' into main  11 months ago