Jeffrey Morgan 791650ddef sched: only error when over-allocating system memory (#5626) 5 months ago
ext_server d8def1ff94 llm: allow gemma 2 to context shift (#5534) 5 months ago
generate efbf41ed81 llm: dont link cuda with compat libs (#5621) 5 months ago
llama.cpp @ a8db2a9ce6 571dc61955 Update llama.cpp submodule to `a8db2a9c` (#5530) 5 months ago
patches 571dc61955 Update llama.cpp submodule to `a8db2a9c` (#5530) 5 months ago
filetype.go d6f692ad1a Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS. IQ4_NL (#4322) 7 months ago
ggla.go cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 6 months ago
ggml.go 5a739ff4cb chatglm graph 5 months ago
ggml_test.go cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 6 months ago
gguf.go cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 6 months ago
llm.go b51e3b63ac Statically link c++ and thread lib 5 months ago
llm_darwin_amd64.go 58d95cc9bd Switch back to subprocessing for llama.cpp 8 months ago
llm_darwin_arm64.go 58d95cc9bd Switch back to subprocessing for llama.cpp 8 months ago
llm_linux.go 58d95cc9bd Switch back to subprocessing for llama.cpp 8 months ago
llm_windows.go 058f6cd2cc Move nested payloads to installer and zip file on windows 8 months ago
memory.go 8e0641a9bf handle asymmetric embedding KVs 6 months ago
memory_test.go cb42e607c5 llm: speed up gguf decoding by a lot (#5246) 6 months ago
payload.go 0e982bc1f4 Fix corner cases on tmp cleaner on mac 5 months ago
server.go 791650ddef sched: only error when over-allocating system memory (#5626) 5 months ago
status.go 4d71c559b2 fix error detection by limiting model loading error parsing (#5472) 5 months ago