| Name | Latest commit | Commit message | Age |
|---|---|---|---|
| ext_server | d8def1ff94 | llm: allow gemma 2 to context shift (#5534) | 5 months ago |
| generate | efbf41ed81 | llm: dont link cuda with compat libs (#5621) | 5 months ago |
| llama.cpp @ a8db2a9ce6 | 571dc61955 | Update llama.cpp submodule to `a8db2a9c` (#5530) | 5 months ago |
| patches | 571dc61955 | Update llama.cpp submodule to `a8db2a9c` (#5530) | 5 months ago |
| filetype.go | d6f692ad1a | Add support for IQ1_S, IQ3_S, IQ2_S, IQ4_XS. IQ4_NL (#4322) | 7 months ago |
| ggla.go | cb42e607c5 | llm: speed up gguf decoding by a lot (#5246) | 6 months ago |
| ggml.go | 5a739ff4cb | chatglm graph | 5 months ago |
| ggml_test.go | cb42e607c5 | llm: speed up gguf decoding by a lot (#5246) | 6 months ago |
| gguf.go | cb42e607c5 | llm: speed up gguf decoding by a lot (#5246) | 6 months ago |
| llm.go | b51e3b63ac | Statically link c++ and thread lib | 5 months ago |
| llm_darwin_amd64.go | 58d95cc9bd | Switch back to subprocessing for llama.cpp | 8 months ago |
| llm_darwin_arm64.go | 58d95cc9bd | Switch back to subprocessing for llama.cpp | 8 months ago |
| llm_linux.go | 58d95cc9bd | Switch back to subprocessing for llama.cpp | 8 months ago |
| llm_windows.go | 058f6cd2cc | Move nested payloads to installer and zip file on windows | 8 months ago |
| memory.go | 8e0641a9bf | handle asymmetric embedding KVs | 6 months ago |
| memory_test.go | cb42e607c5 | llm: speed up gguf decoding by a lot (#5246) | 6 months ago |
| payload.go | 0e982bc1f4 | Fix corner cases on tmp cleaner on mac | 5 months ago |
| server.go | 791650ddef | sched: only error when over-allocating system memory (#5626) | 5 months ago |
| status.go | 4d71c559b2 | fix error detection by limiting model loading error parsing (#5472) | 5 months ago |