LocalAI/backend/cpp
Latest commit: Ettore Di Giacinto, 424acd66ad
feat(llama.cpp): allow setting cache-ram and ctx_shift (#7009)
* feat(llama.cpp): allow setting cache-ram and ctx_shift

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>

* Apply suggestion from @mudler

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-11-02 17:33:29 +01:00
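The commit above exposes llama.cpp's KV-cache RAM limit and context-shift behavior to LocalAI's model configuration. A hypothetical model YAML using such settings might look like the sketch below; the field names `cache_ram` and `ctx_shift` are assumptions inferred from the commit subject, not verified against the merged schema, so check the actual #7009 diff for the real option names and units.

```yaml
# Hypothetical LocalAI model definition.
# The cache_ram and ctx_shift fields are assumed from the commit subject;
# consult the merged PR for the actual schema.
name: my-llama-model
backend: llama-cpp
parameters:
  model: my-model.gguf
context_size: 8192
cache_ram: 2048   # cap on llama.cpp KV-cache RAM usage (unit assumed)
ctx_shift: true   # shift the context window instead of failing when it fills
```

In llama.cpp itself, context shifting lets the server discard the oldest tokens and keep generating once the context window is exhausted, while the cache-RAM setting bounds how much host memory the KV cache may consume; this change passes those knobs through from LocalAI's per-model configuration.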
grpc       fix: speedup git submodule update with --single-branch (#2847)     2024-07-13 22:32:25 +02:00
llama-cpp  feat(llama.cpp): allow setting cache-ram and ctx_shift (#7009)     2025-11-02 17:33:29 +01:00