LocalAI/backend/cpp/llama-cpp

Latest commit: 791bc769c1 — chore(deps): bump llama.cpp to '1deee0f8d494981c32597dca8b5f8696d399b0f2' (#6421)
Author: Ettore Di Giacinto
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Date: 2025-10-10 09:51:22 +02:00
File             Last commit message                                                                 Date
patches          feat: do not bundle llama-cpp anymore (#5790)                                       2025-07-18 13:24:12 +02:00
CMakeLists.txt   feat: do not bundle llama-cpp anymore (#5790)                                       2025-07-18 13:24:12 +02:00
grpc-server.cpp  chore(deps): bump llama.cpp to '1deee0f8d494981c32597dca8b5f8696d399b0f2' (#6421)  2025-10-10 09:51:22 +02:00
Makefile         chore(deps): bump llama.cpp to '1deee0f8d494981c32597dca8b5f8696d399b0f2' (#6421)  2025-10-10 09:51:22 +02:00
package.sh       feat: do not bundle llama-cpp anymore (#5790)                                       2025-07-18 13:24:12 +02:00
prepare.sh       feat: do not bundle llama-cpp anymore (#5790)                                       2025-07-18 13:24:12 +02:00
run.sh           fix(llama-cpp/darwin): make sure to bundle libutf8 libs (#6060)                     2025-08-14 17:56:35 +02:00