LocalAI/backend/cpp/llama-cpp
Latest commit d25145e641 by LocalAI [bot], 2025-07-27 21:08:32 +00:00:

chore: ⬆️ Update ggml-org/llama.cpp to bf78f5439ee8e82e367674043303ebf8e92b4805 (#5927)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Name             Last commit                                                                               Date
patches          feat: do not bundle llama-cpp anymore (#5790)                                             2025-07-18 13:24:12 +02:00
CMakeLists.txt   feat: do not bundle llama-cpp anymore (#5790)                                             2025-07-18 13:24:12 +02:00
grpc-server.cpp  feat: do not bundle llama-cpp anymore (#5790)                                             2025-07-18 13:24:12 +02:00
Makefile         chore: ⬆️ Update ggml-org/llama.cpp to bf78f5439ee8e82e367674043303ebf8e92b4805 (#5927)   2025-07-27 21:08:32 +00:00
package.sh       feat: do not bundle llama-cpp anymore (#5790)                                             2025-07-18 13:24:12 +02:00
prepare.sh       feat: do not bundle llama-cpp anymore (#5790)                                             2025-07-18 13:24:12 +02:00
run.sh           feat: refactor build process, drop embedded backends (#5875)                              2025-07-22 16:31:04 +02:00