LocalAI/backend/cpp/llama-cpp
Latest commit: 59af928379 by LocalAI [bot], 2025-09-06 21:05:07 +00:00
chore: ⬆️ Update ggml-org/llama.cpp to c4df49a42d396bdf7344501813e7de53bc9e7bb3 (#6209)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Name             Last commit                                                                                Date
patches          feat: do not bundle llama-cpp anymore (#5790)                                              2025-07-18 13:24:12 +02:00
CMakeLists.txt   feat: do not bundle llama-cpp anymore (#5790)                                              2025-07-18 13:24:12 +02:00
grpc-server.cpp  feat(flash_attention): set auto for flash_attention in llama.cpp (#6168)                   2025-08-31 17:59:09 +02:00
Makefile         chore: ⬆️ Update ggml-org/llama.cpp to c4df49a42d396bdf7344501813e7de53bc9e7bb3 (#6209)    2025-09-06 21:05:07 +00:00
package.sh       feat: do not bundle llama-cpp anymore (#5790)                                              2025-07-18 13:24:12 +02:00
prepare.sh       feat: do not bundle llama-cpp anymore (#5790)                                              2025-07-18 13:24:12 +02:00
run.sh           fix(llama-cpp/darwin): make sure to bundle libutf8 libs (#6060)                            2025-08-14 17:56:35 +02:00
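The version-bump commit above touches only the Makefile, which suggests the upstream llama.cpp revision is pinned by commit hash and fetched at build time rather than vendored (consistent with the "do not bundle llama-cpp anymore" change). Below is a minimal sketch of what such a pinned checkout could look like in a preparation script; the variable name LLAMA_VERSION and the clone location are assumptions for illustration, with only the repository URL and commit hash taken from the listing above.

    #!/bin/bash
    set -e

    # Hypothetical pin variable; only the hash itself comes from the commit message.
    LLAMA_VERSION=${LLAMA_VERSION:-c4df49a42d396bdf7344501813e7de53bc9e7bb3}

    # Clone the upstream repository once, then check out the pinned revision,
    # so a version bump only has to change the hash above.
    if [ ! -d llama.cpp ]; then
        git clone https://github.com/ggml-org/llama.cpp
    fi
    cd llama.cpp
    git fetch origin
    git checkout "$LLAMA_VERSION"

Pinning to an exact commit (instead of tracking a branch) keeps builds reproducible: the automated bump PRs seen in this directory's history then amount to editing a single hash.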