LocalAI/backend/cpp/llama-cpp
Latest commit 4c870288d9 by LocalAI [bot], 2026-03-28 18:18:06 +01:00:
chore: ⬆️ Update ggml-org/llama.cpp to 59d840209a5195c2f6e2e81b5f8339a0637b59d9 (#9144)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
CMakeLists.txt fix: BMI2 crash on AVX-only CPUs (Intel Ivy Bridge/Sandy Bridge) (#7864) 2026-01-06 00:13:48 +00:00
grpc-server.cpp feat: inferencing default, automatic tool parsing fallback and wire min_p (#9092) 2026-03-22 00:57:15 +01:00
Makefile chore: ⬆️ Update ggml-org/llama.cpp to 59d840209a5195c2f6e2e81b5f8339a0637b59d9 (#9144) 2026-03-28 18:18:06 +01:00
package.sh fix(llama.cpp): bundle libdl, librt, libpthread in llama-cpp backend (#9099) 2026-03-22 00:58:14 +01:00
prepare.sh chore: ⬆️ Update ggml-org/llama.cpp to 7f8ef50cce40e3e7e4526a3696cb45658190e69a (#7402) 2025-12-01 07:50:40 +01:00
run.sh fix(llama-cpp/darwin): make sure to bundle libutf8 libs (#6060) 2025-08-14 17:56:35 +02:00