LocalAI/backend/cpp/llama-cpp
Latest commit: 8bb1e8f21f by LocalAI [bot], 2026-04-21 11:15:45 +02:00
chore: ⬆️ Update ggml-org/llama.cpp to cf8b0dbda9ac0eac30ee33f87bc6702ead1c4664 (#9448)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
File             Last commit                                                                                Date
CMakeLists.txt   fix(turboquant): resolve common.h by detecting llama-common vs common target (#9413)      2026-04-18 20:30:28 +02:00
grpc-server.cpp  fix(vision): propagate mtmd media marker from backend via ModelMetadata (#9412)           2026-04-18 20:30:13 +02:00
Makefile         chore: ⬆️ Update ggml-org/llama.cpp to cf8b0dbda9ac0eac30ee33f87bc6702ead1c4664 (#9448)   2026-04-21 11:15:45 +02:00
package.sh       fix(llama.cpp): bundle libdl, librt, libpthread in llama-cpp backend (#9099)              2026-03-22 00:58:14 +01:00
prepare.sh       chore: ⬆️ Update ggml-org/llama.cpp to 7f8ef50cce40e3e7e4526a3696cb45658190e69a (#7402)   2025-12-01 07:50:40 +01:00
run.sh           feat(rocm): bump to 7.x (#9323)                                                           2026-04-12 08:51:30 +02:00