LocalAI/backend
Latest commit 2a9d675d62 by LocalAI [bot]:
chore: ⬆️ Update ggml-org/llama.cpp to 5c0eb5ef544aeefd81c303e03208f768e158d93c (#5959)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-02 23:35:24 +02:00
cpp                  | chore: ⬆️ Update ggml-org/llama.cpp to 5c0eb5ef544aeefd81c303e03208f768e158d93c (#5959)      | 2025-08-02 23:35:24 +02:00
go                   | chore: ⬆️ Update ggml-org/whisper.cpp to 0becabc8d68d9ffa6ddfba5240e38cd7a2642046 (#5958)    | 2025-08-02 21:04:13 +00:00
python               | feat(rfdetr): add object detection API (#5923)                                               | 2025-07-27 22:02:51 +02:00
backend.proto        | feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935)                 | 2025-07-30 22:42:34 +02:00
Dockerfile.golang    | fix(intel): Set GPU vendor on Intel images and cleanup (#5945)                               | 2025-07-31 19:44:46 +02:00
Dockerfile.llama-cpp | feat: do not bundle llama-cpp anymore (#5790)                                                | 2025-07-18 13:24:12 +02:00
Dockerfile.python    | feat: Add backend gallery (#5607)                                                            | 2025-06-15 14:56:52 +02:00
index.yaml           | fix(backend gallery): intel images for python-based backends, re-add exllama2 (#5928)        | 2025-07-28 15:15:19 +02:00