LocalAI/backend

Latest commit 4d90971424 by LocalAI [bot] (2025-08-03 21:03:20 +00:00):
chore: ⬆️ Update ggml-org/llama.cpp to d31192b4ee1441bbbecd3cbf9e02633368bdc4f5 (#5965)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Name                 | Last commit                                                                             | Date
cpp                  | chore: ⬆️ Update ggml-org/llama.cpp to d31192b4ee1441bbbecd3cbf9e02633368bdc4f5 (#5965) | 2025-08-03 21:03:20 +00:00
go                   | chore(stable-diffusion): bump, set GGML_MAX_NAME (#5961)                                | 2025-08-03 10:47:02 +02:00
python               | feat(rfdetr): add object detection API (#5923)                                          | 2025-07-27 22:02:51 +02:00
backend.proto        | feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935)            | 2025-07-30 22:42:34 +02:00
Dockerfile.golang    | fix(intel): Set GPU vendor on Intel images and cleanup (#5945)                          | 2025-07-31 19:44:46 +02:00
Dockerfile.llama-cpp | feat: do not bundle llama-cpp anymore (#5790)                                           | 2025-07-18 13:24:12 +02:00
Dockerfile.python    | feat: Add backend gallery (#5607)                                                       | 2025-06-15 14:56:52 +02:00
index.yaml           | fix(backend gallery): intel images for python-based backends, re-add exllama2 (#5928)   | 2025-07-28 15:15:19 +02:00