LocalAI/backend

Latest commit: 1929eb2894 by LocalAI [bot] — 2025-07-19 08:52:07 +02:00
chore: ⬆️ Update ggml-org/llama.cpp to bf9087f59aab940cf312b85a67067ce33d9e365a (#5860)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
Name                  Last commit message                                                                        Last commit date
cpp                   chore: ⬆️ Update ggml-org/llama.cpp to bf9087f59aab940cf312b85a67067ce33d9e365a (#5860)    2025-07-19 08:52:07 +02:00
go                    feat: split piper from main binary (#5858)                                                 2025-07-19 08:31:33 +02:00
python                fix: Diffusers and XPU fixes (#5737)                                                       2025-07-01 12:36:17 +02:00
backend.proto         feat: split piper from main binary (#5858)                                                 2025-07-19 08:31:33 +02:00
Dockerfile.go         feat: split piper from main binary (#5858)                                                 2025-07-19 08:31:33 +02:00
Dockerfile.llama-cpp  feat: do not bundle llama-cpp anymore (#5790)                                              2025-07-18 13:24:12 +02:00
Dockerfile.python     feat: Add backend gallery (#5607)                                                          2025-06-15 14:56:52 +02:00
index.yaml            feat: split piper from main binary (#5858)                                                 2025-07-19 08:31:33 +02:00