LocalAI/backend

Latest commit fa284f7445 by LocalAI [bot] (2025-07-21 09:14:09 +02:00):
chore: ⬆️ Update ggml-org/llama.cpp to 2be60cbc2707359241c2784f9d2e30d8fc7cdabb (#5867)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
cpp chore: ⬆️ Update ggml-org/llama.cpp to 2be60cbc2707359241c2784f9d2e30d8fc7cdabb (#5867) 2025-07-21 09:14:09 +02:00
go feat: split whisper from main binary (#5863) 2025-07-20 22:52:45 +02:00
python fix: Diffusers and XPU fixes (#5737) 2025-07-01 12:36:17 +02:00
backend.proto feat: split piper from main binary (#5858) 2025-07-19 08:31:33 +02:00
Dockerfile.go feat: split whisper from main binary (#5863) 2025-07-20 22:52:45 +02:00
Dockerfile.llama-cpp feat: do not bundle llama-cpp anymore (#5790) 2025-07-18 13:24:12 +02:00
Dockerfile.python feat: Add backend gallery (#5607) 2025-06-15 14:56:52 +02:00
index.yaml Update index.yaml 2025-07-20 22:54:12 +02:00