LocalAI/backend

Latest commit: 4e40a8d1ed by LocalAI [bot]
chore: ⬆️ Update ggml-org/llama.cpp to a0552c8beef74e843bb085c8ef0c63f9ed7a2b27 (#5992)

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-07 21:13:14 +00:00
cpp                   chore: ⬆️ Update ggml-org/llama.cpp to a0552c8beef74e843bb085c8ef0c63f9ed7a2b27 (#5992)  2025-08-07 21:13:14 +00:00
go                    chore(stable-diffusion): bump, set GGML_MAX_NAME (#5961)                                  2025-08-03 10:47:02 +02:00
python                feat(transformers): add support to Dia (#5991)                                            2025-08-07 21:51:52 +02:00
backend.proto         feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935)              2025-07-30 22:42:34 +02:00
Dockerfile.golang     fix(intel): Set GPU vendor on Intel images and cleanup (#5945)                            2025-07-31 19:44:46 +02:00
Dockerfile.llama-cpp  feat: do not bundle llama-cpp anymore (#5790)                                             2025-07-18 13:24:12 +02:00
Dockerfile.python     feat: Add backend gallery (#5607)                                                         2025-06-15 14:56:52 +02:00
index.yaml            feat(backends): add KittenTTS (#5977)                                                     2025-08-06 12:38:45 +02:00