LocalAI/backend
Richard Palethorpe 8fe9fa98f2
fix(stablediffusion-cpp): Switch back to upstream and update (#5880)
* sync(stablediffusion-cpp): Switch back to upstream and update

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(stablediffusion-ggml): NULL terminate options array to prevent segfault

Signed-off-by: Richard Palethorpe <io@richiejp.com>

* fix(build): Add BUILD_TYPE and BASE_IMAGE to all backends

Signed-off-by: Richard Palethorpe <io@richiejp.com>

---------

Signed-off-by: Richard Palethorpe <io@richiejp.com>
2025-07-24 16:03:18 +02:00
Name                   Last commit                                                                                 Date
cpp                    chore: ⬆️ Update ggml-org/llama.cpp to a86f52b2859dae4db5a7a0bbc0f1ad9de6b43ec6 (#5894)     2025-07-24 15:02:37 +02:00
go                     fix(stablediffusion-cpp): Switch back to upstream and update (#5880)                        2025-07-24 16:03:18 +02:00
python                 fix: Diffusers and XPU fixes (#5737)                                                        2025-07-01 12:36:17 +02:00
backend.proto          feat: split piper from main binary (#5858)                                                  2025-07-19 08:31:33 +02:00
Dockerfile.golang      fix: rename Dockerfile.go --> Dockerfile.golang to avoid IDE errors (#5892)                 2025-07-23 21:33:26 +02:00
Dockerfile.llama-cpp   feat: do not bundle llama-cpp anymore (#5790)                                               2025-07-18 13:24:12 +02:00
Dockerfile.python      feat: Add backend gallery (#5607)                                                           2025-06-15 14:56:52 +02:00
index.yaml             chore(backend gallery): add name to 'diffusers' meta                                        2025-07-23 09:21:04 +02:00