LocalAI/backend/go
Ettore Di Giacinto 60633c4dd5
fix(stable-diffusion.ggml): force mp4 container in ffmpeg mux (#9435)
gen_video's ffmpeg subprocess relied on the filename extension to
choose the output container. Distributed LocalAI hands the backend a
staging path (e.g. /staging/localai-output-NNN.tmp) that is renamed to
.mp4 only after the backend returns, so ffmpeg saw a .tmp extension and
bailed with "Unable to choose an output format". By that point inference
had already completed and the frames had been piped in, leaving only the
cryptic "video inference failed (code 1)" at the API layer.

Pass -f mp4 explicitly so the container is selected by flag instead of
by filename suffix.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-20 00:41:54 +02:00
acestep-cpp chore: ⬆️ Update ace-step/acestep.cpp to e0c8d75a672fca5684c88c68dbf6d12f58754258 (#9261) 2026-04-07 00:39:24 +02:00
llm/llama feat: add distributed mode (#9124) 2026-03-30 00:47:27 +02:00
local-store feat: add distributed mode (#9124) 2026-03-30 00:47:27 +02:00
opus feat: add distributed mode (#9124) 2026-03-30 00:47:27 +02:00
piper fix(package.sh): drop redundant -a and -R 2026-02-05 16:39:38 +01:00
qwen3-tts-cpp feat(qwen3tts.cpp): add new backend (#9316) 2026-04-11 23:14:26 +02:00
sam3-cpp feat(rocm): bump to 7.x (#9323) 2026-04-12 08:51:30 +02:00
silero-vad fix(package.sh): drop redundant -a and -R 2026-02-05 16:39:38 +01:00
stablediffusion-ggml fix(stable-diffusion.ggml): force mp4 container in ffmpeg mux (#9435) 2026-04-20 00:41:54 +02:00
voxtral feat: wire transcription for llama.cpp, add streaming support (#9353) 2026-04-14 16:13:40 +02:00
whisper chore: ⬆️ Update ggml-org/whisper.cpp to 166c20b473d5f4d04052e699f992f625ea2a2fdd (#9403) 2026-04-18 00:42:32 +02:00