LocalAI/backend/cpp/llama-cpp

Latest commit 87e6de1989 by Ettore Di Giacinto, 2026-04-14 16:13:40 +02:00:
feat: wire transcription for llama.cpp, add streaming support (#9353)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File             Last commit                                                                                Date
CMakeLists.txt   fix: BMI2 crash on AVX-only CPUs (Intel Ivy Bridge/Sandy Bridge) (#7864)                   2026-01-06 00:13:48 +00:00
grpc-server.cpp  feat: wire transcription for llama.cpp, add streaming support (#9353)                      2026-04-14 16:13:40 +02:00
Makefile         feat: wire transcription for llama.cpp, add streaming support (#9353)                      2026-04-14 16:13:40 +02:00
package.sh       fix(llama.cpp): bundle libdl, librt, libpthread in llama-cpp backend (#9099)               2026-03-22 00:58:14 +01:00
prepare.sh       chore: ⬆️ Update ggml-org/llama.cpp to 7f8ef50cce40e3e7e4526a3696cb45658190e69a (#7402)     2025-12-01 07:50:40 +01:00
run.sh           feat(rocm): bump to 7.x (#9323)                                                            2026-04-12 08:51:30 +02:00