LocalAI/backend/cpp
jongames f2b9452ec4
fix: reranking models limited to 512 tokens in llama.cpp backend (#6344)

Signed-off-by: JonGames <18472148+jongames@users.noreply.github.com>
2025-09-25 23:32:07 +00:00
grpc fix: speedup git submodule update with --single-branch (#2847) 2024-07-13 22:32:25 +02:00
llama-cpp fix: reranking models limited to 512 tokens in llama.cpp backend (#6344) 2025-09-25 23:32:07 +00:00