LocalAI/backend/cpp/llama-cpp

Latest commit: 6410c99bf2 by Ettore Di Giacinto
fix(llama-cpp): correctly calculate embeddings (#6259)
* chore(tests): check embeddings differ in llama.cpp
* fix(llama.cpp): use the correct field for embedding
* fix(llama.cpp): use embedding type none
* chore(tests): add test-cases in aio-e2e suite

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-09-13 23:11:54 +02:00
patches          feat: do not bundle llama-cpp anymore (#5790)  2025-07-18 13:24:12 +02:00
CMakeLists.txt   feat: do not bundle llama-cpp anymore (#5790)  2025-07-18 13:24:12 +02:00
grpc-server.cpp  fix(llama-cpp): correctly calculate embeddings (#6259)  2025-09-13 23:11:54 +02:00
Makefile         chore: ⬆️ Update ggml-org/llama.cpp to aa0c461efe3603639af1a1defed2438d9c16ca0f (#6261)  2025-09-13 21:11:18 +00:00
package.sh       feat: do not bundle llama-cpp anymore (#5790)  2025-07-18 13:24:12 +02:00
prepare.sh       feat: do not bundle llama-cpp anymore (#5790)  2025-07-18 13:24:12 +02:00
run.sh           fix(llama-cpp/darwin): make sure to bundle libutf8 libs (#6060)  2025-08-14 17:56:35 +02:00