LocalAI/scripts
Ettore Di Giacinto e502e51d78 feat(llama.cpp): add turboquant support
This PR adds the patchset from the great work of @TheTom in
https://github.com/TheTom/llama-cpp-turboquant and creates a pipeline
that updates the patches against upstream automatically.

It also creates the necessary scaffolding for doing this with other
patch sources.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2026-04-01 17:57:03 +00:00
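The pipeline described above re-applies a patchset on top of the latest upstream and regenerates the patch files. The following is a minimal, self-contained sketch of that refresh flow, not the actual LocalAI scripts: it uses throwaway local repositories as stand-ins for llama.cpp upstream and the turboquant patchset, and all paths and commit messages are assumptions for illustration.

```shell
set -eu
tmp=$(mktemp -d)

# 1. A stand-in "upstream" repo (in reality this would be llama.cpp).
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=up -c user.email=up@local \
    commit -q --allow-empty -m "base"

# 2. Produce a patch on top of it (stand-in for the turboquant patchset).
git clone -q "$tmp/upstream" "$tmp/work"
echo "turboquant" > "$tmp/work/feature.txt"
git -C "$tmp/work" add feature.txt
git -C "$tmp/work" -c user.name=dev -c user.email=dev@local \
    commit -q -m "feat: turboquant"
mkdir -p "$tmp/patches"
git -C "$tmp/work" format-patch -q -1 -o "$tmp/patches"

# 3. Upstream moves forward, as it does between pipeline runs.
git -C "$tmp/upstream" -c user.name=up -c user.email=up@local \
    commit -q --allow-empty -m "upstream change"

# 4. Refresh: re-apply the patchset on the new upstream tip and
#    regenerate the .patch files from the rebased commits.
git clone -q "$tmp/upstream" "$tmp/refresh"
git -C "$tmp/refresh" -c user.name=ci -c user.email=ci@local \
    am -q "$tmp/patches"/*.patch
git -C "$tmp/refresh" format-patch -q -1 -o "$tmp/patches.new"
ls "$tmp/patches.new"
```

When `git am` fails at step 4, the patches have drifted from upstream and need a manual rebase; an automated pipeline would typically abort (`git am --abort`) and flag the conflict rather than regenerate anything.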
build chore(backends): do not bundle cuda target directory (#7982) 2026-01-12 07:51:09 +01:00
patch_utils feat(llama.cpp): add turboquant support 2026-04-01 17:57:03 +00:00
changed-backends.js chore(ci): Scope tests extras backend tests (#9170) 2026-03-30 17:46:07 +00:00
latest_hf.py fix(scripts): minor fixup to gallery scripts 2024-07-13 11:36:20 +02:00
model_gallery_info.py chore(scripts): allow to specify quants (#5430) 2025-05-22 11:53:30 +02:00
prepare-libs.sh ci(Makefile): adds tts in binary releases (#2695) 2024-07-05 23:19:24 +02:00