LocalAI/.github/workflows
Ettore Di Giacinto 294f7022f3
feat: do not bundle llama-cpp anymore (#5790)
* Build llama.cpp separately

* WIP

* WIP

* WIP

* Start to attach some tests

* Add git and small fixups

* fix: correctly autoload external backends

* Try to run AIO tests

* Slightly update the Makefile help targets

* Adapt auto-bumper

* Try to run Linux tests

* Add llama-cpp to the build pipelines

* Add default capability (for CPU)

* Drop llama-cpp-specific logic from the backend loader

* Drop grpc install in CI for tests

* Fixups

* Pass the backends path for tests

* Build protogen at start

* fix(tests): set backends path consistently

* Correctly configure the backends path

* Try to build for Darwin

* WIP

* Compile for Metal on arm64/darwin

* Try to run the build off from cross-arch

* Add the nvidia-l4t and CPU llama-cpp backends to the backend index

* Also build darwin-x86 for llama-cpp

* Temporarily disable arm64 builds

* Test backend build on PRs

* Fix up the build-backend reusable workflow

* Pass skip-drivers through

* Use crane

* Skip drivers

* Fixups

* x86 Darwin

* Add packaging step for llama.cpp

* Fixups

* Fix leftover from bark-cpp extraction

* Try to fix hipblas build

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-07-18 13:24:12 +02:00
File                   | Last commit message                                                        | Last updated
disabled               | ci: disable comment-pr until it's fixed                                    | 2024-07-19 19:00:36 +02:00
backend.yml            | feat: do not bundle llama-cpp anymore (#5790)                              | 2025-07-18 13:24:12 +02:00
backend_build.yml      | feat: do not bundle llama-cpp anymore (#5790)                              | 2025-07-18 13:24:12 +02:00
bump_deps.yaml         | feat: do not bundle llama-cpp anymore (#5790)                              | 2025-07-18 13:24:12 +02:00
bump_docs.yaml         | chore(deps): Bump peter-evans/create-pull-request from 6 to 7 (#3518)      | 2024-09-10 01:52:16 +00:00
checksum_checker.yaml  | chore(ci): move also other jobs to public runner (#5683)                   | 2025-06-18 22:00:12 +02:00
dependabot_auto.yml    | chore(deps): bump dependabot/fetch-metadata from 2.3.0 to 2.4.0 (#5355)    | 2025-05-12 22:01:19 +02:00
deploy-explorer.yaml   | chore(deps): bump appleboy/scp-action from 0.1.7 to 1.0.0 (#5265)          | 2025-04-28 22:36:30 +00:00
generate_grpc_cache.yaml | chore(ci): move also other jobs to public runner (#5683)                 | 2025-06-18 22:00:12 +02:00
generate_intel_image.yaml | fix(sycl): kernel not found error by forcing -fsycl (#5115)             | 2025-04-03 16:22:59 +02:00
image-pr.yml           | chore(ci): ⚠️ fix latest tag by using docker meta action (#5722)           | 2025-06-26 18:40:25 +02:00
image.yml              | fix(ci): enable tag-latest to auto (#5738)                                 | 2025-06-27 18:17:01 +02:00
image_build.yml        | chore(ci): ⚠️ fix latest tag by using docker meta action (#5722)           | 2025-06-26 18:40:25 +02:00
labeler.yml            | fix(seed): generate random seed per-request if -1 is set (#1952)           | 2024-04-03 22:25:47 +02:00
localaibot_automerge.yml | fix - correct checkout versions (#2029)                                  | 2024-04-13 19:01:17 +02:00
notify-models.yaml     | chore(deps): bump GrantBirki/git-diff-action from 2.8.0 to 2.8.1 (#5564)   | 2025-06-04 08:41:47 +02:00
notify-releases.yaml   | ci: use gemma3 for notifications of releases                               | 2025-04-18 10:19:52 +02:00
prlint.yaml            | ci: drop description linting                                               | 2024-07-12 18:23:13 +02:00
release.yaml           | feat: do not bundle llama-cpp anymore (#5790)                              | 2025-07-18 13:24:12 +02:00
secscan.yaml           | chore(deps): bump securego/gosec from 2.22.4 to 2.22.5 (#5663)             | 2025-06-16 23:12:27 +00:00
stalebot.yml           | Update stalebot.yml                                                        | 2025-06-22 08:51:13 +02:00
test-extra.yml         | feat(chatterbox): add new backend (#5524)                                  | 2025-05-30 10:52:55 +02:00
test.yml               | feat: do not bundle llama-cpp anymore (#5790)                              | 2025-07-18 13:24:12 +02:00
update_swagger.yaml    | chore(deps): Bump peter-evans/create-pull-request from 6 to 7 (#3518)      | 2024-09-10 01:52:16 +00:00
yaml-check.yml         | chore(backend gallery): add description for remaining backends (#5679)     | 2025-06-17 22:21:44 +02:00